6 reasons why “alignment-is-hard” discourse seems alien to human intuitions, and vice-versa
TL;DR: AI alignment has a culture clash. On one side, the “technical-alignment-is-hard” / “rational agents” school of thought argues that we should expect future powerful AIs to be ruthless, power-seeking consequentialists. On the other side, people observe that both humans and LLMs are obviously capable of behaving like, well, not that. The latter group accuses the former of head-in-the-clouds abstract theorizing gone off the rails, while the former accuses the latter of mindlessly assuming that the future will always be the same as the present, rather than trying to understand things …