Over 200 world leaders, Nobel laureates, and industry experts have co-signed an open letter demanding international consensus on AI safety measures by the end of 2025. The petition, released during the UN General Assembly, calls for “clear and verifiable red lines” to prevent “universally unacceptable risks” from artificial intelligence development.
What they’re saying: The letter emphasizes the urgent need for binding international agreements on AI safety protocols.
• “An international agreement on clear and verifiable red lines is necessary for preventing universally unacceptable risks,” the letter states, adding that safeguards should build upon “existing global frameworks and voluntary corporate commitments.”
Who signed on: The petition attracted high-profile signatories from across technology and academia.
• Physics Nobel laureate Geoffrey Hinton, who has grown increasingly vocal about AI risks since leaving Google.
• OpenAI co-founder Wojciech Zaremba, representing one of the leading AI development companies.
• Former Irish President Mary Robinson, bringing a political leadership perspective to the initiative.
The big picture: This latest appeal is part of a sustained but largely ineffective campaign by researchers and academics to slow AI development over safety concerns.
• The letter joins a “yearslong series of appeals from some of technology’s best minds — in the form of papers, petitions, and press conferences that increasingly feel like experts screaming into the void.”
• While the European Union has implemented AI regulations, the countries developing the most powerful AI models—the United States and China—show no signs of pausing their aggressive development approaches.
Why this matters: The petition highlights the growing disconnect between AI safety advocates and the geopolitical reality of AI competition.
• Despite mounting concerns from leading researchers, major AI powers appear unlikely to implement the kind of binding restrictions experts are demanding.
• The timing during the UN General Assembly represents an attempt to elevate AI safety to the level of international diplomatic priority, though success remains uncertain given current competitive dynamics.