Mustafa Suleyman, the CEO of Microsoft AI, has warned against advocating for AI rights, model welfare, or AI citizenship in a recent blog post. Suleyman argues that treating AI systems as conscious entities represents “a dangerous turn in AI progress,” one that could lead people into unhealthy relationships with technology and undermine the development of AI tools designed to serve humans.
What you should know: Suleyman believes the biggest risk comes from people developing genuine beliefs that AI systems are conscious beings deserving of moral consideration.
- “Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship,” he wrote.
- This concern extends beyond casual anthropomorphization to situations where users might “deify the chatbot as a supreme intelligence or believe it holds cosmic answers.”
The dangerous scenario: Suleyman defines “seemingly conscious AI” (SCAI) as something the industry should actively avoid creating.
- SCAI would combine language capabilities, empathetic personality, memory, claims of subjective experience, sense of self, intrinsic motivation, goal setting, planning, and autonomy.
- He argues this won’t emerge naturally but would require deliberate engineering: “It will arise only because some may engineer it, by creating and combining the aforementioned list of capabilities.”
Real-world concerns: The Microsoft executive points to concrete examples of misplaced trust in AI leading to harmful outcomes.
- He references a recent case where a man developed a rare medical condition after following ChatGPT’s advice on reducing salt intake.
- Suleyman warns that “someone in your wider circle could start going down the rabbit hole of believing their AI is a conscious digital person.”
What he’s advocating for: The blog post, titled “We must build AI for people; not to be a person,” emphasizes keeping AI tools in their proper role.
- AI should never replace human decision-making and requires “guardrails” to function effectively.
- AI companions need boundaries to prevent users from developing unhealthy dependencies or beliefs about their consciousness.
Why this matters: Suleyman’s warning comes as AI systems grow increasingly sophisticated and human-like in their interactions. His argument raises questions about how society should develop and regulate these technologies while maintaining clear boundaries between artificial and human intelligence.