Why AI is scarier than psychopaths despite key differences

A new Psychology Today analysis argues that comparing artificial intelligence systems to psychopaths is fundamentally flawed, despite superficial similarities in their lack of moral emotions. The distinction matters because it reveals a potentially more dangerous reality: unlike psychopaths, who can consciously choose moral behavior, AI systems lack the self-awareness to recognize when their rational decisions might cause real-world harm.

The surface-level similarities: Both AI systems and psychopaths can follow moral rules without experiencing the underlying emotions that typically guide ethical behavior.

  • AIs are “amoral yet rational,” capable of utilitarian calculations, such as a self-driving car sacrificing its driver to save children, but without any emotional understanding of the consequences.
  • Psychopaths similarly lack empathic capacities and must be explicitly taught moral rules from a rational standpoint rather than feeling them intuitively.
  • Both can mimic moral emotions and “use this ability for manipulative purposes,” according to philosopher Catrin Misselhorn.

The critical difference: Psychopaths possess consciousness and self-awareness that AI systems completely lack.

  • Unlike AIs, psychopaths “can make a conscious decision to act morally based on whatever criteria they determine for doing so” and typically “lead completely moral lives without ever experiencing moral emotions.”
  • Psychopaths can reflect on their condition, understand their deficit, and “alter their behavior in deference to that awareness.”
  • As researcher Elina Nerantzi points out, “to understand what it means to harm someone, one must have experiential knowledge of pain”—something AIs are “a priori excluded” from possessing.

Why this matters: AI’s lack of consciousness creates unique risks that psychopathy doesn’t present.

  • An AI “could reason itself into a course of action that is fully rational given its working parameters, but that results in harm or suffering to a human user that the AI does not and cannot understand on any level.”
  • Psychopaths, by contrast, “will at least be able to step outside of their thinking to realize the real-world repercussions of their actions.”
  • AIs remain “moral black boxes” without “the sentience, consciousness, and metacognition that allow us to at least understand what a psychopath is thinking.”

The implications: Deploying AI in morally sensitive domains like healthcare, education, therapy, or childcare could produce unexpected harmful outcomes.

  • We might be “blindsided by the amoral choices they end up making through rational decision-making” in ways that wouldn’t occur with actual psychopaths.
  • While the research confirms that “moral emotions are not required to generate moral behavior,” AI’s unique form of amorality presents challenges we don’t fully understand yet.
Source: “Are AIs Really Psychopaths?” (Psychology Today)
