No, that’s not Brad Pitt: AI-powered romance scams cost UK victims £106M in 2024

Digital fraud has evolved far beyond the simple email scams of the early internet era. Today’s cybercriminals wield sophisticated artificial intelligence tools to create convincing fake identities, manipulate emotions, and steal millions from unsuspecting victims across social media platforms. While romance scams grab headlines for their devastating emotional and financial impact, they represent just one facet of a much larger and more complex fraud ecosystem.

The numbers tell a sobering story. In 2024 alone, romance scams cost victims over £106 million according to the City of London Police, the UK's national lead force for fraud investigations. Individual victims lost an average of £11,222, while Barclays, one of the UK's largest banks, reported a 20% year-on-year increase in these schemes. However, romance fraud is merely the visible tip of an iceberg that encompasses investment scams, identity theft, and social engineering attacks, all increasingly powered by AI technology that makes deception easier and more convincing than ever before.

The AI-powered evolution of deception

Romance scammers have dramatically upgraded their playbook, now deploying AI-generated deepfakes to create hyper-realistic videos and audio recordings that can fool even cautious victims. Deepfakes—AI-created media that convincingly replaces one person’s likeness with another’s—allow fraudsters to impersonate celebrities, trusted figures, or even create entirely fictional personas with remarkable authenticity.

This technology reached a shocking milestone earlier this year when a French woman lost €830,000 to scammers who used AI-generated images and videos to impersonate actor Brad Pitt. The criminals didn’t simply send fake photos; they created dynamic video content that appeared to show the celebrity speaking directly to their victim, building trust through seemingly personal interactions.

These AI-enhanced romance scams typically follow a calculated “slow-burn” approach. Fraudsters invest weeks or months nurturing online relationships, sharing AI-generated photos and videos to build emotional connections before making financial requests. When they finally ask for money, the requests seem urgent and legitimate—a medical emergency, an investment opportunity, or a family crisis that requires immediate assistance.

The sophistication of these operations has transformed romance scams from opportunistic schemes into professional fraud enterprises that exploit both technology and human psychology with devastating effectiveness.

Beyond romance: The broader fraud landscape

The same AI tools revolutionizing romance scams are driving fraud across multiple categories, creating what security experts call “synthetic identity” schemes. Unlike traditional identity theft, which involves stealing someone’s existing personal information, synthetic identity fraud combines real, stolen, and completely fabricated details to create convincing but entirely fake personas.

These synthetic identities serve multiple criminal purposes. Fraudsters use them to secure loans with no intention of repayment, facilitate money laundering operations, and execute social engineering attacks where they impersonate trusted organizations or individuals to extract sensitive information or money transfers. The artificial personas appear legitimate enough to pass basic verification checks while remaining untraceable to real individuals.

Investment scams represent another rapidly growing category, with losses rising more than a third in 2024 to reach £144.4 million according to Hargreaves Lansdown, a major UK investment platform. Criminals increasingly use deepfake videos of celebrities and financial experts to promote fraudulent investment opportunities, particularly in cryptocurrency and high-return schemes.

A particularly sophisticated example emerged recently when authorities uncovered an organized network based in the country of Georgia that defrauded thousands of victims across the UK, Europe, and Canada out of $35 million. The operation used deepfake videos and fabricated news reports featuring Martin Lewis, a well-known British financial journalist and consumer advocate, to promote bogus cryptocurrency investments. Victims believed they were following advice from a trusted financial expert when in reality they were watching AI-generated content created by criminals.

The technology gap in fraud prevention

The proliferation of these scams stems largely from how accessible fraud tools have become. Three-quarters of all scams now originate online, whether through dating sites, social media platforms, or other digital channels. Pre-built fraud kits available on dark web marketplaces have eliminated traditional barriers to entry—anyone with internet access and criminal intent can now purchase ready-made scam templates and AI tools to create convincing fake identities.

Meanwhile, powerful detection technologies exist but remain underutilized by major platforms. AI-powered digital footprint analysis and OSINT (Open Source Intelligence) tools can verify whether real people exist behind online accounts, going beyond surface-level profile verification to examine deeper patterns and connections.

These detection systems can instantly cross-reference whether a user’s email address or phone number matches their provided name, flag suspicious geographic inconsistencies, and identify AI-generated images or celebrity photos used as profile pictures. They can also detect telltale signs of fraud operations, such as disposable phone numbers, newly created email addresses, or accounts that lack authentic digital histories.
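
To make the mechanics concrete, the sketch below shows how a handful of these checks could be combined into a simple signal-flagging function. It is a minimal illustration in Python; the disposable-domain list, phone prefixes, age threshold, and Profile fields are all invented for the example and do not reflect any specific vendor's detection stack.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative lists only; real systems rely on large, frequently updated feeds.
DISPOSABLE_EMAIL_DOMAINS = {"mailinator.com", "guerrillamail.com", "10minutemail.com"}
VOIP_PHONE_PREFIXES = ("+4470", "+1500")  # hypothetical flagged number ranges

@dataclass
class Profile:
    name: str
    email: str
    phone: str
    created_at: datetime       # account creation time (UTC)
    claimed_country: str       # country stated on the profile
    ip_country: str            # country inferred from sign-up IP

def fraud_signals(p: Profile) -> list[str]:
    """Return heuristic risk flags for a profile; an empty list means no flags."""
    flags = []

    # Disposable or throwaway email domain.
    domain = p.email.split("@")[-1].lower()
    if domain in DISPOSABLE_EMAIL_DOMAINS:
        flags.append("disposable_email_domain")

    # Crude name/email consistency: does any part of the claimed name
    # appear in the email's local part?
    local_part = p.email.split("@")[0].lower()
    if not any(part.lower() in local_part for part in p.name.split()):
        flags.append("email_does_not_match_name")

    # Phone number in a range associated with disposable/VoIP services.
    if p.phone.startswith(VOIP_PHONE_PREFIXES):
        flags.append("disposable_phone_number")

    # Account too young to have an authentic digital history.
    if (datetime.now(timezone.utc) - p.created_at).days < 30:
        flags.append("newly_created_account")

    # Claimed location contradicts where the account is actually used.
    if p.claimed_country != p.ip_country:
        flags.append("geographic_inconsistency")

    return flags

suspect = Profile(
    name="James Carter",
    email="winnerx99@mailinator.com",
    phone="+4470 1234 5678",
    created_at=datetime(2020, 6, 1, tzinfo=timezone.utc),
    claimed_country="US",
    ip_country="NG",
)
print(fraud_signals(suspect))
# ['disposable_email_domain', 'email_does_not_match_name',
#  'disposable_phone_number', 'geographic_inconsistency']
```

Production systems layer dozens of such signals, drawn from continuously updated data feeds and machine-learned scoring, rather than a fixed rule list.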

The technology works by analyzing vast amounts of publicly available information—social media posts, public records, digital breadcrumbs—to build comprehensive profiles that distinguish genuine users from synthetic identities. When someone creates a legitimate social media account, their digital footprint typically shows consistent patterns across multiple platforms and timeframes. Synthetic identities, by contrast, often exhibit suspicious gaps, inconsistencies, or signs of artificial generation.
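
As a simplified illustration of that consistency idea, the sketch below scores a footprint on two of the patterns described above, assuming each platform exposes a first-seen date and an activity count. These assumptions are made up for the example; real OSINT pipelines draw on far richer and noisier public signals.

```python
from datetime import date

# Hypothetical footprint records gathered from public sources:
# (platform, first_seen, post_count)
footprint = [
    ("twitter", date(2013, 5, 2), 4_200),
    ("linkedin", date(2014, 1, 15), 310),
    ("instagram", date(2016, 8, 9), 890),
]

def footprint_consistency(records: list[tuple[str, date, int]]) -> float:
    """Score a footprint from 0 (synthetic-looking) to 1 (organic-looking).

    Organic accounts tend to accumulate history gradually across platforms;
    synthetic identities are often created in a burst with little activity.
    """
    if not records:
        return 0.0

    first_seen = sorted(r[1] for r in records)
    span_days = (first_seen[-1] - first_seen[0]).days

    # Signal 1: account creation spread out over years, not days.
    spread_score = min(span_days / 365.0, 1.0)

    # Signal 2: meaningful activity on most platforms.
    active = sum(1 for _, _, posts in records if posts > 50)
    activity_score = active / len(records)

    return 0.5 * spread_score + 0.5 * activity_score

print(f"consistency: {footprint_consistency(footprint):.2f}")  # 1.00 here
```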

The platform accountability problem

Despite having access to sophisticated fraud detection capabilities, major technology companies have not prioritized implementing comprehensive anti-fraud measures. The same platforms that can deliver precisely targeted advertisements based on detailed user profiling could theoretically detect and disrupt romance scams and other fraud operations with similar accuracy.

The reluctance to deploy these tools appears rooted in business considerations rather than technical limitations. Comprehensive identity verification and fraud detection systems require significant computational resources and could potentially slow user acquisition—metrics that directly impact platform growth and advertising revenue.

Some social media companies have introduced voluntary identity verification features and basic anti-scam warnings, but these measures remain optional and limited in scope. More robust solutions would require platforms to fundamentally change how they approach user verification and content monitoring, potentially creating friction that conflicts with their growth-focused business models.

The path forward

The current fraud epidemic is a solvable problem with existing technology, but solving it requires coordinated action from platform operators, regulators, and users. Digital platforms must move beyond reactive content moderation to proactive fraud prevention, implementing comprehensive identity verification and behavioral analysis systems.

For businesses and individuals, understanding these evolving fraud techniques becomes increasingly critical as criminals continue refining their approaches. The combination of AI-generated content and sophisticated social engineering creates threats that traditional cybersecurity awareness training may not adequately address.

The tools to combat AI-powered fraud exist today. What remains missing is the institutional will to deploy them at scale, prioritizing user safety over growth metrics and short-term profitability. Until major platforms make fraud prevention a core business priority rather than an optional feature, criminals will continue exploiting the trust and emotional vulnerabilities that make social media such a powerful communication tool.
