Straining to keep up? AI safety teams lag behind rapid tech advancements

Major AI companies, including OpenAI and Google, have significantly scaled back their safety testing protocols even as they develop increasingly powerful models, raising serious concerns about the industry’s commitment to safety. The retreat from rigorous evaluation comes as competitive pressure in the AI industry intensifies, with companies apparently prioritizing market advantage over comprehensive risk assessment, a worrying development as these systems grow more capable and more consequential.

The big picture: OpenAI has dramatically shortened its safety testing timeframe from months to days before releasing new models, while simultaneously dropping assessments for mass manipulation and disinformation risks.

  • The Financial Times reports that testers of OpenAI’s o3 model were given only days to evaluate a system that previously would have undergone months of safety testing.
  • One tester told the Financial Times: “We had more thorough safety testing when [the technology] was less important.”

Industry pattern: OpenAI’s safety shortcuts appear to be part of a broader industry trend, with other major AI developers following similar paths.

  • Neither Google’s new Gemini 2.5 Pro nor Meta’s new Llama 4 was released with comprehensive safety details in its technical report and evaluations.
  • These developments represent a significant regression in safety protocols despite the increasing capabilities of AI systems.

Why it’s happening: Fortune journalist Jeremy Kahn attributes this industry-wide shift to intense market competition, with companies viewing thorough safety testing as a competitive disadvantage.

  • “The reason… is clear: Competition between AI companies is intense and those companies perceive safety testing as an impediment to speeding new models to market,” Kahn wrote.

What else the newsletter covers: The Future of Life Institute newsletter also highlights several other initiatives, including a “Worldbuilding Hopeful Futures with AI” course, a Digital Media Accelerator program accepting applications, and various new AI publications.

Source: Future of Life Institute Newsletter: Where are the safety teams?
