AI coding tools create security gaps in 45% of generated code

Forrester researcher Janet Worthington experimented with “vibe coding”: using AI tools like Cursor to generate applications from natural language prompts without examining the underlying code. While she built a working weather application in minutes, her security review revealed significant vulnerabilities, including unsanitized inputs, missing rate limiting, and exposed API keys, none of which were addressed until she explicitly prompted for fixes.
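
The Forrester post doesn’t include the generated code, but the three issue classes she flagged are easy to illustrate. The Flask sketch below is a hypothetical reconstruction, not Worthington’s app: the /weather endpoint, the WEATHER_API_KEY environment variable, and the hand-rolled in-memory limiter are all assumptions, chosen to show one way each gap gets closed.

```python
# Hypothetical sketch of the three issue classes from the review,
# not the actual generated code from the article.
import os
import re
import time
from collections import defaultdict

import requests
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Exposed API key: read from the environment instead of hardcoding a
# literal key into the source. The variable name is an assumption.
API_KEY = os.environ["WEATHER_API_KEY"]

# Missing rate limiting: a crude per-IP sliding window (30 req/min).
_hits: dict[str, list[float]] = defaultdict(list)

def rate_limited(ip: str, limit: int = 30, window: float = 60.0) -> bool:
    now = time.monotonic()
    _hits[ip] = [t for t in _hits[ip] if now - t < window]
    _hits[ip].append(now)
    return len(_hits[ip]) > limit

# Unsanitized input: validate the city name instead of forwarding raw
# user input to the upstream API.
CITY_RE = re.compile(r"^[A-Za-z .'-]{1,80}$")

@app.get("/weather")
def weather():
    if rate_limited(request.remote_addr or "unknown"):
        abort(429)
    city = request.args.get("city", "")
    if not CITY_RE.fullmatch(city):
        abort(400, description="invalid city name")
    resp = requests.get(
        "https://api.openweathermap.org/data/2.5/weather",
        params={"q": city, "appid": API_KEY},  # params= URL-escapes values
        timeout=5,
    )
    return jsonify(resp.json()), resp.status_code
```

In a production service the hand-rolled limiter would give way to Flask-Limiter or an API gateway; the point is that none of these guards tend to appear unless the prompt, or a reviewer, asks for them.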

The security gap: AI coding tools don’t prioritize security by default, creating a dangerous blind spot for developers who assume generated code is production-ready.

  • Studies show 45% of AI-generated coding tasks contain security weaknesses, while open-source language models suggest non-existent packages over 20% of the time.
  • Commercial models perform better but still recommend fake packages 5% of the time, a gap attackers exploit by registering malicious packages under the hallucinated names (a screening sketch follows this list).
  • When prompted, Cursor identified and fixed the security issues, but only retroactively; it did not apply secure coding practices from the start.
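
That failure mode, sometimes called slopsquatting, can be screened for mechanically. The sketch below is an illustrative guard, not a tool from the article: it checks each AI-suggested dependency against PyPI’s JSON API (which returns 404 for projects that don’t exist) before anything is installed. The suggestion list and helper name are hypothetical.

```python
# Sketch: verify AI-suggested dependencies actually exist on PyPI before
# installing them, as a guard against hallucinated package names.
import sys

import requests

def exists_on_pypi(package: str) -> bool:
    """Return True if `package` is a registered project on PyPI."""
    # PyPI's JSON API returns HTTP 404 for projects that don't exist.
    resp = requests.get(f"https://pypi.org/pypi/{package}/json", timeout=5)
    return resp.status_code == 200

# Illustrative list of dependencies an AI assistant might suggest.
suggested = ["requests", "flask", "totally-real-weather-sdk"]

unknown = [pkg for pkg in suggested if not exists_on_pypi(pkg)]
if unknown:
    print(f"refusing to install unverified packages: {unknown}")
    sys.exit(1)
```

Note the check only proves the name resolves on the index; an attacker who has already registered a hallucinated name will pass it, so version pinning and review of unfamiliar dependencies still apply.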

What’s happening in practice: Major tech companies are already integrating AI-generated code at scale, with Microsoft and Google reporting that over 25% of their code is now written by AI.

  • “Vibe coding” tools like Cursor, Cognition’s Windsurf, and Claude Code are becoming entrenched in professional software development workflows.
  • The rapid adoption is accelerating code deployment cycles but potentially increasing the volume of vulnerable code entering production systems.

The bigger picture: Worthington predicts a fundamental shift in software development within the next three to five years, where traditional development lifecycles will collapse and developers will evolve from programmers to “agent orchestrators.”

  • AI-native application generation platforms will integrate ideation, design, coding, testing, and deployment into single generative processes.
  • Low-code platforms will converge with vibe coding tools, enabling both technical and non-technical users to build applications rapidly.
  • AI security agents will emerge as essential tools to prevent “a tsunami of insecure, poor quality, and unmaintainable code.”

What they’re saying: Worthington echoes AI researcher Andrej Karpathy’s February assessment that vibe coding “is not too bad for throwaway weekend projects.”

  • “Sometimes the LLMs can’t fix a bug so I just work around it or ask for random changes until it goes away,” Karpathy noted in his original post about the approach.
  • Worthington emphasizes that “DevSecOps best practices must be adopted for all code regardless of how it is developed—with AI or without AI, by full time developers, a 3rd party, or downloaded from open source projects.”

Why this matters: As organizations rush to adopt AI-powered development tools for competitive advantage, the security implications of AI-generated code remain largely unaddressed, potentially creating widespread vulnerabilities across enterprise applications.

