AI-powered code generation is now embedded in mainstream software development, with tools like GitHub Copilot generating nearly half of developers’ code. However, IOActive’s April 2026 whitepaper, *The Security Gap in AI-Generated Code*, reveals a critical and systemic security shortfall: AI models frequently generate insecure code by default.
IOActive evaluated 27 leading AI models and AI-powered coding tools using 730 real-world programming prompts across 27 languages and 219 vulnerability categories. Prompts intentionally avoided mentioning security to reflect typical developer usage. Security outcomes were measured using 72 automated vulnerability detectors, producing nearly 20,000 analyzed code samples.
Key Findings:
• Average security performance across all models was just 59%.
• Nearly one-third (31.6%) of AI-generated code samples were fully exploitable.
• No model achieved 100% secure output; even the best configuration produced 90 vulnerabilities.
• Infrastructure and DevOps code (Dockerfiles, Terraform, CI/CD pipelines) was the most dangerous, with vulnerability rates ranging from 70% to 97%.
• Authentication, rate limiting, and cryptography consistently failed across nearly all models.
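The cryptography failures above typically involve idioms that look plausible but are broken for their purpose. As a minimal illustrative sketch (the function names and the specific pattern are this article's assumptions, not examples taken from the whitepaper), compare a fast unsalted hash, a common insecure default in generated code, with a salted key-derivation function from Python's standard library:

```python
import hashlib
import os

# Insecure pattern often seen in generated code: a fast, unsalted
# hash (MD5) used for password storage. MD5 is unsuitable here.
def hash_password_insecure(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()

# Safer stdlib alternative: a random per-password salt plus a slow,
# iterated key-derivation function (PBKDF2-HMAC-SHA256).
def hash_password_safer(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest
```

Both functions compile and run, which is exactly why insecure output slips through: the difference is invisible to a functional test and only surfaces under security review.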
Advanced “wrapper” tools and security-aware system prompts significantly improved results, boosting security by up to 25 percentage points. AI “temperature” settings also impacted security, with some models producing safer code at higher temperature settings. In contrast, simple prompts like “write secure code” were often ineffective or counterproductive.
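A security-aware system prompt differs from a one-line nudge in that it enumerates concrete requirements the model must satisfy. The sketch below is a hypothetical illustration of that wrapping step; `SECURITY_SYSTEM_PROMPT`, `generate`, and the injected `call_model` callable are assumptions of this article, not the whitepaper's tooling or any specific vendor API:

```python
# Hypothetical security-aware wrapper: prepend explicit, checkable
# requirements as a system message before the developer's prompt.
SECURITY_SYSTEM_PROMPT = (
    "You are a senior engineer. In all code you produce: parameterize "
    "database queries, validate and bound all inputs, use vetted "
    "cryptographic libraries, pin dependency versions, and never "
    "hard-code secrets or credentials."
)

def generate(prompt: str, call_model) -> str:
    # call_model is any callable taking a chat-style message list
    # and returning the model's text response.
    messages = [
        {"role": "system", "content": SECURITY_SYSTEM_PROMPT},
        {"role": "user", "content": prompt},
    ]
    return call_model(messages)
```

The design point matches the finding: specific, enumerable constraints in the system message moved the needle, while appending a vague "write secure code" to the user prompt often did not.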
Memory-safe languages such as Rust and Go generated fewer vulnerabilities than Python or JavaScript, but still failed in cryptography and business logic. Dockerfiles were the single worst-performing language, with almost universal failure.
The primary takeaway from IOActive’s research is that AI-generated code is not secure by default. Organizations using AI for software development must therefore treat AI output as untrusted input—especially for infrastructure, authentication, and cryptography—and enforce mandatory security review before deployment. While AI can accelerate development, without proper controls it currently introduces substantial, measurable security risk.
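Treating AI output as untrusted input can start with an automated pre-merge gate that flags known-risky idioms for human review. The sketch below is a minimal, assumption-laden example (the `RISKY_PATTERNS` list and `flag_risky_lines` helper are illustrative; a real gate would run full static-analysis tooling, not a handful of regexes):

```python
import re

# Illustrative deny-list of risky idioms in generated Python code.
# Deliberately tiny; real gates use dedicated SAST scanners.
RISKY_PATTERNS = {
    "weak hash": re.compile(r"\b(md5|sha1)\s*\("),
    "shell injection": re.compile(r"shell\s*=\s*True"),
    "hard-coded secret": re.compile(r"(?i)(api_key|password)\s*=\s*[\"']"),
}

def flag_risky_lines(source: str) -> list[tuple[int, str]]:
    """Return (line number, issue label) pairs for review before merge."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for label, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, label))
    return findings
```

Anything flagged blocks the merge until a reviewer signs off, which operationalizes "untrusted input": generated code earns trust through review, not by compiling.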
