April 13, 2026 Security 14 min read

AI-Generated Code Security Vulnerabilities: The Hidden Risks in Every Vibe-Coded App

45% of AI-generated code introduces at least one OWASP Top 10 vulnerability — and that number has not improved since 2025 despite model updates. The most dangerous flaws are not syntax errors that compilers catch. They are logical gaps: missing authorization checks, hardcoded credentials, unvalidated inputs, and server-side request forgeries that appear in perfectly working, syntactically correct code. This article maps the 8 most critical AI-generated code vulnerability classes to specific coding patterns, explains exactly why AI produces them, and shows how to detect them before they reach production.

Why AI Code Has a Specific Vulnerability Fingerprint

Human developers make security mistakes too. But AI-generated code vulnerabilities follow predictable, repeatable patterns that differ from human mistakes in important ways. Understanding why they occur is the first step to finding them systematically.

Large language models generate code by predicting the most statistically likely next token based on training data. They do not reason about threat models. They do not know whether the data they are fetching belongs to the requesting user. They do not know that a URL passed in a query parameter should not be fetched server-side without validation. They know what code that performs a given task looks like, and they produce it.

Why AI skips security checks: When a prompt says "build an endpoint that returns the user's orders by ID," the model generates code that fetches orders by ID. The prompt never mentioned authorization, so the model never adds it. It is not a bug in the model — it is a gap between what was asked and what secure code actually requires.

Training data compounds the problem. Public GitHub repositories — the primary corpus for most code models — contain enormous amounts of insecure code: tutorials that skip auth, Stack Overflow answers that demonstrate functionality without security hardening, legacy codebases with hardcoded credentials. The model learned from all of it. The common vulnerability types map directly to the CWE catalog: CWE-862 (missing authorization), CWE-798 (hardcoded credentials), CWE-89 (SQL injection), CWE-79 (cross-site scripting), and CWE-200 (exposure of sensitive information).

Research consensus (2025–2026): AI-generated code has a 2.7x higher vulnerability density compared to human-written code. CVSS 7.0+ vulnerabilities (high severity) appear 2.5x more often in AI-generated repositories than in equivalent human-authored code. Source: multiple studies aggregated by CSA Lab, 2026.

The scale of real-world impact is growing. 25% of startups in Y Combinator's Winter 2025 cohort reported codebases that were 95% AI-generated. Escape.tech found 58% of scanned AI applications had at least one critical vulnerability. And the Moltbook incident proved these risks are not theoretical: on January 28, 2026, a social network built entirely with AI coding tools ("didn't write a single line of code") launched — and within three days, security researchers at Wiz discovered it had exposed its entire production database, including 1.5 million API authentication tokens, 35,000 email addresses, and private messages.

The result is a specific vulnerability fingerprint. Syntax is clean. Logic runs. Tests pass. But authorization is missing, inputs are not sanitized, secrets are inline, and external requests are unvalidated. These vulnerabilities are exactly the ones that automated scanners struggle with and that attackers specifically hunt for.

The 8 Vulnerability Classes Most Common in AI-Generated Code

1. IDOR — Insecure Direct Object Reference (OWASP A01: Broken Access Control)

IDOR definition: A vulnerability where an application exposes a reference to an internal object (database ID, filename, account number) and does not verify that the requesting user is authorized to access that specific object.

IDOR is the most dangerous and most common AI-generated vulnerability class because it is a logical omission, not a syntactic error. AI models generate the data-fetching code correctly; they simply omit the ownership check that a security-aware developer would add instinctively.

A typical AI-generated endpoint looks like this:

# AI-generated: fetches order by ID — no authorization check
@app.get("/orders/{order_id}")
def get_order(order_id: int, current_user: User = Depends(get_current_user)):
    order = db.query(Order).filter(Order.id == order_id).first()
    if not order:
        raise HTTPException(status_code=404)
    return order  # returns ANY user's order to ANY authenticated user

The fix is one line — but the model never generates it unless explicitly asked:

# Correct: verify ownership before returning
    if order.user_id != current_user.id:
        raise HTTPException(status_code=403, detail="Forbidden")

Why AI generates it: The prompt asked for "get order by ID." The model fulfilled the prompt. Authorization is a security context requirement that must be explicitly stated in the prompt or caught in review. Semgrep's testing found Claude Code achieved only a 22% true positive rate detecting IDOR — meaning 78% of IDOR bugs in AI-generated code slip through automated scanning.

2. SQL Injection via String Concatenation (OWASP A03: Injection)

SQL injection via string concatenation is the textbook AI vulnerability. Models trained on older tutorials, legacy codebases, and quick Stack Overflow answers frequently generate queries built from f-strings or concatenation instead of parameterized queries.

# AI-generated: vulnerable to SQL injection
def search_users(username: str):
    query = f"SELECT * FROM users WHERE username = '{username}'"
    return db.execute(query)

# An attacker passes: ' OR '1'='1
# Query becomes: SELECT * FROM users WHERE username = '' OR '1'='1'
# Result: returns all users in the database
# Correct: parameterized query
def search_users(username: str):
    query = "SELECT * FROM users WHERE username = ?"
    return db.execute(query, (username,))

Despite being the most well-known vulnerability class in existence, Veracode found only an 80% security pass rate for SQLi across AI-generated code — meaning 1 in 5 AI-generated database interactions is potentially injectable. For Java specifically, the failure rate exceeded 70%. Log injection fared even worse — 88% of AI-generated samples failed to defend against it.
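The behavioral difference is easy to demonstrate with Python's built-in sqlite3 module. This is a self-contained sketch, not code from any audited application:

```python
import sqlite3

# Two-row demo table: the injection should leak both rows
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "a-secret"), ("bob", "b-secret")])

payload = "' OR '1'='1"

# Vulnerable: the attacker's quote closes the string literal early
vulnerable = conn.execute(
    f"SELECT * FROM users WHERE username = '{payload}'"
).fetchall()
print(len(vulnerable))  # 2 rows: the whole table leaks

# Parameterized: the payload is matched as a literal username
safe = conn.execute(
    "SELECT * FROM users WHERE username = ?", (payload,)
).fetchall()
print(len(safe))  # 0 rows
```

The parameterized version never interpolates the payload into SQL text, so there is nothing for the attacker's quote to escape.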

3. Hardcoded Secrets and Credentials (OWASP A02: Cryptographic Failures)

Hardcoded API keys, database passwords, JWT signing secrets, and service credentials are endemic in AI-generated code. Models generate working examples that include credentials inline, and developers ship them without noticing.

// AI-generated JavaScript: credentials hardcoded directly in source
const supabaseClient = createClient(
  "https://xyzcompany.supabase.co",
  "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoic2VydmljZV9yb2xl..."  // service role key — full admin access
);

# AI-generated Python
OPENAI_API_KEY = "sk-proj-abc123..."  # hardcoded in source file

Scale of the problem: GitGuardian's State of Secrets Sprawl 2026 report documented 28.65 million new hardcoded secrets in public GitHub commits during 2025 — a 34% year-over-year increase, the largest single-year jump ever recorded. AI-assisted commits showed a 3.2% secret leak rate vs. a 1.5% baseline. 82% of exposed secrets remain active even after detection.

Wiz Research found vibe-coded applications on platforms like Lovable commonly exposed OpenAI API keys and Supabase service role keys hardcoded in client-side JavaScript — readable by any browser visitor, not just authenticated users. A single misconfigured Supabase instance exposed 1.5 million API keys and 35,000 user email addresses.
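The standard remediation is to load credentials from the environment and refuse to start without them. A minimal sketch (the variable name in the usage comment is illustrative, not taken from any audited codebase):

```python
import os

def require_env(name: str) -> str:
    """Read a credential from the environment; fail fast if it is absent."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value

# At startup, instead of a hardcoded literal:
# openai_key = require_env("OPENAI_API_KEY")
```

Failing fast matters: a missing variable should crash at boot, not fall back silently to a bundled default key.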

4. Missing Authentication and Authorization Checks (OWASP A01: Broken Access Control)

Distinct from IDOR, this category covers entire endpoints that AI generates without any authentication gate. The model generates the handler, connects it to the router, and assumes authentication is handled elsewhere — without verifying that assumption.

# AI-generated admin endpoint — no auth middleware attached
@router.delete("/admin/users/{user_id}")
async def delete_user(user_id: int):
    db.query(User).filter(User.id == user_id).delete()
    db.commit()
    return {"deleted": user_id}
# Any anonymous HTTP request to /admin/users/1 deletes user 1

Research from the Cloud Security Alliance found privilege escalation paths increased by 322% and architectural design vulnerabilities by 153% in repositories with significant AI contributions. The model generates the functionality; the authorization layer is a separate concern that needs explicit attention.
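In a real FastAPI app the gate would be a dependency attached to the router; as a framework-agnostic sketch, the missing check reduces to an explicit role guard (all names here are illustrative):

```python
from functools import wraps

class Forbidden(Exception):
    """Maps to an HTTP 403 response in a real handler."""

def require_role(role: str):
    """Refuse to run the wrapped handler unless the caller has `role`."""
    def decorator(handler):
        @wraps(handler)
        def wrapper(current_user, *args, **kwargs):
            if role not in current_user.get("roles", ()):
                raise Forbidden(f"requires role: {role}")
            return handler(current_user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("admin")
def delete_user(current_user, user_id: int):
    # destructive operation, now unreachable without the admin role
    return {"deleted": user_id}
```

The point of the sketch: the guard is attached to the handler itself, so forgetting to wire up middleware cannot silently expose the route.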

5. Server-Side Request Forgery — SSRF (OWASP A10: Server-Side Request Forgery)

SSRF definition: A vulnerability where an attacker can cause the server to make HTTP requests to an arbitrary destination — including internal services, cloud metadata endpoints (AWS 169.254.169.254), and internal databases — by controlling a URL parameter.

SSRF is documented as the single most frequent confirmed finding in AI-generated backend code by some researchers. AI models generate "fetch this URL" functionality without any validation of where that URL points.

# AI-generated: fetches user-supplied URL server-side — SSRF
@app.post("/preview")
async def fetch_preview(url: str):
    response = requests.get(url)  # attacker sends: http://169.254.169.254/latest/meta-data/
    return {"content": response.text}
# On AWS: exposes IAM credentials, instance metadata, internal endpoints
# Correct: validate destination before fetching
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}
BLOCKED_HOSTS = {"169.254.169.254", "localhost", "127.0.0.1", "0.0.0.0"}

def validate_url(url: str) -> bool:
    parsed = urlparse(url)
    return parsed.scheme in ALLOWED_SCHEMES and parsed.hostname not in BLOCKED_HOSTS

Why AI generates it: URL-fetching functionality is straightforward. The prompt says "fetch the URL and return the content." Validating that the URL doesn't point at the server's own internal network is a threat-modeling concern that never appears in standard code examples.
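A hostname blocklist alone is easy to bypass (alternate IP encodings, DNS names that resolve to internal addresses). A hedged extension of the check above resolves the hostname and rejects anything private, loopback, link-local, or reserved; a production version should additionally pin the resolved IP for the actual request to defeat DNS rebinding:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_destination(url: str) -> bool:
    """Reject URLs whose host resolves to a private, loopback,
    link-local, or reserved address (sketch; not rebinding-safe)."""
    parsed = urlparse(url)
    if parsed.scheme not in {"http", "https"} or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False  # unresolvable hosts are rejected, not fetched
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```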

6. Path Traversal (OWASP A01: Broken Access Control)

AI-generated file handling code routinely constructs file paths from user input without sanitizing directory traversal sequences. An attacker passes ../../etc/passwd where a filename is expected and reads arbitrary files on the server.

# AI-generated: vulnerable file download endpoint
@app.get("/files/{filename}")
def download_file(filename: str):
    file_path = f"/app/uploads/{filename}"
    return FileResponse(file_path)
# Attacker requests: /files/../../etc/passwd
# Resolves to: /etc/passwd — full server file read
# Correct: resolve and validate path stays within allowed directory
import os

UPLOAD_DIR = "/app/uploads"

def safe_path(filename: str) -> str:
    safe = os.path.realpath(os.path.join(UPLOAD_DIR, filename))
    if not safe.startswith(UPLOAD_DIR + os.sep):
        raise ValueError("Path traversal detected")
    return safe
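The check can be exercised directly. Here is a parameterized sketch of the same logic, with the base directory as an argument so it is testable outside /app:

```python
import os

def safe_path(base_dir: str, filename: str) -> str:
    """Resolve the requested file and verify it stays inside base_dir."""
    base = os.path.realpath(base_dir)
    resolved = os.path.realpath(os.path.join(base, filename))
    if resolved != base and not resolved.startswith(base + os.sep):
        raise ValueError("Path traversal detected")
    return resolved

# safe_path(base, "report.pdf") resolves inside the base directory;
# safe_path(base, "../../etc/passwd") raises ValueError
```

Note that the comparison happens after os.path.realpath, so symlinks and ../ sequences are resolved before the containment check, not after.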

Two confirmed CVEs directly linked to AI-generated code in early 2026 were path traversal vulnerabilities: CVE-2025-55526 (CVSS 9.1, directory traversal in n8n-workflows) and a path restriction bypass in Anthropic's own Filesystem MCP Server.

7. Cross-Site Scripting — XSS (OWASP A03: Injection)

AI-generated frontend code and template rendering frequently reflects user input without escaping. The model generates the display logic; HTML encoding is an extra step that only appears if security is explicitly requested.

// AI-generated React component: dangerouslySetInnerHTML without sanitization
function UserProfile({ bio }) {
  return (
    <div dangerouslySetInnerHTML={{ __html: bio }} />
    // If bio = <script>document.cookie</script> — XSS executed
  );
}

{# AI-generated Jinja2 template: disables autoescaping #}
{{ user_comment | safe }}  {# marks content as safe without sanitizing it #}

XSS in AI code: AI-generated code is 2.74x more likely to introduce XSS vulnerabilities than human-written code. The CSA Lab analysis found an 86% failure rate for CWE-80 (cross-site scripting) in AI-generated web code samples.
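Server-side, the remediation is escaping at output time. A minimal sketch with Python's stdlib html.escape (Jinja2's autoescaping and React's default JSX interpolation do the equivalent automatically when they are not disabled):

```python
from html import escape

def render_bio(bio: str) -> str:
    """Escape user-controlled content before interpolating it into HTML."""
    return f"<div>{escape(bio)}</div>"

# The script payload becomes inert text:
# render_bio("<script>document.cookie</script>")
# -> "<div>&lt;script&gt;document.cookie&lt;/script&gt;</div>"
```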

8. Slopsquatting — Hallucinated Dependencies (OWASP A06: Vulnerable and Outdated Components)

Slopsquatting: A supply chain attack where attackers register package names that AI models predictably hallucinate. When a developer runs npm install or pip install on AI-generated code, they install the attacker's malicious package instead of a legitimate one.

This is a uniquely AI-era vulnerability class. Models occasionally generate import statements or package.json entries for packages that do not exist. Approximately 20% of AI-generated code references non-existent packages, and 43% of hallucinated names are consistently reproduced across similar prompts — making them predictable enough to pre-register as malicious packages.

# AI-generated requirements.txt — "colorize-logs" does not exist on PyPI
colorize-logs==1.2.0
requests==2.31.0
fastapi==0.104.0

# An attacker registers "colorize-logs" on PyPI with malicious code
# pip install -r requirements.txt silently installs the attack payload

One documented case: a single malicious slopsquatted package accumulated over 30,000 downloads in three months before detection. Existing SCA scanners do not catch hallucinated packages — they check known-vulnerable packages, not non-existent ones.

Bonus: Prompt Injection in AI Coding Tools Themselves

The attack surface extends beyond the code AI generates — to the AI tools themselves. In 2026, CVE-2025-53773 (CVSS 9.6) revealed that hidden prompt injection in pull request descriptions enabled remote code execution with GitHub Copilot. An attacker could embed instructions in a PR description that Copilot would execute when reviewing the code. Similarly, Checkmarx documented how AI coding assistants can be manipulated through adversarial inputs in repository files, CLAUDE.md configurations, and package descriptions — turning the review tool itself into an attack vector.

We audit AI-generated code for OWASP Top 10 vulnerabilities

Real engineers review your vibe-coded app before it ships. IDOR, SQLi, hardcoded secrets, missing auth — we find what automated scanners miss.

Install Vibers App — Free

Vulnerability Reference Table

| Vulnerability Class | OWASP Ref | AI Frequency | SAST Detection | Human Review Needed |
|---|---|---|---|---|
| IDOR / Missing ownership check | A01: Broken Access Control | Very High | Low (22% TPR) | Yes — logical |
| SQL Injection (string concat) | A03: Injection | High (20% failure) | High | Sometimes |
| Hardcoded Secrets | A02: Cryptographic Failures | Very High | High (secret scanning) | For context/scope |
| Missing Auth on Endpoints | A01: Broken Access Control | High | Low | Yes — architectural |
| SSRF | A10: SSRF | High | Moderate | Yes — context-dependent |
| Path Traversal | A01: Broken Access Control | Moderate–High | Moderate (47% TPR) | Yes — confirm fix |
| XSS | A03: Injection | High (2.74x more) | Moderate | For template logic |
| Slopsquatting | A06: Vulnerable Components | ~20% of codebases | None (new category) | Yes — verify all deps |

Detection Tools and Their Limits

The instinct after reading the above list is to reach for a scanner. SAST (Static Application Security Testing) and SCA (Software Composition Analysis) tools are necessary — but they are not sufficient for AI-generated code.

What SAST tools catch well

Pattern-based, syntactic vulnerabilities: SQL injection built by string concatenation, hardcoded secrets (via dedicated secret scanning), use of deprecated or weak cryptographic algorithms, and calls to known-dangerous APIs. These have recognizable signatures that static analysis matches reliably.

What SAST tools miss in AI-generated code

Logical and contextual vulnerabilities: missing authorization checks and IDOR (a scanner cannot know which user should own which resource), missing authentication on entire endpoints, context-dependent SSRF, business logic flaws, and hallucinated dependencies, a category SCA tools were never designed to check.

"Running the exact same SAST prompt on the exact same codebase multiple times often yielded vastly different results — in one application, three identical runs produced 3, 6, and then 11 distinct findings." — Semgrep research on AI-assisted vulnerability detection, 2025

The non-determinism finding matters. SAST tools produce consistent output. AI-powered code review produces inconsistent output. For security-critical analysis, inconsistency is a disqualifying property. A vulnerability that was not reported on the third run is still a vulnerability.

Detection gap: Only 9% of organizations consider AI-driven AppSec analysis essential, yet 85% use AI coding assistants. 38% use AI to support code review in pull requests — leaving 62% of organizations merging AI-generated code without any automated security feedback at the merge point. Source: Kusari / CSA, 2026.

What Effective Security Review Looks Like for AI-Generated Code

Given the detection gaps above, security review for AI-generated code needs to be structured differently from a standard code review. The following approach reflects what experienced security engineers apply when auditing vibe-coded applications.

Step 1: Map all trust boundaries first

Before reading any code, draw the data flow: where does user-controlled input enter the application? What can it affect? Where is it reflected back to other users or used in system operations? AI-generated apps frequently lack this mental model entirely — the code was generated feature by feature without an overall security design.

Step 2: Audit every route for authorization

For every HTTP endpoint, ask: what happens if an unauthenticated request hits this route? What happens if an authenticated user hits it with a different user's resource ID? This is the IDOR and broken access control audit — it cannot be automated away.

Step 3: Grep for dangerous patterns

Specific patterns that AI generates frequently and that deserve manual review:

# Find string-concatenated queries
grep -rn "f\"SELECT\|f'SELECT\|\"SELECT.*+\|'SELECT.*+" src/

# Find hardcoded secrets (supplement with dedicated secret scanners)
grep -rn "api_key\s*=\s*[\"']\|password\s*=\s*[\"']\|secret\s*=\s*[\"']" src/

# Find unvalidated URL fetches
grep -rn "requests.get(\|fetch(\|urllib.request" src/

# Find dangerouslySetInnerHTML in React
grep -rn "dangerouslySetInnerHTML" src/

# Find file path construction from user input
grep -rn "os.path.join.*request\|open.*request\|FileResponse.*request" src/

Step 4: Verify every dependency

Run npm audit, pip-audit, or your SCA tool of choice. Beyond known-vulnerable packages, verify that every dependency in package.json or requirements.txt actually exists on the registry. For slopsquatting defense, cross-reference against the registry directly before installation.
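Existence checking can be automated. An offline sketch that flags declared packages missing from a known-good set (in practice the set would come from a lockfile or a live registry query, for example PyPI's JSON API at pypi.org/pypi/<name>/json):

```python
import re

def declared_packages(requirements_text: str) -> list[str]:
    """Extract bare package names from a requirements.txt body."""
    names = []
    for line in requirements_text.splitlines():
        line = line.split("#")[0].strip()  # drop comments
        if line:
            names.append(re.split(r"[<>=!~\[;]", line)[0].strip().lower())
    return names

def unknown_packages(requirements_text: str, registry_names: set[str]) -> list[str]:
    """Names absent from the registry snapshot: slopsquatting candidates."""
    return [n for n in declared_packages(requirements_text)
            if n not in registry_names]
```

Run this before, not after, pip install: the whole point of slopsquatting is that the malicious package is perfectly installable.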

Step 5: Test authorization boundaries

Create two test accounts. Log in as user A, create resources, capture their IDs. Log in as user B. Attempt to access, modify, and delete user A's resources using their IDs directly. Every endpoint that returns data should fail this test cleanly with a 403. IDOR vulnerabilities are invisible in code review and trivial to find through this test.
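The two-account test can live in the automated suite as a property. A minimal sketch with a fake in-memory API standing in for the real endpoints (statuses are HTTP codes; in a real suite, `fetch` would be whatever client calls your app):

```python
def check_idor(fetch, owner_token, other_token, resource_id) -> bool:
    """True if a non-owner can reach the owner's resource (an IDOR)."""
    assert fetch(owner_token, resource_id) == 200, "owner must have access"
    return fetch(other_token, resource_id) != 403

# Fake in-memory API for illustration: order id -> owner's token
ORDERS = {42: "token-a"}

def insecure_fetch(token, order_id):
    return 200 if order_id in ORDERS else 404  # no ownership check

def secure_fetch(token, order_id):
    if order_id not in ORDERS:
        return 404
    return 200 if ORDERS[order_id] == token else 403
```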

The Case for Human Review in the AI Coding Era

The data presents a clear picture: AI dramatically accelerates code production, and it equally accelerates the introduction of logical security vulnerabilities that automated tools cannot reliably catch. The SusVibes benchmark result — 61% functionally correct, 10.5% both correct and secure — is not an indictment of the tools. It is a description of what they optimize for.

Automated scanners are necessary and should be in every pipeline. They catch the syntactic and pattern-based vulnerabilities efficiently. But the most dangerous AI-generated vulnerabilities — missing authorization checks, IDOR, SSRF, business logic flaws — require a human who understands the threat model, the data ownership model, and what the application is supposed to prevent.

The consequence of skipping human review is not hypothetical. Georgia Tech confirmed 74 CVEs directly from AI-generated code through May 2026. Wiz found 2,038 critical vulnerabilities across 1,400 production vibe-coded apps. Escape.tech found 58% of scanned AI apps had at least one critical vulnerability. These are production systems with real users.

For more context on the overall security risk landscape for vibe-coded projects, see our article on vibe coding security risks. For a structured approach to pre-launch review, see how to review a vibe-coded app before launch. For a comparison of automated vs. human review tools, see CodeRabbit alternative: human review.

We audit AI-generated code for OWASP Top 10 vulnerabilities

Real engineers review your vibe-coded app before it ships. We check IDOR, SQLi, hardcoded secrets, missing auth, SSRF, path traversal — the exact vulnerability classes that AI produces and scanners miss. One install, one review request via PR.

Get Your Code Reviewed

Frequently Asked Questions

What percentage of AI-generated code contains security vulnerabilities?
Multiple independent studies converge around 45–62%. Veracode tested 100+ LLMs and found 45% of AI-generated code samples introduced OWASP Top 10 vulnerabilities. Endor Labs and the Cloud Security Alliance found 62% of AI-generated code contained design flaws or known security vulnerabilities. The SusVibes benchmark (arXiv:2512.03262) found that while 61% of solutions were functionally correct, only 10.5% were both correct and secure.
Why does AI-generated code have more security vulnerabilities than human-written code?
AI models are trained on massive codebases that include insecure patterns, and they optimize for code that runs and looks correct — not code that is resilient under adversarial conditions. They lack understanding of trust boundaries, threat models, and business context. When a prompt does not mention security requirements, the model defaults to the most common pattern in its training data, which often skips authorization checks, uses string concatenation for queries, and hardcodes credentials for simplicity.
What is the most common vulnerability in vibe-coded apps?
Based on 2025–2026 research, Broken Access Control (OWASP A01) — including IDOR — is the most frequently cited category. SSRF is documented as the single most frequent confirmed finding in AI-generated code by some researchers. Hardcoded secrets and credentials are a close second, with GitGuardian documenting a 34% year-over-year increase strongly correlated with AI-generated commits.
Can SAST tools catch AI-generated code vulnerabilities?
Partially. SAST tools are effective at catching well-known patterns like SQL injection via string concatenation, hardcoded secrets, and use of deprecated cryptographic algorithms. However, they struggle with logical vulnerabilities like missing authorization checks (IDOR), business logic flaws, and context-dependent issues like SSRF. Semgrep found Claude Code's false positive rate for SQL injection detection reached 95% — meaning SAST still needs human judgment to triage results effectively.
How do I detect IDOR vulnerabilities in AI-generated code?
IDOR vulnerabilities are best caught through manual code review and penetration testing, not automated tools. Review every route or controller that accepts an ID parameter and verify there is an authorization check confirming the requesting user owns that resource. Then test it: create two accounts, capture resource IDs from one, attempt to access them from the other. AI models routinely generate the data-fetching logic but omit the ownership check because the prompt only asked to "get the record by ID."
What are slopsquatting attacks in AI-generated code?
Slopsquatting is a supply chain attack where malicious actors register package names that AI models hallucinate. Around 20% of AI-generated code references packages that do not exist on npm or PyPI. Attackers register these hallucinated names as real but malicious packages. Developers running npm install or pip install on AI-generated requirements files then unknowingly install the attacker's code. Approximately 43% of hallucinated package names are consistently reproduced across similar prompts, making them predictable targets.

Vibers Security Team

Vibers provides human-in-the-loop code review for AI-generated projects. Our reviewers audit vibe-coded apps against OWASP Top 10, check authorization logic, verify dependency integrity, and find the logical vulnerabilities that automated scanners miss. Learn more about Vibers.
