What’s new in OWASP’s 2025 GenAI/LLM Top 10 and why it matters now
- Ananta Garnapudi
The 2025 edition of OWASP’s GenAI/LLM Top 10 shifts the focus from “prompt tricks” to the day-to-day realities of how teams actually ship GenAI: retrieval (RAG) pipelines, agent tooling, and usage patterns that can spike costs or leak internals. Three additions stand out: System Prompt Leakage (LLM07), Vector & Embedding Weaknesses (LLM08), and Unbounded Consumption (LLM10), each reflecting incidents builders are seeing in production. And Misinformation (LLM09) is now called out explicitly as its own security concern.
What’s new in 2025 (vs. 2023/24)
Added:
LLM07 System Prompt Leakage recognizes that prompts often contain secrets or governance rules and can be extracted or inferred; OWASP stresses that prompts shouldn’t hold credentials or authorization logic.
LLM08 Vector & Embedding Weaknesses covers RAG-specific issues like embedding inversion, poisoned corpora, and multi-tenant vector store leakage.
LLM09 Misinformation treats hallucination/false authority as a security exposure with operational and legal consequences.
LLM10 Unbounded Consumption expands “DoS” into a broader class including denial-of-wallet and model extraction via excessive queries.
Renamed/reshaped:
Training Data Poisoning is now Data & Model Poisoning (LLM04): broader than just training data, it also covers fine-tuning and embedding pipelines.
Insecure Output Handling is now Improper Output Handling (LLM05): same idea, treat LLM output as untrusted or you invite XSS, SSRF, or RCE (a sketch follows this list).
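To ground LLM05, here is a minimal sketch of the “treat output as untrusted” rule in Python; `render_safe` and `is_fetchable` are illustrative names, and a real deployment would use your framework’s encoder plus a full SSRF allow-list rather than this toy check:

```python
import html
import ipaddress
from urllib.parse import urlparse

def render_safe(llm_output: str) -> str:
    """Escape model output before interpolating it into HTML (anti-XSS)."""
    return html.escape(llm_output)

def is_fetchable(url: str) -> bool:
    """Refuse model-supplied URLs that could reach internal services (anti-SSRF)."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or parsed.hostname is None:
        return False
    try:
        # Block literal private/loopback/link-local IPs; a production check
        # should also resolve hostnames and validate the resolved addresses.
        if ipaddress.ip_address(parsed.hostname).is_private:
            return False
    except ValueError:
        pass  # hostname is a name, not a literal IP
    return True

print(render_safe('<img src=x onerror=alert(1)>'))  # rendered inert
print(is_fetchable("http://169.254.169.254/meta"))  # False: cloud metadata endpoint
```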
Retired or absorbed:
Insecure Plugin Design, Overreliance, and Model Theft from 2023/24 no longer appear as standalone items in 2025; “overreliance” is addressed under Misinformation, while model theft and DoS threats are folded into Unbounded Consumption.
Why this matters: These changes mirror how GenAI is deployed today: RAG is everywhere, agents have real privileges, and usage is metered. The list now maps better to the actual failure modes teams are experiencing. (OWASP marks this as the current, official Top 10 for 2025; several third-party summaries also describe it as the “v2.0” update.)
Why it’s helpful now
RAG became the default pattern. The new vector/embedding entry acknowledges attacks we keep seeing: poisoned documents, cross-tenant leaks in shared vector stores, and embedding inversion that can recover original text. If you use RAG, you have a new, named risk with concrete mitigations.
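To make the multi-tenant point concrete, here is a minimal sketch of tenant-scoped retrieval over a toy in-memory store; the store layout and cosine scoring stand in for whatever metadata filter your vector database actually exposes:

```python
import numpy as np

# Illustrative in-memory store: each chunk carries a tenant_id so queries
# can never return another customer's documents. Real vector DBs expose
# this as a metadata filter applied server-side, before ranking.
store = [
    {"tenant_id": "acme", "text": "Acme refund policy ...", "vec": np.random.rand(8)},
    {"tenant_id": "globex", "text": "Globex SSO config ...", "vec": np.random.rand(8)},
]

def retrieve(query_vec: np.ndarray, tenant_id: str, k: int = 3) -> list[str]:
    """Return top-k chunks, filtered to the caller's tenant BEFORE ranking."""
    candidates = [e for e in store if e["tenant_id"] == tenant_id]
    scored = sorted(
        candidates,
        key=lambda e: float(
            np.dot(e["vec"], query_vec)
            / (np.linalg.norm(e["vec"]) * np.linalg.norm(query_vec))
        ),
        reverse=True,
    )
    return [e["text"] for e in scored[:k]]

print(retrieve(np.random.rand(8), tenant_id="acme"))  # never leaks globex chunks
```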
Costs and extraction are part of the threat model. “Unbounded Consumption” explicitly ties runaway tokens to financial and IP harm (denial-of-wallet, model cloning), not just downtime. That’s a better fit for cloud-metered LLMs.
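One way to operationalize this, sketched under the assumption of a simple per-user daily token budget (`DAILY_TOKEN_BUDGET` and `charge_tokens` are hypothetical names, not an OWASP-prescribed control):

```python
import time
from collections import defaultdict

DAILY_TOKEN_BUDGET = 100_000  # illustrative per-user cap; tune to your pricing

# user_id -> [tokens_used, window_start_timestamp]
_usage = defaultdict(lambda: [0, time.time()])

def charge_tokens(user_id: str, requested: int) -> bool:
    """Return True iff the request fits the caller's 24h budget; fail closed."""
    used, window_start = _usage[user_id]
    if time.time() - window_start > 86_400:  # roll over to a fresh window
        used, window_start = 0, time.time()
    if used + requested > DAILY_TOKEN_BUDGET:
        return False                         # deny BEFORE the metered call
    _usage[user_id] = [used + requested, window_start]
    return True

if charge_tokens("user-42", requested=4_000):
    ...  # proceed with the LLM call
else:
    ...  # return 429 / degrade gracefully instead of burning budget
```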
Governance belongs in code, not prompts. OWASP is blunt: prompts will leak; don’t put secrets or access rules there. That guidance calls out a common anti-pattern.
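The anti-pattern is a system prompt saying “only admins may call delete_account”; the fix is enforcing that rule where the tool actually executes. A minimal sketch, with tool names and roles as placeholders:

```python
# Tool-to-role mapping lives in code, so a leaked or jailbroken system
# prompt cannot grant new privileges. Names here are placeholders.
TOOL_ACL = {
    "search_docs": {"viewer", "admin"},
    "delete_account": {"admin"},
}

def execute_tool(tool_name: str, args: dict, user_role: str) -> str:
    """Gate every model-requested tool call against the caller's real role."""
    if user_role not in TOOL_ACL.get(tool_name, set()):
        raise PermissionError(f"{user_role!r} may not call {tool_name!r}")
    # ... dispatch to the actual tool implementation here ...
    return f"{tool_name} executed"

execute_tool("search_docs", {}, user_role="viewer")       # allowed
# execute_tool("delete_account", {}, user_role="viewer")  # raises PermissionError
```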
Integrity is a security concern. Treating misinformation as a Top 10 risk elevates source-grounding, verification, and human oversight from “nice to have” to required controls.
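One lightweight control in that spirit is refusing to surface answers that don’t cite retrieved sources. A hedged sketch; the [doc-N] citation convention and the escalation message are assumptions, not OWASP requirements:

```python
import re

def grounded_or_escalate(answer: str, retrieved_ids: set[str]) -> str:
    """Pass answers that cite known sources; route the rest to human review."""
    cited = set(re.findall(r"\[(doc-\d+)\]", answer))
    if cited and cited <= retrieved_ids:
        return answer
    return "Escalated for human review: answer lacks verifiable citations."

print(grounded_or_escalate("Refunds take 5 days [doc-3].", {"doc-1", "doc-3"}))
print(grounded_or_escalate("Refunds are instant.", {"doc-1", "doc-3"}))
```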
NIST’s Generative AI Profile (NIST AI 600-1) also landed, giving leaders an official set of suggested actions mapped to the Govern/Map/Measure/Manage functions; it’s useful for policy sign-off and audits while you ship controls.
Further reading
OWASP GenAI Incident Response Guide 1.0: Incident scenarios including prompt leakage, vector poisoning, and denial-of-wallet: https://genai.owasp.org/resource/genai-incident-response-guide-1-0/
CISA Best Practices for Securing Data Used to Train & Operate AI Systems (PDF): Provenance, signing, and data-supply-chain integrity: https://media.defense.gov/2025/May/22/2003720601/-1/-1/0/CSI_AI_DATA_SECURITY.PDF
NIST AI 600-1 Generative AI Profile: Governance structure mapped to Govern/Map/Measure/Manage: https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf
OWASP Top 10 for LLM Applications (2025 PDF): Consolidated reference document: https://owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-v2025.pdf