
What’s new in OWASP’s 2025 GenAI/LLM Top 10 and why it matters now

The 2025 edition of OWASP’s GenAI/LLM Top 10 shifts the focus from “prompt tricks” to the day-to-day realities of how teams actually ship GenAI: retrieval (RAG) pipelines, agent tooling, and usage patterns that can spike costs or leak internals. Three notable additions, System Prompt Leakage (LLM07), Vector & Embedding Weaknesses (LLM08), and Unbounded Consumption (LLM10), reflect incidents builders are seeing in production. And Misinformation (LLM09) is now called out explicitly as its own security concern.


What’s new in 2025 (vs. 2023/24)


Added:


  • LLM07 System Prompt Leakage recognizes that prompts often contain secrets or governance rules and can be extracted or inferred; OWASP stresses that prompts shouldn’t hold credentials or authorization logic.

  • LLM08 Vector & Embedding Weaknesses covers RAG-specific issues like embedding inversion, poisoned corpora, and multi-tenant vector store leakage.

  • LLM09 Misinformation treats hallucination/false authority as a security exposure with operational and legal consequences.

  • LLM10 Unbounded Consumption expands “DoS” into a broader class including denial-of-wallet and model extraction via excessive queries.
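To make the LLM10 mitigation concrete, here is a minimal sketch of a per-user sliding-window token budget enforced before any request reaches the model. The class name, limits, and bookkeeping are illustrative, not part of any OWASP guidance or real library API:

```python
import time
from collections import defaultdict

class TokenBudget:
    """Per-user sliding-window budget that bounds LLM spend, a basic
    control against Unbounded Consumption (denial-of-wallet, bulk
    extraction via excessive queries). Hypothetical sketch."""

    def __init__(self, max_tokens_per_hour: int = 50_000):
        self.max_tokens = max_tokens_per_hour
        self.usage = defaultdict(list)  # user_id -> [(timestamp, tokens)]

    def allow(self, user_id: str, requested_tokens: int) -> bool:
        cutoff = time.time() - 3600
        # Drop entries older than one hour, then sum what remains.
        window = [(t, n) for t, n in self.usage[user_id] if t > cutoff]
        self.usage[user_id] = window
        spent = sum(n for _, n in window)
        if spent + requested_tokens > self.max_tokens:
            return False  # reject or queue instead of calling the model
        self.usage[user_id].append((time.time(), requested_tokens))
        return True
```

In production you would back this with shared storage (e.g. Redis) and pair it with per-request output caps, but the point stands: the limit lives in infrastructure, where a single runaway client cannot exceed it.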


Renamed/reshaped:


  • LLM04 Data & Model Poisoning broadens the former “Training Data Poisoning” to cover fine-tuning and embedding data as well.

  • LLM05 Improper Output Handling renames “Insecure Output Handling,” keeping the focus on validating and sanitizing model output before downstream use.

  • LLM06 Excessive Agency is reshaped for agentic systems, where models are granted tools, permissions, and autonomy.

Retired or absorbed:


  • Insecure Plugin Design, Overreliance, and Model Theft from 2023/24 no longer appear as standalone items in 2025; “overreliance” is addressed under Misinformation, while model theft and DoS threats are folded into Unbounded Consumption.


Why this matters: These changes mirror how GenAI is deployed today: RAG is everywhere, agents have real privileges, and usage is metered. The list now maps better to the actual failure modes teams are experiencing. (OWASP marks this as the current, official Top 10 for 2025; several third-party summaries also describe this as the “v2.0” update.)


Why it’s helpful now


  • RAG became the default pattern. The new vector/embedding entry acknowledges attacks we keep seeing: poisoned documents, cross-tenant leaks in shared vector stores, and embedding inversion that can recover original text. If you use RAG, you have a new, named risk with concrete mitigations.

  • Costs and extraction are part of the threat model. “Unbounded Consumption” explicitly ties runaway tokens to financial and IP harm (DoW, model cloning), not just downtime. That’s a better fit for cloud-metered LLMs.

  • Governance belongs in code, not prompts. OWASP is blunt: prompts will leak; don’t put secrets or access rules there. That guidance clarifies a common anti-pattern.

  • Integrity is a security concern. Treating misinformation as a Top 10 risk elevates source-grounding, verification, and human oversight from “nice to have” to required controls.
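The “governance belongs in code, not prompts” point can be sketched directly: enforce tool authorization in the dispatch layer, so the check still holds even if the system prompt leaks. The role map and tool names below are made up for illustration:

```python
# Authorization lives in application code, not in the system prompt.
# Hypothetical role-to-tool mapping; adapt to your own policy store.
ROLE_TOOLS = {
    "viewer": {"search_docs"},
    "admin": {"search_docs", "delete_record"},
}

def dispatch_tool(user_role: str, tool_name: str, run_tool):
    """Gate every model-requested tool call against the caller's role.
    A leaked or manipulated prompt cannot bypass this check, because
    the model never sees or controls it."""
    if tool_name not in ROLE_TOOLS.get(user_role, set()):
        raise PermissionError(f"role {user_role!r} may not call {tool_name!r}")
    return run_tool(tool_name)
```

The design choice is the same one OWASP is pushing: treat the model as an untrusted suggester of actions, and keep the deny/allow decision on the server side where prompt injection and prompt leakage cannot reach it.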


NIST’s Generative AI Profile (NIST AI 600-1) also landed, giving leaders an official set of suggested actions mapped to the Govern/Map/Measure/Manage functions, useful for policy sign-off and audits while you ship controls.




© 2025 Cyber Institute. All Rights Reserved.
