OWASP officially released the 2025 version of the “OWASP Top 10 for LLM Applications” on Nov 18, 2024.
This guide helps organizations to identify and mitigate the top security risks in LLM applications, such as chatbots and RAG systems. OWASP published the original Top 10 for LLMs in 2023, and the 2025 version has been updated to cover the latest LLM risks and application architectures.
Background on the OWASP Top 10 for LLMs
OWASP is a non-profit foundation and community focused on software security, and is most well-known for the original “OWASP Top Ten”, an industry standard for web application security.
Similar to the original OWASP Top Ten, which identifies the top security risks for web developers, the Top 10 for LLMs identifies the top security risks for LLM application developers, such as prompt injection, sensitive information disclosure, supply chain risks, and more. This list was developed by hundreds of expert contributors over the past two years.
Note: Although the name of the list is often shortened to “OWASP Top 10 for LLMs”, it primarily targets LLM application developers, rather than LLM foundation model developers.
Improvements in the 2025 Version
The OWASP Top 10 for LLMs is updated every year, driven by the rapid evolution of LLM capabilities, use cases, and risks (compared to every 4 years for the OWASP Top Ten for web applications).
The 2025 OWASP Top 10 for LLMs contains several improvements compared to the previous “v1.1” version, described below:
Additions
The 2025 list includes three new vulnerabilities to reflect the current state-of-the-art applications and attacks:
- LLM07:2025 System Prompt Leakage
To mitigate the risk of leaking their system prompt in LLM responses, developers should refrain from including sensitive information in the system prompt, and also implement evaluations and guardrails as appropriate. - LLM08:2025 Vector and Embedding Weaknesses
Developers should integrate access control rules into RAG systems so LLM responses don’t leak information from documents that a user isn’t authorized to view. - LLM09:2025 Misinformation
LLM-generated text may not be factually accurate, so developers should utilize techniques such as RAG, and conduct evaluations using metrics such as factual consistency, to minimize the risk of misinformation.
Removals
Additionally, several vulnerabilities have been removed from the 2025 list:
- LLM07: Insecure Plugin Design is removed, but is partially referred to in LLM06:2025 Excessive Agency
- LLM09: Overreliance is removed, but is partially referred to in LLM09:2025 Misinformation
- LLM10: Model Theft is removed, but is partially referred to in LLM10:2025 Unbounded Consumption
Changes
Finally, the 2025 list also renames and expands the scope of certain items:
- LLM02: Insecure Output Handling is renamed to LLM05:2025 Improper Output Handling
- LLM03: Training Data Poisoning is expanded to LLM04:2025 Data and Model Poisoning
- LLM04: Denial of Service is expanded to LLM10:2025 Unbounded Consumption
Securing your LLM applications with Citadel AI
Citadel Lens is our software solution for organizations to evaluate, monitor, and improve the security and safety of their LLM applications.
For example, Lens automatically scans the security of LLM responses with an extensive library of built-in and custom metrics, covering risks such as LLM02:2025 Sensitive Information Disclosure, LLM07:2025 System Prompt Leakage, and LLM09:2025 Misinformation.
Additionally, as described in LLM01:2025 Prompt Injection, Lens can automatically simulate jailbreak attacks to test your LLM application against adversarial users.
Beyond just technical testing, Lens also provides Model Cards and Data Cards to help you to track the provenance of both internal and third-party models/datasets, helping you manage the risks described in LLM03:2025 Supply Chain.
Leading organizations utilize Citadel Lens to kickstart, scale, and align their AI security layer against industry standards such as the OWASP Top 10 for LLMs, EU AI Act, ISO 42001, and more. To see a demo of how Citadel Lens can secure your LLM applications, please contact us at any time.