Understanding the OWASP Top 10 for LLMs

LLMs pose new and unique security risks

AppSOC Editorial Team

June 28, 2024

The Open Web Application Security Project (OWASP) has long been a trusted authority in identifying critical security risks for web applications through its original OWASP Top 10 list. This list has been instrumental in highlighting the most common and severe threats to web security, guiding developers and organizations in strengthening their defenses. With the rise of Large Language Models (LLMs) such as GPT-4, new and unique security challenges have emerged, prompting OWASP to develop a specialized Top 10 list specifically for these advanced AI systems. The OWASP Top 10 for LLMs focuses on the unique vulnerabilities and threats posed by these models, ensuring that organizations can adequately protect and manage their AI-driven applications.

The following is a summary of each of these risks, along with the recommended mitigation steps to ensure the security and integrity of LLM applications.

LLM01: Prompt Injection

Description: Manipulating LLMs via crafted inputs can lead to unauthorized access, data breaches, and compromised decision-making.

Explanation: Prompt injection attacks involve carefully crafted inputs that manipulate the LLM to perform unintended actions or expose sensitive data. For example, an attacker could input a prompt that tricks the model into revealing confidential information or executing malicious code. These attacks exploit the model's reliance on input data, highlighting the need for robust input validation mechanisms.

Mitigation: Implement strict input validation and sanitization techniques to filter out potentially malicious inputs. Use context-aware input processing to detect and neutralize injection attempts. Regularly audit and update input handling mechanisms to stay ahead of emerging threats.
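As an illustrative sketch of the input-screening idea, the snippet below checks user prompts against a small deny-list of patterns commonly seen in injection attempts. The patterns are hypothetical examples, not a complete defense; production systems layer pattern checks with context-aware classifiers.

```python
import re

# Hypothetical deny-list of phrases often seen in injection attempts.
# A real system would maintain and update a far richer set of signals.
INJECTION_PATTERNS = [
    re.compile(r"ignore\b.*\binstructions", re.I | re.S),
    re.compile(r"system prompt", re.I),
    re.compile(r"you are now\b", re.I),
]

def screen_prompt(user_input: str) -> bool:
    """Return True if the input passes screening, False if it matches a known injection pattern."""
    return not any(p.search(user_input) for p in INJECTION_PATTERNS)
```

Pattern matching alone is easy to evade, which is why the audit-and-update step above matters: the deny-list must evolve as new attack phrasings emerge.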

LLM02: Insecure Output Handling

Description: Neglecting to validate LLM outputs may lead to downstream security exploits, including code execution that compromises systems and exposes data.

Explanation: Insecure output handling occurs when the outputs generated by an LLM are not properly validated or sanitized before being used in other systems. This can lead to injection attacks, where malicious content generated by the LLM is executed by downstream systems, potentially causing data breaches or system compromises. Proper output handling is critical to prevent these cascading security issues.

Mitigation: Validate and sanitize all outputs before passing them downstream. Implement security policies that restrict the execution of code derived from LLM outputs. Use output encoding techniques to prevent the injection of malicious content into systems that process these outputs.
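A minimal example of the output-encoding step: before model output is embedded in an HTML page, it is escaped so that any markup the model produced is rendered as text rather than executed by the browser.

```python
import html

def render_llm_output(raw_output: str) -> str:
    """Encode model output for an HTML context, so markup the model
    generated is displayed as text instead of being interpreted."""
    return html.escape(raw_output)
```

The same principle applies to any downstream context (SQL, shell, templating): encode or parameterize for the destination, and never pass raw model output to an interpreter.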

LLM03: Training Data Poisoning

Description: Tampered training data can impair LLM models, leading to responses that may compromise security, accuracy, or ethical behavior.

Explanation: Training data poisoning involves introducing malicious or biased data into the training set, causing the LLM to learn incorrect or harmful patterns. This can result in models that produce biased, inaccurate, or unethical outputs. Poisoned training data can undermine the trustworthiness of the model and compromise its utility in critical applications.

Mitigation: Employ robust data validation and cleaning processes to ensure the integrity of training data. Use anomaly detection systems to identify and filter out suspicious data. Maintain a secure and transparent data pipeline to prevent unauthorized modifications.
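To make the anomaly-detection idea concrete, here is a crude sketch that flags training records whose length deviates sharply from the corpus mean. The z-score threshold is an assumed parameter; real pipelines combine many such statistical and semantic checks.

```python
from statistics import mean, stdev

def flag_outliers(record_lengths: list[int], z_threshold: float = 3.0) -> list[int]:
    """Return indices of records whose length is a statistical outlier —
    one crude signal that a record may not belong in the training set."""
    mu = mean(record_lengths)
    sigma = stdev(record_lengths)
    if sigma == 0:
        return []
    return [i for i, n in enumerate(record_lengths)
            if abs(n - mu) / sigma > z_threshold]
```

Length is only a stand-in here: the same pattern applies to any per-record feature (vocabulary, label distribution, source domain) that poisoned data might skew.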

LLM04: Model Denial of Service

Description: Overloading LLMs with resource-heavy operations can cause service disruptions and increased costs.

Explanation: Model Denial of Service (DoS) attacks aim to overwhelm the computational resources of an LLM, rendering it unavailable to legitimate users. These attacks can be costly due to the high resource consumption and can disrupt critical services that rely on the model's availability. Preventing such attacks involves managing resource usage and ensuring the model can handle unexpected loads.

Mitigation: Implement rate limiting and resource management policies to prevent abuse. Use load balancing and autoscaling to manage resource allocation efficiently. Monitor system performance and set up alerts for unusual spikes in resource usage.
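The rate-limiting recommendation can be sketched as a classic token bucket, shown below with illustrative parameters. Each inference request consumes a token; tokens refill at a fixed rate, so sustained floods are throttled while short bursts are tolerated.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter for inference requests (illustrative)."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate            # tokens replenished per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; refuse the request otherwise."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In practice this sits in front of the model per API key or per client, alongside the autoscaling and alerting mentioned above.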

LLM05: Supply Chain Vulnerabilities

Description: Dependence on compromised components, services, or datasets can undermine system integrity, causing data breaches and system failures.

Explanation: Supply chain vulnerabilities arise when LLMs depend on third-party components, services, or datasets that may be compromised. These dependencies can introduce vulnerabilities that propagate through the system, leading to data breaches or operational failures. Ensuring the integrity of all components and services in the supply chain is critical to maintaining the security of LLMs.

Mitigation: Conduct thorough security assessments of all third-party components and services. Regularly update and patch all dependencies. Use secure channels for data acquisition and verify the integrity of datasets before use.
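One concrete form of dataset integrity verification is comparing a downloaded artifact against a published checksum before admitting it into the pipeline, as sketched below.

```python
import hashlib

def verify_checksum(data: bytes, expected_sha256: str) -> bool:
    """Compare an artifact (dataset, model weights, plugin package) against
    a published SHA-256 digest before it enters the training or serving pipeline."""
    return hashlib.sha256(data).hexdigest() == expected_sha256
```

Checksums only establish that what arrived is what was published; assessing whether the publisher itself is trustworthy still requires the third-party security reviews described above.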

LLM06: Sensitive Information Disclosure

Description: Failure to protect against disclosure of sensitive information in LLM outputs can result in legal consequences or a loss of competitive advantage.

Explanation: Sensitive information disclosure occurs when an LLM inadvertently includes confidential or personal data in its outputs. This can happen if the training data contains such information, and the model learns to reproduce it. Protecting against this risk is essential to avoid legal liabilities and protect sensitive data.

Mitigation: Implement differential privacy techniques and data anonymization during the training phase. Conduct regular audits of model outputs to identify and remove instances of sensitive information. Use output filters to detect and scrub sensitive information before delivery.
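As a sketch of the output-filtering step, the snippet below scrubs two common PII formats from model output before delivery. The patterns are simplified examples; production systems typically use dedicated PII-detection services with far broader coverage.

```python
import re

# Simplified, illustrative patterns — real deployments need much broader coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace detected PII in model output with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

Output scrubbing is the last line of defense; the anonymization and differential-privacy measures applied during training reduce how often it is needed.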

LLM07: Insecure Plugin Design

Description: LLM plugins that process untrusted inputs with insufficient access control risk severe exploits such as remote code execution.

Explanation: Insecure plugin design refers to vulnerabilities in plugins that extend the functionality of LLMs. If these plugins process untrusted inputs without proper validation or lack sufficient access controls, they can become entry points for attacks, such as remote code execution. Ensuring the security of plugins is vital to maintaining the overall security of the LLM system.

Mitigation: Design plugins with secure coding practices, including input validation and access control mechanisms. Conduct regular security audits and penetration testing on plugins. Use sandboxing techniques to isolate plugin execution from critical system resources.
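The access-control part of the mitigation can be illustrated with a deny-by-default permission check: each plugin is granted an explicit set of operations, and anything not listed is refused. The plugin and operation names below are hypothetical.

```python
# Hypothetical allowlist mapping each plugin to the operations it may perform.
PLUGIN_PERMISSIONS: dict[str, set[str]] = {
    "calendar": {"read_events"},
    "search": {"web_query"},
}

def authorize(plugin: str, operation: str) -> bool:
    """Deny by default: a plugin may invoke only operations explicitly granted to it."""
    return operation in PLUGIN_PERMISSIONS.get(plugin, set())
```

Combined with sandboxed execution, this limits the blast radius when a plugin is fed untrusted input: even a compromised plugin can only attempt the operations on its allowlist.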

LLM08: Excessive Agency

Description: Granting LLMs unchecked autonomy to take action can lead to unintended consequences, jeopardizing reliability, privacy, and trust.

Explanation: Excessive agency occurs when LLMs are given too much autonomy to make decisions or take actions without sufficient oversight. This can lead to unintended and potentially harmful consequences, undermining the reliability and trustworthiness of the system. Balancing autonomy with oversight is crucial to prevent misuse.

Mitigation: Implement strict governance policies and oversight for autonomous actions taken by LLMs. Use human-in-the-loop approaches to monitor and approve critical decisions made by the models. Regularly review and update autonomy guidelines to align with evolving risks.
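A minimal human-in-the-loop gate might look like the sketch below: each action carries a risk score, and anything at or above a threshold is blocked unless a human approver signs off. The actions, scores, and threshold are all illustrative assumptions.

```python
# Hypothetical risk tiers; unknown actions default to requiring approval.
RISK_SCORES = {"draft_reply": 1, "send_email": 2, "wire_transfer": 5}
APPROVAL_THRESHOLD = 3

def execute(action: str, approver=None) -> str:
    """Run low-risk actions autonomously; route high-risk or unknown
    actions through a human approver callback before executing."""
    score = RISK_SCORES.get(action, APPROVAL_THRESHOLD)
    if score >= APPROVAL_THRESHOLD:
        if approver is None or not approver(action):
            return "blocked"
    return "executed"
```

Defaulting unknown actions to the approval path is the key design choice: an agent gaining a new capability should never silently gain autonomy with it.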

LLM09: Overreliance

Description: Failing to critically assess LLM outputs can lead to compromised decision-making, security vulnerabilities, and legal liabilities.

Explanation: Overreliance on LLMs can occur when users blindly trust the outputs of these models without critical assessment. This can lead to poor decision-making, as well as security and legal issues if the outputs are incorrect or biased. Encouraging critical evaluation of LLM outputs is essential to ensure their reliability.

Mitigation: Encourage critical evaluation of LLM outputs by users. Provide training and guidelines on interpreting and validating model responses. Use ensemble methods and cross-validation to ensure the reliability of critical outputs.
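One simple automated check in this spirit is a self-consistency vote: sample the model several times and accept an answer only when a clear majority agrees, otherwise escalate to human review. The agreement threshold below is an assumed parameter.

```python
from collections import Counter

def consensus(answers: list[str], min_agreement: float = 0.6):
    """Accept an answer only when a clear majority of independent samples
    agree; return None (escalate to a human) otherwise."""
    top_answer, count = Counter(answers).most_common(1)[0]
    if count / len(answers) >= min_agreement:
        return top_answer
    return None
```

Agreement among samples does not guarantee correctness (a model can be consistently wrong), so this complements rather than replaces human judgment on critical outputs.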

LLM10: Model Theft

Description: Unauthorized access to proprietary large language models risks model theft, loss of competitive advantage, and exposure of sensitive information.

Explanation: Model theft involves unauthorized access to and extraction of proprietary LLMs. This can result in the loss of competitive advantage and the exposure of sensitive information embedded in the model. Protecting models from theft is essential to safeguard intellectual property and sensitive data.

Mitigation: Protect models using encryption both at rest and in transit. Implement access controls and monitor for any signs of unauthorized model extraction attempts. Use watermarking techniques to trace the use of proprietary models and detect theft.
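As a sketch of the extraction-monitoring idea, the class below tracks per-client query volume against a daily budget: sustained high-volume, systematic querying is a common signature of model-extraction attacks. The budget value is an illustrative assumption.

```python
from collections import defaultdict

class ExtractionMonitor:
    """Flag clients whose query volume exceeds a daily budget — one crude
    signal of a possible model-extraction attempt (illustrative threshold)."""

    def __init__(self, daily_budget: int = 1000):
        self.daily_budget = daily_budget
        self.counts: dict[str, int] = defaultdict(int)

    def record(self, client_id: str) -> bool:
        """Record one query; return False once the client exceeds its budget."""
        self.counts[client_id] += 1
        return self.counts[client_id] <= self.daily_budget
```

Volume is only one signal; more sophisticated monitoring also looks at query diversity and coverage, since extraction attacks tend to probe the input space systematically.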


Understanding and mitigating these OWASP Top 10 risks for Large Language Models is crucial for maintaining the security, fairness, and reliability of AI systems. By implementing the recommended mitigation steps, organizations can protect their LLMs from a wide range of threats, ensuring they are used safely and ethically.