Five Reasons You Should Take GenAI Security Seriously

Enabling AI adoption without ‘running with scissors’

Willy Leichter

May 1, 2024

Five Reasons You Should Take GenAI Security Seriously

Subscribe to AppSoc

Get the best, coolest, and latest in design and code delivered to your inbox each week.

The generative AI revolution is upon us and here to stay. The explosive growth of ChatGPT and other services has demonstrated that GenAI is powerful, fun, alarming, and unpredictable. But the rapid adoption of GenAI applications by businesses feels a bit like running with scissors (uniquely illustrated by another AI-generated blog image above). 

Businesses across sectors are seeing the potential of GenAI, experimenting with use cases, and in many cases, already rolling out initial public-facing applications. Competitive pressures and a healthy dose of FOMO (fear of missing out) is leading to unprecedented rapid adoption.

This genie is not going back in the bottle. While there are valid concerns about the implications of AI, trying to block or slow down creative applications of this game-changing technology will inevitably fail. Think back to the early 2000’s – when many companies tried to block users from freely surfing the web at work. Ten years later, as SaaS applications took off, some organizations tried to buck this trend because of security and compliance concerns. But both legitimate sentiments were quickly overrun by a stampede of users. 

That doesn’t mean we should be complacent about security. In the rush to deployment, it’s easy to ignore the new security risks that GenAI and LLM stacks will expose us to. Because trying to slow down the AI train will fail, we need to accelerate the development and adoption of new security technologies to keep up.

Following are five reasons why organizations must get serious about AI application security. Others will certainly emerge, but let’s start by solving for what we can imagine, while AI continues to expand our horizons.

1. Unprecedented speed of adoption

Analyst firm Gartner recently surveyed over 1,000 enterprise security professionals about their levels of investment in Generative AI initiatives.While the majority are still at the Investigation, or Piloting stage, the percentage of enterprises that already have gone live with GenAI solutions more than doubled in 4 months – from 10% to 21%. And that was 4 months ago, so it’s likely that number has doubled again. Also notable is that only around 5% of respondents were not considering AI initiatives.

Gartner: Enterprise Investments for Generative AI Initiatives

While the majority are still at the Investigation, or Piloting stage, the percentage of enterprises that already have gone live with GenAI solutions more than doubled in 4 months – from 10% to 21%. And that was 4 months ago, so it’s likely that number has doubled again. Also notable is that only around 5% of respondents were not considering AI initiatives.

This rapid adoption of AI will inevitably lead to security gaps, mistakes, and an expanded attack surface area. This has happened with almost every new wave of digital technology – it’s rolled out in a hurry, and then security scrambles to keep up. To quote Shakespeare, “They stumble that run fast.” (Yes, of course I got this quote from ChatGPT).

2. New players and a lack of visibility

AppSOC Blog: Hugging Face LLM Categories
Hugging Face LLM Categories

It’s understandable that this radically new technology is being driven by new types of experts - often bypassing established development and security channels. AI projects are typically driven by Data Scientists and other innovators, especially in the early days when they are experimenting with LLMs, data models, and capabilities. These projects are often pushed forward by business owners, who see the need to move rapidly to match or beat the competition. 

Naturally, innovators want to work with as much freedom as possible, and without the initial constraints of more closely monitored development processes. A prime example of this is Hugging Face, which provides open-source LLMs and datasets which developers can download for free, fine tune, and share back their results.

Growth of LLM Models available for free download from Hugging Face

Hugging Face is enormously popular, providing thousands of models across hundreds of categories, types, and topics. In the last year, the number of LLMs on Hugging Face has grown more than 5X, with over 620,000 models currently available for free download.

Obviously, there’s a downside of this ungoverned Wild West of sharing enormous datasets. Despite nominal security checks provided by Hugging Face, security experts have already identified hundreds of models that contain malware, can open backdoors, or exploit other security gaps. This type of digital playground is a natural place for attackers to plant malware or plan attacks.

3. AI applications are software applications 

As we discussed in a previous blog, the NSA, FBI, and other international cyber agencies recently provided pragmatic security guidance that emphasized the importance of not skipping fundamental security best practices, while we wrestle with the additional new challenges of LLMs. According to the document:

“Malicious actors targeting AI systems may use attack vectors unique to AI systems, as well as standard techniques used against traditional IT. Due to the large variety of attack vectors, defenses need to be diverse and comprehensive.”

Because these new applications are built on software stacks, they will inevitably contain vulnerabilities that need to be identified, prioritized, and remediated. Before putting GenAI applications into action, organizations need to make sure their IT setups follow strong security principles, like having a good governance structure, a well-thought-out architecture, and secure configurations.GenAI stacks also greatly expand the software supply chain risks that have plagued enterprises in the last few years. LLMs are built on huge datasets, which are often compiled and trained by outsiders. Until we have clear visibility and governance into the provenance of the data used in AI applications, we will be giving attackers a huge opportunity for malicious activity.

4. New attack vectors for LLMs

Fortunately, standards bodies like OWASP and MITRE have recognized these evolving risks and have created new versions of their frameworks adapted for LLM applications. While these frameworks will evolve over time, the fact that they have already been created is a clear sign of the high levels of concern about AI security.

The OWASP Top 10 for LLM Applications is largely new, with only a couple similarities with the broader OWASP Top 10. They have recognized a range of new types of vulnerabilities and risks that are unique to LLMs, such as Prompt Injection, Training Data Poisoning, Model Theft, and Overreliance. 

A screenshot of a computerDescription automatically generated
OWASP Top 10 for LLMs

OWASP also recognizes the increased risks of Supply Chain Vulnerabilities. While this is not new to LLMs (and should arguably be included in the original Top 10), the supply chain is considerably larger for GenAI, with thousands of models and datasets readily available, containing billions of data points. It is difficult to determine the source and maintain the integrity of this vast amount of data.

MITRE has also released the ATLAS Matrix which applies their long-standing kill chain methodology to GenAI. Most of the stages in the kill chain are similar to those in the MITRE ATT&CK matrix, starting with Reconnaissance and ending with Exfiltration and Impact.

However, ATLAS introduces two new stages unique to LLMs including ML Model Access, and ML Attack Staging. 

When you drill down into specific attack techniques only 12 of the 56 methods are inherited from MITRE ATT&CK, recognizing the unique nature of many emerging GenAI threats such as LLM Prompt Injection, LLM Jailbreaks, and Eroding LLM Model Integrity.  

AppSOC Blog: MITRE ATLAS Matrix
MITRE ATLAS Matrix for AI Threats

5. Existing attack vectors we can’t ignore

While there are many new documented vectors for attacking LLMs, and inevitably more will be discovered over time, they are also built on software applications, with custom code, imperfect developers, and porous supply chains. And these new AI applications don’t exist in isolation – they are connected to many core business applications to enhance existing functionality.

This means that all the existing software security frameworks and the new ones unique to LLMs apply. For example, given the newness of tools like AWS SageMaker, and the Azure OpenAI Service, security misconfigurations are a significant risk. There are countless examples of users leaving sensitive data exposed on AWS, or GitHub, because of basic misconfigurations.

AppSOC Blog: Comparison of OWASP Top 10 with OWASP Top 10 for LLMs
Comparison of OWASP Top 10 with OWASP Top 10 for LLMs

The MITRE ATT&CK matrix includes 235 techniques, the majority of which apply to both conventional software, and LLMs.

Join us at RSA 2024

During this year’s RSA conference, AppSOC and ThreatConnect will be hosting an Innovation Lunch, focusing on GenAI security challenges, as well as Cyber Risk Governance, and Risk Quantification. We will also be doing preview demonstrations of new tools to secure LLM applications. Space is limited so please reserve your spot in advance.