NSA Offers Timely Guidance on Deploying AI Systems Securely

AI FOMO can lead to security gaps and shortcuts

Willy Leichter

April 23, 2024


The recent explosion of Generative AI, built on Large Language Models (LLMs), into the IT mainstream has many security professionals concerned. This movement goes far beyond casual use of ChatGPT: many enterprises are adding GenAI capabilities to core business applications. But as with any rapid technology shift, AI FOMO will inevitably lead to security mistakes and shortcuts.

While many people are raising broader concerns about AI, not enough are focusing on security fundamentals. GenAI brings in new systems, massive amounts of data, and new players – often data scientists, who are not well versed in security. At the same time, many security professionals are only vaguely aware of fast-moving AI projects.

GenAI initiatives are rapidly moving from theory to deployment. A recent Gartner survey found that while many organizations are still investigating or piloting GenAI capabilities, the number of businesses with LLMs in production is growing fast, from 10% in September 2023 to 21% in January 2024, more than doubling in four months. That survey is itself four months old, so the number has likely doubled again. Anecdotally, we’ve heard from enterprise CISOs that their teams already have hundreds to thousands of LLMs in-house, with little governance or security oversight.

Into this maelstrom, it was encouraging to see the NSA (in collaboration with CISA, the FBI, and cybersecurity agencies from Australia, Canada, New Zealand, and the UK) publish a pragmatic guide last week, Deploying AI Systems Securely. This document should help bring clarity where there is currently a lot of talk and confusion. It focuses squarely on organizations that deploy GenAI applications, typically while bringing in training and model data from outside sources.

AI systems are software systems

The most refreshing line from the report is that “AI systems are software systems.” That sounds obvious, but when GenAI tools are treated as entirely unique, many of the security basics can be overlooked. AI certainly introduces many new elements, but these still must be secured from the ground up. 

Deploying new AI systems securely requires careful attention to how they are set up and configured. Much depends on the complexity of the AI system, the resources it requires, and whether the infrastructure is on-premises, in the cloud, or a mix of both.

A significant focus of the report is on getting the security basics right before deploying GenAI tools. Organizations should work closely with IT to ensure that AI systems are set up in environments that meet their IT standards. This means understanding the organization’s risk tolerance, confirming that the AI system fits within that risk framework, and making sure everyone knows their role and responsibilities for AI security.

The report also stresses the importance of a solid security framework. Before putting GenAI systems into action, organizations need to make sure their IT setups follow strong security principles, like having a good governance structure, a well-thought-out architecture, and secure configurations. Special attention should be given to securing the areas where the IT environment and AI systems meet. Recommendations include using access controls to manage who can interact with AI model weights and limiting access to a small number of authorized people.
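As a minimal illustration of that last recommendation, here is a hypothetical sketch (illustrative paths and account names, Python standard library only) that locks a model-weights directory down to owner-only permissions and refuses to run under an unapproved account. In practice this control would usually be enforced through the platform’s IAM or filesystem ACLs rather than a script.

```python
import getpass
import stat
from pathlib import Path

# Hypothetical location of production model weights; adjust for your environment.
WEIGHTS_DIR = Path("/srv/models/llm-prod/weights")

# Small, explicitly approved set of accounts allowed to manage the weights.
AUTHORIZED_USERS = {"ml-deploy", "ml-admin"}


def check_current_user() -> None:
    """Fail closed if the process is not running as an authorized account."""
    user = getpass.getuser()
    if user not in AUTHORIZED_USERS:
        raise PermissionError(f"{user} is not authorized to manage model weights")


def lock_down_weights(weights_dir: Path) -> None:
    """Remove group/other access so only the owning service account can read the weights."""
    for path in weights_dir.rglob("*"):
        if path.is_file():
            # 0o600: owner read/write only; no group or world access.
            path.chmod(stat.S_IRUSR | stat.S_IWUSR)
    # 0o700 on the directory itself.
    weights_dir.chmod(stat.S_IRWXU)


if __name__ == "__main__":
    check_current_user()
    lock_down_weights(WEIGHTS_DIR)
    print(f"Restricted permissions on weights under {WEIGHTS_DIR}")
```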

Beyond initial setup, the deployment of AI systems should include ongoing protection strategies, using advanced threat detection and response mechanisms across the whole business. This includes 24/7 protection and strong cybersecurity practices integrated at every stage of the AI system’s life cycle.
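What “ongoing protection” looks like will vary by organization, but one common building block is instrumenting every LLM call so it can feed existing detection and response tooling. The sketch below is hypothetical (the stand-in model function, keyword list, and log format are illustrative assumptions, not from the NSA guidance); it simply logs prompts, responses, and basic anomaly signals for downstream monitoring.

```python
import logging
import time
from typing import Callable

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("llm-monitor")

# Illustrative markers only; real deployments would use proper detection rules.
SUSPICIOUS_MARKERS = ("ignore previous instructions", "system prompt", "exfiltrate")


def monitored_call(model_call: Callable[[str], str], prompt: str, user: str) -> str:
    """Wrap an LLM invocation with logging and simple anomaly flags for the SOC."""
    started = time.time()
    response = model_call(prompt)
    elapsed = time.time() - started

    flags = [m for m in SUSPICIOUS_MARKERS if m in prompt.lower()]
    log.info(
        "user=%s latency=%.2fs prompt_len=%d response_len=%d flags=%s",
        user, elapsed, len(prompt), len(response), flags or "none",
    )
    if flags:
        # In production this would raise an alert in your SIEM / detection pipeline.
        log.warning("Possible prompt-injection attempt by user=%s: %s", user, flags)
    return response


if __name__ == "__main__":
    # Stand-in model for demonstration; replace with your actual inference client.
    echo_model = lambda p: f"(model output for: {p[:40]}...)"
    monitored_call(echo_model, "Summarize this quarter's sales figures", user="analyst-1")
```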

Extending application security tools and best practices 

Part of the challenge with AI hype is that many security vendors are treating GenAI as an entirely new field that requires completely novel security tools. While we will certainly need to develop new detection techniques to find specific issues within LLMs, the overall requirements are a direct extension of what Application Security Posture Management (ASPM) tools already provide: ingesting security data from multiple sources, tracking vulnerabilities across the product lifecycle, prioritizing issues based on business-specific context, and automating the remediation process so critical issues don’t fall through the cracks and are resolved as quickly as possible.
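To make that concrete, here is a small, hypothetical sketch of the “prioritize by business context” step: it blends a raw severity score with context an ASPM tool would already hold (internet exposure, data sensitivity, whether the finding sits in an LLM component) to order findings. The fields and weights are illustrative assumptions, not any vendor’s actual scoring model.

```python
from dataclasses import dataclass


@dataclass
class Finding:
    title: str
    severity: float           # e.g. CVSS base score, 0-10
    internet_facing: bool
    handles_sensitive_data: bool
    is_llm_component: bool     # finding sits in a GenAI/LLM component


def business_risk(f: Finding) -> float:
    """Blend raw severity with business context; multipliers are illustrative only."""
    score = f.severity
    if f.internet_facing:
        score *= 1.5
    if f.handles_sensitive_data:
        score *= 1.3
    if f.is_llm_component:
        score *= 1.2  # newer, less-reviewed attack surface
    return round(score, 2)


findings = [
    Finding("Outdated inference library", 7.5, True, True, True),
    Finding("Verbose error messages", 4.0, False, False, False),
    Finding("Unpinned model dependency", 6.0, False, True, True),
]

for f in sorted(findings, key=business_risk, reverse=True):
    print(f"{business_risk(f):6.2f}  {f.title}")
```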

Rather than getting overwhelmed by the hype and newness of GenAI, it’s time to take a deep breath, apply your best application security, vulnerability management, and governance tools and processes, and thoroughly scrutinize AI software and LLM data supply chains. Building on this foundation, while applying new detection capabilities unique to AI, should provide visibility, clarity, and peace of mind, as we expand application security tools for the GenAI era.
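One simple, concrete piece of that supply-chain scrutiny is verifying that model and data artifacts actually match what was reviewed and approved. The sketch below assumes a hypothetical JSON manifest of approved SHA-256 digests (the file name and format are illustrative) and checks artifacts against it before they are allowed into a deployment.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical manifest mapping artifact paths to approved SHA-256 digests,
# produced when the model or dataset was originally reviewed and approved.
MANIFEST = Path("approved_artifacts.json")


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_artifacts(manifest_path: Path) -> bool:
    """Return True only if every listed artifact exists and matches its approved digest."""
    expected = json.loads(manifest_path.read_text())
    ok = True
    for rel_path, digest in expected.items():
        artifact = Path(rel_path)
        if not artifact.exists():
            print(f"MISSING   {rel_path}")
            ok = False
        elif sha256_of(artifact) != digest:
            print(f"TAMPERED  {rel_path}")
            ok = False
        else:
            print(f"OK        {rel_path}")
    return ok


if __name__ == "__main__":
    if not verify_artifacts(MANIFEST):
        raise SystemExit("Artifact verification failed; refusing to deploy.")
```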