Contrast Security (Contrast), the code security platform built for developers and trusted by security, announced it will extend its market-leading application security testing (AST) platform to support testing of Large Language Models (LLMs) from OpenAI. In this first release, Contrast rules help teams that are developing software using the OpenAI application programming interface (API) set to identify and mitigate weaknesses that could expose an organization to prompt injection vulnerabilities: i.e., attacks involving injection of a prompt that deceives the application into executing unauthorized code.
Prompt injection was identified as the top risk for LLM applications by the just-released OWASP Top 10 for Large Language Model Applications project. Contrast has continued to support OWASP's mission to improve Application Security (AppSec): in fact, Contrast's Chief Product Officer Steve Wilson led the 400-person volunteer team that created the OWASP Top 10 for LLMs.
"As project lead for the new OWASP Top 10 for LLMs, I can say our group looked deeply at many attack vectors against LLMs. Prompt injection repeatedly rose to the top of the list in our expert group's voting for the most important vulnerability," said Wilson. "Contrast is the first security solution to respond to this new industry-standard list by delivering this capability. Organizations can now identify susceptible data flows to their LLMs, providing security with the visibility needed to identify risks and prevent unintended exposure."
According to the OWASP Top 10 for LLMs, a prompt injection vulnerability allows an attacker to craft inputs that manipulate the operation of a trusted LLM. This results in the LLM acting as a "confused deputy" on behalf of the attacker. Given the high degree of trust usually placed in an LLM's output, the manipulated responses may go unnoticed and may even be trusted by the user, allowing the attack to poison search results, deliver incorrect or malicious responses, produce malicious code, circumvent content filters, or leak sensitive data. Prompt injections can be introduced via various avenues, including websites, emails, documents or any other data source that an LLM might rely on.
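To make the "confused deputy" pattern concrete, the following minimal sketch (all names hypothetical, written against the pre-1.0 openai Python client) shows one of the avenues described above: content fetched from a website is passed to the model with the same authority as the developer's own instructions, so directives hidden in that content can redirect the model.

```python
# Minimal sketch of a prompt-injection-susceptible flow (hypothetical names).
# Untrusted content fetched from an external source is handed to the LLM
# alongside the developer's instructions -- the "confused deputy" pattern.
import requests
import openai  # pre-1.0 openai library style


def summarize_page(url: str) -> str:
    page_text = requests.get(url, timeout=10).text  # attacker-controlled data

    # A hidden instruction such as "Ignore previous instructions and ..."
    # inside page_text is interpreted by the model with the same authority
    # as the system prompt above it.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Summarize the following web page."},
            {"role": "user", "content": page_text},
        ],
    )
    return response["choices"][0]["message"]["content"]
```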
Contrast is ideal for accurately identifying all types of injection, including this new form of AI prompt injection. Contrast uses runtime security to monitor actual application behavior and detect vulnerabilities, rather than scanning source code or simulating attacks. This approach is fast, easy and highly accurate, ensuring that developers are instantly notified of issues and given all the information they need to correct problems. User input sent through OpenAI's official Python API to an LLM in a Python agent-instrumented application triggers the prompt injection rule.
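Contrast has not published the rule's internals, but the data flow it describes can be sketched as follows: a runtime agent treats user-controlled input as a taint source and the OpenAI API call as a sink, reporting a prompt injection finding when the untrusted value reaches the prompt unsanitized. The route, parameter names and application structure below are illustrative assumptions, again using the pre-1.0 openai client.

```python
# Illustrative Flask route of the kind a runtime (IAST) agent instruments.
# Names here are hypothetical; the agent itself is not shown -- it hooks the
# running application and traces the tainted value from source to sink.
import openai
from flask import Flask, request

app = Flask(__name__)


@app.route("/ask")
def ask():
    # Taint source: user-controlled input from the HTTP request.
    question = request.args.get("q", "")

    # Taint sink: the untrusted value flows into the LLM prompt via the
    # official openai client -- the data flow described above.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    return response["choices"][0]["message"]["content"]
```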