Zenity, the leader in securing enterprise copilots and low-code development, has announced a new security framework, the GenAI Attacks Matrix. The open-source project, inspired by MITRE ATLAS and spearheaded by Zenity with help from many of the world’s leading security researchers, is focused on attacks that target the users of various GenAI systems, examining how AI systems interact with and on behalf of their users, and vice versa.
Also Read: BigID Continues Growth in APAC Region Through M.Tech Alliance
While many well-known security frameworks have historically taken an endpoint-driven approach, with the introduction of enterprise copilots and GenAI systems, security teams need a purpose-built framework to help them defend against the ensuing new wave of risks. This project’s scope includes any system that uses GenAI, allows for GenAI to make decisions, and interfaces with or is operated by users (or on their behalf, in the case of agentic AI) and is built towards helping security practitioners understand and contextualize their risk. This explicitly includes licensable AI systems such as ChatGPT Enterprise, GitHub Copilot or Microsoft 365 Copilot, extensions and agents anyone can build with low-code/no-code tools, and custom AI applications built for specific use cases.
Zenity co-founder and CTOÂ Michael Bargury, said, “What we’re hoping to do here is bring the leading AI security researchers together in order to take a focused approach to GenAI systems. Our aim is to collectively document discovered attack techniques in order to clarify the threats to help enterprises devise corresponding mitigation and risk management strategies. AI changes every day, and it is critical that we share information about potential attacks as soon as they are discovered, before they are observed in the wild. I am proud to announce this project and look forward to collaborating with the security community.”
Bargury, who also founded the OWASP Low-Code/No-Code Top 10, realized that as the gold rush to place AI in the hands of all business users surges on, it is clear that security for AI is still a great unknown. By letting GenAI act on behalf of business users, enterprises have unwillingly opened up new attack pathways for adversaries to target powerful systems that inherently contain access to loads of corporate and sensitive data and are curious by nature. Attackers are exploiting these systems with promptware, which is content with hidden malicious instructions that gets picked up and acted on by AI apps.
This project aspires to lay the foundation for security teams that need to adopt a defense-in-depth approach focused on malicious behavior rather than malicious static content. The primary goal of this project is to document and share knowledge of those behaviors and to look beyond prompt injection at the entire lifecycle of a promptware attack.
[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]