CIO Influence
CIO Influence News Machine Learning Security

Lakera Launches the AI Model Risk Index: A New Standard for Evaluating LLM Security

Lakera Launches the AI Model Risk Index: A New Standard for Evaluating LLM Security

Lakera Logo

Lakera, the world’s leading security platform for generative AI applications, today announced the release of the AI Model Risk Index, the most comprehensive, realistic, and contextually relevant measure of model security for AI systems.

Also Read: The Agentic AI Revolution: Top 5 Must-Have Agents for Telcos in 2025

Designed to assess the real-world risk exposure of large language models (LLMs) to attacks, the Lakera AI Model Risk Index measures how effectively models can maintain their intended behavior under adversarial conditions. From AI-powered customer support bots to assistants, the report tests LLMs in realistic scenarios across industries, including technology, finance, healthcare, law, education and more.

“Traditional cybersecurity frameworks fall short in the era of generative AI,” said Mateo Rojas-Carulla, co-founder and Chief Scientist at Lakera. “We built the AI Model Risk Index to educate and inform. Enterprises deploying AI systems must completely rethink their approach to securing them. Today, attackers don’t need source code, they just need to know how to communicate with AI systems in plain English.”

Most risk assessment approaches focus on surface-level issues: testing prompt responses in isolation and with context independent static prompt attacks that focus on quantity and not on context or quality. By contrast, the Index asks a more practical question for enterprises: how easily can this model be manipulated to break mission-specific rules and objectives and in which type of deployments?

The difference is critical.

Within the report, you will find:

  • Real-world attack simulation models how adversaries target AI systems through multiple attack vectors, including direct manipulation attempts through user interactions and indirect attacks that embed malicious instructions in RAG documents or other content the AI processes.
  • Applied risk assessment focuses on measuring whether AI systems can maintain their intended purpose under adversarial conditions. The evaluation tests the model’s consistency in performing its designated role, which is essential for enterprise deployments where predictable behavior drives business operations and regulatory compliance.
  • Quantitative risk measurement provides clear scoring that enables relative analysis between different AI models, tracks security improvements or degradations across model versions and releases, and delivers standardized metrics for enterprise security evaluation.

Key findings

The results reveal that newer and more powerful versions of large language models are not always more secure than earlier ones, and that all models, to some extent, can be manipulated to act outside their intended purpose.

Also Read: Why Cybersecurity-as-a-Service is the Future for MSPs and SaaS Providers

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

Related posts

EnGenius Switch Extenders Now Shipping: Powering Simple Network Expansion and Connectivity

PR Newswire

Redwire and BigBear.ai Sign MOU for Development of Advanced Cyber Resiliency Capabilities for Future Space Systems

CIO Influence News Desk

TalaTek SaaS GRC Solution, TiGRIS, Now StateRAMP Authorized