
Are You Covered? Large Language Models and AI Security in the Enterprise

Emerging artificial intelligence (AI) and machine learning (ML) tools are being adopted rapidly by organizations and employees because they have been shown to improve productivity, enhance efficiency by streamlining labor-intensive processes, and boost innovation and creativity. A recent S&P Global survey of 1,500 decision-makers at large companies found that 69% have at least one AI/ML project in production, with 28% having reached enterprise scale, where the project is “widely implemented and driving significant business value”; 31% have projects in pilot or proof-of-concept stages. McKinsey estimates that generative AI (GenAI) tools could add as much as $4.4 trillion in economic value to the global economy annually.

It’s safe to say AI is a big topic in the boardroom.

And, here on the ground, we see evidence that these numbers are spot on, with companies considering GenAI usage from one of three distinct mindsets:

  1. Not now, maybe never: Blocking it until they can learn more
  2. Yes now, but slowly: Engaging proactively by establishing deployment plans supported by policies, use cases, funding, etc.
  3. I guess we’re using AI now: Stumbling into it with accidental, bottom-up employee usage driving wider adoption

Irrespective of the organizational approach, one common denominator is that adopting GenAI, including large language models (LLMs) such as ChatGPT, carries significantly increased risk for the organization.

The most pressing risks are:

  • Poor access controls, so team-specific or user-specific access to an LLM cannot be granted
  • Poor governance and non-compliance with data privacy regulations, such as GDPR, CCPA, and HIPAA, as well as company policies
  • No tracking, auditing, monitoring, or oversight capabilities for admins; prompt/response history is visible only to the individual user; and no metrics show cost, usage, or whether the model is returning accurate or factual information
  • Data leakage when confidential information is included in prompts
  • Malicious content in LLM responses, such as malware, viruses, phishing attempts, or spyware
  • No transparency/explainability, as model operations are “black boxes”


Creating a strong AI governance framework that includes policies, guidelines, and both operational and technical controls is a critical step toward establishing a secure AI environment in your organization. For each of the risks identified above, there are specific solutions that must be considered, including:

Observability is rapidly emerging as a critical issue in the LLM security ecosystem. With so many people in an organization using multiple models for myriad purposes on all manner of devices, tracking and auditing usage is essential. Typically, administrators cannot view users’ prompt histories, which means they know only in the most general terms who is using a model and how often. Tools that provide deeper insight, such as what an individual user or group of users is using the model for, how often prompts trigger alerts or are blocked, and why, are invaluable. Model performance data, including comparative analyses on metrics such as response time, accuracy, and reliability, can add depth to discussions of resource allocation. Automating oversight to identify when LLM-generated content is factual and when it is a “hallucination” is increasingly important as well.
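As a concrete illustration of the kind of tooling this implies, here is a minimal sketch in Python. The names (log_llm_call, llm_fn, the llm_audit.jsonl file) are hypothetical, and whether full prompt text or only metadata is logged is a policy decision; the point is simply that every call leaves an audit record with the user, team, size, and latency that administrators can query later.

```python
# Minimal observability sketch (hypothetical names, not any vendor's API).
# Each LLM call is routed through a wrapper that writes a JSON audit record,
# giving administrators usage and performance data beyond a single user's history.
import json
import logging
import time
from datetime import datetime, timezone
from typing import Callable

audit_log = logging.getLogger("llm_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("llm_audit.jsonl"))

def log_llm_call(user_id: str, team: str, prompt: str,
                 llm_fn: Callable[[str], str]) -> str:
    """Send the prompt through llm_fn and record an audit entry for admins."""
    start = time.monotonic()
    response = llm_fn(prompt)
    latency_ms = round((time.monotonic() - start) * 1000, 1)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "team": team,
        "prompt_chars": len(prompt),       # log lengths or full text, per policy
        "response_chars": len(response),
        "latency_ms": latency_ms,
    }))
    return response
```

Aggregating records like these over time is what makes it possible to answer the questions above: who uses which model, how often, at what cost, and with what response times and block rates.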


“Acceptable Use” can cover a huge range of issues. The topics typically identified first involve users writing prompts that include or request profanity, explicit material, bias, derogatory language, stereotypes, or any number of other generally inappropriate terms, phrases, or topics. Scanners or filters can address this, although most models ship with fixed default lists that do not allow for customization. This can be problematic, for instance, if prompts written by a medical researcher include terms the model identifies as inappropriate. However, unacceptable content goes far beyond terminology to include prompts requesting information from the LLM that is illegal, unethical, criminal, or simply dangerous. There is a whole subculture of attackers who craft carefully worded prompts, called “jailbreaks,” that attempt to bypass the model’s internal controls prohibiting it from providing such content, thereby enabling the attacker to control the model’s behavior and output. Internal controls that can identify the tactics these practitioners use, such as reverse psychology, role-playing, or world-building, are key to preventing the consequences of these attacks.
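To make the customization point concrete, here is an illustrative Python sketch. The check_prompt function, the placeholder blocklist terms, and the jailbreak patterns are all invented for illustration; a real filter would use far richer lists and trained classifiers. The idea is that the default list can be extended per team or relaxed through an allowlist (the medical-research case above), while simple patterns catch common jailbreak phrasings such as role-play framing or requests to ignore prior instructions.

```python
# Illustrative acceptable-use filter (hypothetical names and placeholder lists).
import re
from typing import Iterable

DEFAULT_BLOCKLIST = {"build a weapon", "bypass security controls"}  # placeholders
JAILBREAK_PATTERNS = [
    r"ignore (all|your) (previous|prior) instructions",
    r"pretend (you are|to be)",
    r"act as (an? )?unrestricted",
]

def check_prompt(prompt: str,
                 extra_blocked: Iterable[str] = (),
                 allowlist: Iterable[str] = ()) -> list[str]:
    """Return a list of policy findings; an empty list means the prompt passes."""
    findings = []
    lowered = prompt.lower()
    for term in (DEFAULT_BLOCKLIST | set(extra_blocked)) - set(allowlist):
        if term in lowered:
            findings.append(f"blocked term: {term}")
    for pattern in JAILBREAK_PATTERNS:
        if re.search(pattern, lowered):
            findings.append(f"possible jailbreak phrasing: {pattern}")
    return findings
```

Because the blocked and allowed terms are arguments rather than fixed defaults, the same filter can be tuned per team instead of forcing a single organization-wide list.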

Preventing Data Loss requires educating users about what content should not be included in prompts sent to public LLMs. Including personal (names, birthdates), proprietary (contracts, IP, source code), or otherwise confidential data (API keys, acquisition targets) in prompts means that information is no longer inside the organization’s security perimeter, raising privacy and security concerns. Consider implementing automated prompt-scanning tools that block or redact specific content or alert the user that the prompt violates company policy.
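As a sketch of what such a tool might look like (the redact_prompt function and pattern names below are hypothetical, and the regular expressions are deliberately simplistic), an outbound prompt scanner can look for strings shaped like API keys, email addresses, or ID numbers and redact or block them before anything leaves the security perimeter.

```python
# Illustrative outbound data-loss-prevention scan (simplified patterns).
import re

REDACTION_PATTERNS = {
    "api_key":  re.compile(r"\b(?:sk|pk|api)[-_][A-Za-z0-9]{16,}\b"),
    "email":    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Return the redacted prompt plus the names of any patterns that matched."""
    hits = []
    for name, pattern in REDACTION_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt, hits

# Depending on policy, a match can trigger redaction, a warning, or a hard block.
clean, violations = redact_prompt("Call the API with key sk-abc123def456ghi789jkl")
if violations:
    print(f"Prompt contained {violations}; sending redacted version instead.")
```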

Preventing malicious LLM responses from infecting the organization with malware, outdated or poor-quality code, or other threats requires scanners or filters focused on the incoming content. Models are becoming increasingly complex, and hidden or disguised code could be written in any one of dozens of programming languages; scanners used to detect code must be able to discern the language in which it is written.
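The sketch below shows the response-side counterpart. The scan_response function and the keyword lists are assumptions for illustration only: it pulls code blocks out of a reply, records the language they declare, and flags constructs often associated with obfuscated or dangerous code. A production scanner would pair language detection with real static analysis or malware scanning rather than keyword matching.

```python
# Illustrative scan of LLM responses for code blocks and suspicious constructs.
import re

FENCE = "`" * 3  # the marker that typically delimits code blocks in LLM output
CODE_BLOCK = re.compile(FENCE + r"(\w+)?\n(.*?)" + FENCE, re.DOTALL)
SUSPICIOUS = {
    "python":     [r"\beval\(", r"\bexec\(", r"base64\.b64decode"],
    "powershell": [r"Invoke-Expression", r"DownloadString"],
    "bash":       [r"curl .*\|\s*sh", r"rm -rf /"],
}

def scan_response(response: str) -> list[dict]:
    """Return one finding per code block: its declared language and flagged patterns."""
    findings = []
    for language, body in CODE_BLOCK.findall(response):
        language = (language or "unknown").lower()
        flagged = [p for p in SUSPICIOUS.get(language, []) if re.search(p, body)]
        findings.append({"language": language, "flags": flagged})
    return findings
```

Anything flagged, or any block in a language the scanner cannot identify, can be routed to human review before it is pasted into the organization’s codebase or systems.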


AI is the way forward for every organization and can be utilized by everyone, but organizations must plan their use of these powerful tools with eyes wide open. By watching and learning how other organizations are using LLMs, companies can begin to determine how these tools will work best within their own environments. They must also act with appropriate speed to put security measures in place to ensure their people, processes, and property remain secure.

