
Why Architecture Matters with Generative AI and Cloud Security

Generative AI models such as ChatGPT have captured our imaginations and are poised to revolutionize everything, including cybersecurity. They can help with the chronic talent shortage and, more importantly, help cybersecurity professionals address the growing complexity of securing their cloud environments.

Cloud security needs all the help it can get

Cloud was supposed to make things easier – so what happened? Cloud has made it simple for any developer with a credit card to spin up computing resources and start building something new in a matter of minutes. But with multiple cloud providers, each offering hundreds of services, cloud has dramatically expanded the attack surface into a complex web of exposed resources – including compromised CI/CD pipelines, software supply chains, and microservices – that are difficult to track and secure. As a result, key security insights are scattered across multiple tools, platforms, and teams, forcing security practitioners to rapidly connect the dots across multiple domains. This makes it nearly impossible for them to keep pace with the speed of cloud attacks. Decisions are often made without full context, or made too late, which can lead to a larger blast radius and a more significant business impact.

Chatbots are not enough

With the hype around ChatGPT, many cybersecurity companies have built “wrappers” that connect their software to ChatGPT or similar large language models (LLMs). These integrations can deliver:

  • Context enrichment: Users can ask an AI chatbot to help with a standalone task. For example, they can feed a compliance violation event to ChatGPT, which can then suggest AWS commands to use in the remediation process. This stateless approach is useful but fairly basic.
  • Query building: Security information and event management (SIEM) back-ends and extended detection and response (XDR) tools hold large datasets that are queried using specific query languages and syntax. LLMs are great at translating natural-language questions into tool-specific query syntax, making it much simpler for anyone to analyze security events and data (a sketch of this pattern follows the list). Once again, this provides only stateless assistance – useful, but it doesn’t help with connecting the dots required during an incident response.
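
To make the query-building pattern concrete, here is a minimal sketch of such a wrapper. The call_llm() helper is a hypothetical stand-in for whatever LLM API is in use, and the event schema and query syntax are purely illustrative:

```python
# Minimal sketch of an LLM "wrapper" that translates a natural-language
# question into a SIEM-style query. call_llm() is a hypothetical stand-in
# for any LLM completion API; the schema and query syntax are illustrative.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (e.g., a chat-completion request)."""
    raise NotImplementedError("wire this to your LLM provider")

QUERY_PROMPT = """You are a security analyst assistant.
Translate the user's question into a single query in our SIEM query language.
Schema: events(timestamp, user, src_ip, action, resource, severity)
Return only the query, no explanation.

Question: {question}
"""

def build_query(question: str) -> str:
    return call_llm(QUERY_PROMPT.format(question=question)).strip()

# Example (stateless -- each call is independent, with no memory of prior
# steps): build_query("show failed logins in the last 24 hours") might
# return something like:
#   events | where action == "login_failed" and timestamp > now() - 24h
```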

These “wrapper” LLM integrations are useful but not enough to address the challenges outlined above. Providing more context for a specific event or tool is helpful, but it does not connect the dots across multiple events spanning multiple domains, or across the various attack paths that bad actors can exploit.

Architectural principles for applying generative AI to cloud security

To get the most out of LLMs in cloud security, I believe we need to think about human intelligence and how the best cybersecurity experts operate.

With this in mind, here are five key architectural principles for evaluating security AI tools and chatbots:

Multi-step reasoning

When bad actors have multiple means of attack, there is rarely a single, straightforward answer to any cloud security question. Expert security professionals explore and investigate multiple avenues to find a solution or root cause. To mimic them, the best generative AI approach uses LLMs in an iterative process that explores multiple investigative steps before providing an answer, along the lines of the sketch below.
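
As a rough illustration, an iterative investigation loop might look like the following sketch. The call_llm() and run_tool() helpers are hypothetical stand-ins for an LLM API and the underlying security tooling, and the step/observation convention is illustrative:

```python
# Sketch of multi-step reasoning: instead of answering in one shot, the LLM
# proposes an investigative step, sees the result, and iterates until it can
# conclude. call_llm() and run_tool() are hypothetical stand-ins.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider")

def run_tool(step: str) -> str:
    """Execute one investigative step (query a SIEM, list IAM roles, ...)."""
    raise NotImplementedError("wire this to your security tooling")

def investigate(question: str, max_steps: int = 8) -> str:
    transcript = [f"Question: {question}"]
    for _ in range(max_steps):
        reply = call_llm(
            "\n".join(transcript)
            + "\nPropose the next investigative step, or answer with "
              "'FINAL: <conclusion>' once you have enough evidence."
        )
        if reply.startswith("FINAL:"):
            return reply[len("FINAL:"):].strip()
        # Run the proposed step and feed the observation back to the model.
        transcript.append(f"Step: {reply}")
        transcript.append(f"Observation: {run_tool(reply)}")
    return "Investigation inconclusive within the step budget."
```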

Multi-domain correlation

Cloud security encompasses multiple domains – vulnerabilities, compliance violations, runtime events, CI/CD security – each with numerous data sources and its own formats and semantics. Few users, even experienced ones, are experts in all of these domains. For AI to be helpful here, it needs to understand the jargon and implications of these knowledge silos and then correlate data from them into a cohesive story. It should fill users’ knowledge gaps and help them connect the dots between seemingly unrelated events.
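
As a rough sketch of the correlation step, the code below normalizes events from three hypothetical sources into a shared schema and surfaces resources touched by more than one domain. All source names and fields are illustrative:

```python
# Sketch of multi-domain correlation: normalize events from separate tools
# (vulnerability scanner, CI/CD, runtime) into one schema, then group them
# by the resource they touch so the AI can narrate them as a single story.
# All source names and field names are illustrative.

from collections import defaultdict

def normalize(source: str, event: dict) -> dict:
    """Map each domain's native format onto a shared (resource, kind, detail) schema."""
    if source == "vuln_scanner":
        return {"resource": event["image"], "kind": "vulnerability", "detail": event["cve"]}
    if source == "cicd":
        return {"resource": event["artifact"], "kind": "pipeline", "detail": event["job"]}
    if source == "runtime":
        return {"resource": event["container_image"], "kind": "runtime", "detail": event["rule"]}
    raise ValueError(f"unknown source: {source}")

def correlate(events: list[tuple[str, dict]]) -> dict[str, list[dict]]:
    by_resource = defaultdict(list)
    for source, event in events:
        e = normalize(source, event)
        by_resource[e["resource"]].append(e)
    # Resources touched by several domains are candidate attack paths worth
    # handing to the LLM for a narrative summary.
    return {r: es for r, es in by_resource.items() if len({e["kind"] for e in es}) > 1}
```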

Exercise judgment

AI should be smart enough to aid security practitioners in assessing risk across disparate events, prioritizing a deluge of tasks, and making judgment calls and decisions. It should help them understand the scope of an attack, find the needle in the haystack, and identify correlations.
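
Even a crude version of this prioritization can be sketched in a few lines. The severity weights and multipliers below are illustrative placeholders, not a recommended scoring model:

```python
# Sketch of judgment support: score disparate findings on a common risk
# scale so the needle surfaces ahead of the haystack. All weights and
# multipliers are illustrative placeholders.

SEVERITY = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def risk_score(finding: dict) -> float:
    score = float(SEVERITY[finding["severity"]])
    if finding.get("internet_exposed"):
        score *= 2.0        # exposed resources widen the blast radius
    if finding.get("exploit_available"):
        score *= 1.5        # a known exploit raises urgency
    return score

def prioritize(findings: list[dict]) -> list[dict]:
    """Return the deluge of findings ordered by descending risk."""
    return sorted(findings, key=risk_score, reverse=True)
```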

Proactive suggestions

AI should go beyond one-off, stateless answers to a specific question and be able to string together the broader context and intent of an entire line of questioning. With that broader context, it should proactively suggest further queries, actions, and next steps to practitioners, helping them reach a resolution faster.
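
A minimal sketch of such a stateful assistant, again assuming a hypothetical call_llm() helper; the key difference from the stateless wrappers above is that the full conversation history rides along with every request:

```python
# Sketch of a stateful assistant that keeps the whole line of questioning
# and volunteers next steps, rather than answering each question in
# isolation. call_llm() is a hypothetical stand-in for an LLM API.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider")

class Session:
    def __init__(self) -> None:
        self.history: list[str] = []

    def ask(self, question: str) -> str:
        self.history.append(f"User: {question}")
        reply = call_llm(
            "\n".join(self.history)
            + "\nAnswer the latest question, then, given the whole "
              "conversation so far, suggest 1-3 follow-up queries or actions."
        )
        self.history.append(f"Assistant: {reply}")
        return reply
```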

Takes action

AI should simplify human interaction with the user interface (UI) by providing a natural-language way to visualize results and execute actions directly in chat. For example, it should be able to help practitioners patch vulnerabilities, guide them through the UI, create threat detection rules, and more.
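
One way to sketch this safely is to have the LLM choose from an allow-list of actions while the surrounding code, not the model, executes them. The action names and handlers below are hypothetical:

```python
# Sketch of action execution from chat: the LLM picks one action from an
# allow-list and supplies its arguments; the controller, not the model,
# runs it. Action names and handlers are illustrative.

import json

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider")

def patch_vulnerability(resource: str, cve: str) -> str:
    raise NotImplementedError("wire this to your remediation tooling")

def create_detection_rule(name: str, condition: str) -> str:
    raise NotImplementedError("wire this to your detection engine")

ACTIONS = {
    "patch_vulnerability": patch_vulnerability,
    "create_detection_rule": create_detection_rule,
}

def execute(request: str) -> str:
    reply = call_llm(
        f"Available actions: {list(ACTIONS)}. "
        f'Return JSON {{"action": ..., "args": {{...}}}} for: {request}'
    )
    plan = json.loads(reply)
    if plan["action"] not in ACTIONS:   # never run anything off the list
        return "Refused: unknown action."
    return ACTIONS[plan["action"]](**plan["args"])
```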

Reducing hallucinations and safeguarding privacy

Two common concerns around generative AI are hallucinations and privacy. Hallucinations occur when the AI simply makes up an answer that often sounds convincing but is ultimately incorrect. Additionally, many organizations are reluctant to share their data to help train LLMs for fear of accidentally exposing it and compromising privacy. To address both concerns, a key architectural approach is an intermediary controller that mediates user interactions with the AI and can draw on multiple LLMs to take advantage of the strengths of each. The controller should provide expert guidance and validate the accuracy of AI responses to prevent hallucinations. It should also act as a filter, anonymizing sensitive user data before it is sent to an LLM, to address privacy concerns.
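
A minimal sketch of such a controller, assuming a hypothetical call_llm(model, prompt) helper; the redaction patterns and the grounding check are illustrative placeholders for production-grade filtering and validation:

```python
# Sketch of an intermediary controller: it redacts sensitive values before
# anything reaches an LLM, fans the question out to more than one model,
# and sanity-checks the answer before the user sees it. The redaction
# patterns and the grounding check are illustrative placeholders.

import re

def call_llm(model: str, prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM providers")

REDACTIONS = [
    (re.compile(r"\b\d{1,3}(\.\d{1,3}){3}\b"), "<IP>"),           # IPv4 addresses
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "<AWS_ACCESS_KEY>"),    # AWS key IDs
]

def anonymize(text: str) -> str:
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

def is_grounded(answer: str, evidence: str) -> bool:
    """Illustrative hallucination check: reject answers that cite resources
    absent from the evidence we supplied."""
    cited = set(re.findall(r"resource:(\S+)", answer))
    return cited <= set(re.findall(r"resource:(\S+)", evidence))

def ask(question: str, evidence: str) -> str:
    prompt = anonymize(f"{question}\nEvidence:\n{evidence}")
    for model in ("model-a", "model-b"):   # play to each model's strengths
        answer = call_llm(model, prompt)
        if is_grounded(answer, anonymize(evidence)):
            return answer
    return "No validated answer; escalating to a human analyst."
```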

AI makes security practitioners better

This generative AI architecture for cloud security puts AI alongside every security practitioner, presenting the knowledge and context they need in a timely, focused manner. It enhances decision-making by combining the collective intuition of human experts with continuous learning from LLMs. It can help better prioritize risks, speed investigation and response times, and simplify cloud security for everyone, security expert or not.

