
How to Stop AI From Fueling Insider Risk

In the last six months alone, the use of artificial intelligence (AI) in the workplace has doubled. Now, three-quarters of the workforce uses AI tools, while more than 60% use generative AI in their daily workflows. The rapid pace of adoption is hardly surprising, as AI promises to automate repetitive tasks, augment information creation and decision-making, streamline customer interactions, and more. And yet, there’s reason for caution as well.

The hype around AI must not overshadow the fact that the technology expands the threat landscape in several different ways. Thus, it’s more important than ever for organizations to ensure their cybersecurity is keeping up with the times—which means using AI to its full potential to bolster cybersecurity defenses and offset the expanded attack surface.

AI and the Threat Landscape

AI affects the threat landscape in multiple ways. To start, there is the simple reality that AI relies on data inputs to train and prompt any given model. This creates a new form of insider risk: employees may accidentally share sensitive company data or credentials with the model. Samsung employees, for example, leaked sensitive company information into ChatGPT on three separate occasions, reportedly prompting leadership to ban generative AI tools altogether.

Almost every major AI assistant was also recently shown to be susceptible to a side-channel attack: by analyzing the size and timing of encrypted traffic flowing between the model and the user, eavesdroppers could infer much of a conversation's content. Hackers can also target the large language models (LLMs) that power generative AI directly. Deliberately feeding a model corrupted or misleading training data, a technique known as data poisoning, skews its outputs, and training data that is merely biased can degrade outputs as well.
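
To make the poisoning risk concrete, here is a toy sketch (entirely invented for illustration; it is not drawn from the article or any real incident) showing how flipping a handful of training labels changes what a simple model decides:

```python
# Toy "spam filter": a nearest-centroid model trained on clean labels versus the
# same data after an attacker flips two labels. The single feature (number of
# suspicious links per message) and all values are invented for illustration.

def train_centroids(samples):
    """samples: list of (feature_value, label) pairs -> per-class mean feature."""
    sums, counts = {}, {}
    for x, y in samples:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, x):
    """Assign x to the class whose centroid is closest."""
    return min(centroids, key=lambda label: abs(x - centroids[label]))

clean = [(0, "ham"), (1, "ham"), (1, "ham"), (7, "spam"), (8, "spam"), (9, "spam")]
poisoned = clean[:3] + [(7, "ham"), (8, "ham"), (9, "spam")]  # attacker relabels two spam samples

for name, data in [("clean", clean), ("poisoned", poisoned)]:
    model = train_centroids(data)
    print(f"{name} training data -> message with 6 suspicious links classified as: {predict(model, 6)}")
```

On the clean data the suspicious message is flagged as spam; after just two labels are flipped, the same message slips through as ham, which is exactly what a poisoning attack aims for.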

On top of these risks, adversaries can leverage AI to craft more compelling social engineering attacks. Beyond making phishing messages more personalized and harder to detect, AI can identify which individuals within an organization are most susceptible to social engineering and build campaigns modeled on previously successful attacks against people in similar roles. Deepfakes, for one, can be extremely convincing: one finance worker at a multinational firm was tricked into transferring $25 million after a video call with deepfaked recreations of colleagues, including the company's CFO.

Addressing Insider Risk

In the face of this rapidly evolving threat landscape, it's more urgent than ever for companies to ensure they have the right technology, policies, and training in place to combat the rise of AI-fueled insider risk and social engineering attacks. To start, companies need to fight fire with fire: responsibly deploy AI and machine learning themselves, first to build a real-time, objective picture of normal employee behavior, and then to detect anomalies that may indicate a bad actor, external or internal, has gained access to sensitive company data and systems. User activity monitoring and behavioral analytics are the most effective ways to combat social engineering attacks and other forms of insider risk, but they require a holistic view of all employees; without one, spotting anomalies and remediating risky behavior is not possible.
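
As a minimal sketch of that baselining idea (the user names, event counts, and three-sigma threshold below are illustrative assumptions, and commercial user activity monitoring platforms model far richer signals), anomaly detection can be as simple as comparing today's activity to each employee's own history:

```python
# Flag users whose activity today deviates sharply from their personal baseline.
# Data is hypothetical: daily counts of sensitive-file downloads over two weeks.
from statistics import mean, stdev

history = {
    "alice": [3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 5, 3, 4, 2],
    "bob":   [1, 0, 2, 1, 1, 0, 1, 2, 1, 1, 0, 1, 1, 2],
}
today = {"alice": 4, "bob": 19}  # bob's spike could signal an insider or a hijacked account

def is_anomalous(baseline, value, sigmas=3.0):
    """True if value sits more than `sigmas` standard deviations above the user's own mean."""
    mu, sd = mean(baseline), stdev(baseline)
    return value > mu + sigmas * max(sd, 1e-9)

for user, count in today.items():
    if is_anomalous(history[user], count):
        print(f"ALERT: {user} accessed {count} sensitive files today, far above their normal baseline")
```

The same principle scales from a single count to logins, data transfers, working hours, and dozens of other signals, but only if the organization actually collects that telemetry for everyone.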

At the same time, AI can also improve the efficiency and effectiveness of security analysts, particularly those who are new to the job. Onboarding a new analyst can take anywhere from hours to months; with AI, that ramp-up time is dramatically reduced. A new analyst can open a case or event and lean on AI to find the best way to start drilling into the data. Just as bad actors can use AI to identify the people most likely to fall for a social engineering attack, security analysts can and must use AI to identify the best playbook for mitigating one.
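
The sketch below gives a rough sense of that playbook-matching idea (the playbook names and text are invented, and a real assistant would rely on an LLM or embedding model rather than simple keyword overlap):

```python
# Recommend the historical response playbook whose keywords best overlap the
# notes on a newly opened case (Jaccard similarity over word sets).

PLAYBOOKS = {
    "credential-phishing": "user clicked link entered password reset credentials block sender",
    "data-exfiltration":   "large upload external domain sensitive files revoke access quarantine host",
    "deepfake-payment":    "urgent wire transfer video call executive verify out-of-band hold payment",
}

def recommend(case_notes: str) -> str:
    """Return the playbook with the highest word overlap with the case notes."""
    case_words = set(case_notes.lower().split())

    def score(playbook_text: str) -> float:
        words = set(playbook_text.split())
        return len(case_words & words) / len(case_words | words)

    return max(PLAYBOOKS, key=lambda name: score(PLAYBOOKS[name]))

print(recommend("finance staff reports urgent wire transfer requested on a video call with the CFO"))
# -> deepfake-payment
```

Even this crude version points the analyst at a sensible starting point; the value of AI here is running the same kind of ranking over thousands of past cases and far messier data.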

Mitigating insider risk is not solely the purview of security analysts, though; it's an all-hands-on-deck affair. Companies must train all employees on the risks associated with AI and update their policies accordingly. Describing the different types of social engineering attacks, and directing employees to use separate, segmented channels for the most critical data whenever they are available, can help minimize a company's risk exposure. But the undertaking doesn't stop there. The next step is to leverage AI to analyze the effectiveness of the technology, policies, and training that have been put in place; otherwise, the entire endeavor can be extremely expensive with no hard numbers to show that it has paid off.
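
Measuring that payoff does not have to be complicated. As a hedged example (all figures are invented, and a real program would track many more indicators), comparing phishing-simulation results before and after training already produces the kind of hard numbers described above:

```python
# Compare click and report rates on simulated phishing emails before and after training.
# All counts below are illustrative.

before = {"emails_sent": 400, "clicked": 88, "reported": 36}
after  = {"emails_sent": 400, "clicked": 41, "reported": 122}

def pct(numerator, denominator):
    return 100.0 * numerator / denominator

click_before = pct(before["clicked"], before["emails_sent"])    # 22.0%
click_after  = pct(after["clicked"], after["emails_sent"])      # 10.2%
report_before = pct(before["reported"], before["emails_sent"])  # 9.0%
report_after  = pct(after["reported"], after["emails_sent"])    # 30.5%

print(f"Click rate:  {click_before:.1f}% -> {click_after:.1f}% "
      f"({100 * (click_before - click_after) / click_before:.0f}% relative reduction)")
print(f"Report rate: {report_before:.1f}% -> {report_after:.1f}%")
```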

The Bottom Line

Altogether, AI holds tremendous promise but presents newfound risks as well. The only way to offset the expanded attack surface is to deploy AI with cybersecurity in mind. The technology can help detect anomalous behaviors, improve analyst efficiency, and measure the effectiveness of those efforts, in turn serving as a defense against a wide range of AI-fueled risks.
