
How Generative AI Upped the Stakes in the Cyber Arms Race


The new paradigm of cybersecurity is built on AI-based tools and techniques, and there’s no going back. But that’s a good thing, even though both hackers and the good guys will put AI to work.

Rapid advancements in generative AI over the last two years have taken the cybersecurity arms race nuclear. The techniques and tools attackers previously relied on fell into two camps: imperfect attacks at scale, like poorly written phishing messages blasted to email addresses and phone numbers at random, and highly specific, targeted attacks aimed at one person or organization.

The limitations of these attacks meant cybersecurity practitioners, for the most part, understood how to warn and defend against them. Attack types like social media manipulation or intrusive malware could be thwarted with up-to-date security operations and employees well trained in security awareness. But generative AI has taken our adversaries’ game up a notch, enabling them to build entirely new schemes on tried-and-true attack vectors like phishing, malware and social engineering, and to run them at the speed of data and with precision.

Thankfully, generative AI has also enabled the good guys to step up their defenses. The pattern recognition capabilities of AI allow practitioners to detect anomalies and unexpected intrusions as fast as the data moves, something that wouldn’t be possible with people-based detection alone. Research even shows that organizations using AI and automation had a breach life cycle 74 days shorter than organizations that didn’t, and saved roughly $3 million more in breach costs than their non-AI counterparts in the process.
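To make that concrete, here is a minimal sketch of what machine-learned anomaly detection looks like in practice. It uses scikit-learn’s IsolationForest rather than a generative model, and the login-telemetry features, sample data and contamination setting are hypothetical stand-ins for the far richer signals a real security platform would consume.

# Minimal anomaly-detection sketch (hypothetical features and data;
# real deployments use far richer telemetry and tuned models).
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one login event: [hour_of_day, failed_attempts, mb_transferred]
baseline = np.array([
    [9, 0, 12], [10, 1, 8], [14, 0, 20], [11, 0, 15],
    [15, 1, 10], [9, 0, 9], [13, 0, 18], [10, 0, 11],
])

# Learn what "normal" looks like; contamination is the assumed share of
# outliers in the training data and would be tuned per environment.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline)

# Score new events as they stream in; predict() returns -1 for suspected
# anomalies and 1 for inliers.
new_events = np.array([
    [10, 0, 14],   # resembles baseline traffic
    [3, 12, 900],  # 3 a.m. login, many failures, huge transfer
])
for event, label in zip(new_events, model.predict(new_events)):
    print(event, "ANOMALY" if label == -1 else "ok")

The point is the shape of the workflow: learn a baseline, then score events continuously, which is the loop AI lets defenders run at data speed.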

That’s why organizations on the cutting edge of security are excited about the prospect of AI, even knowing that threat actors will use it for their own gain as well.

Attackers, for example, are likely to leverage generative AI to develop malware and malicious tools that can evade modern endpoint protection platforms and detection and response tools. By using online AI-based models similar to ChatGPT, attackers can also pull information from every corner of the web to craft messages that are both accurate and highly targeted to a specific organization or public figure, making them virtually indistinguishable from the real thing. The volume and scale of these automated yet precise campaigns will let attackers target many people with very little effort.

And emails and text messages, or visual and auditory deepfakes, are not the only fakes attackers plan to create with AI to fool their victims. The power of fake news and social engineering scams is well known, with 98% of attacks involving some form of social engineering, and you can bet that savvy attackers will use AI to make those scams more efficient. Realistic-looking fake news stories or social media profiles can sway public opinion or wreak temporary havoc on organizations and institutions, influencing people to vote a certain way, spend their money or spread lies unwittingly.

We’ve already seen these kinds of attacks play out on social media during the last two presidential election cycles, and AI’s ability to scale them up means future social engineering campaigns will be more coordinated and comprehensive than ever before.

So, to combat attackers putting their new AI technology to work, organizations need to stay in the know about the limits of their own security tools. Following security basics, like patching software regularly, implementing multi-factor authentication and avoiding sketchy links, will go a long way toward staying safe. But when security inevitably comes down to AI versus AI, exclusively human-driven defenses will not be able to keep up with the volume and sophistication of AI-generated attacks.
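For a concrete look at one of those basics, the sketch below shows roughly how the time-based one-time passwords behind most multi-factor authentication apps are computed under RFC 6238, using only Python’s standard library. The Base32 secret is a placeholder; a production verifier would also check adjacent time steps for clock drift and rate-limit attempts.

# Minimal TOTP (RFC 6238) sketch using only the standard library.
# The shared secret below is a placeholder, not a real credential.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# A server compares the user's submitted code against this value.
print(totp("JBSWY3DPEHPK3PXP"))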
