
The Rise of the Zero-Knowledge Threat Actor: New LLM Jailbreak Technique Discovered by Cato Networks Enables Easy Creation of Password-Stealing Malware

Cato Networks, the SASE leader, today published the 2025 Cato CTRL™ Threat Report, which reveals how a Cato CTRL threat intelligence researcher with no prior malware coding experience successfully tricked popular generative AI (GenAI) tools—including DeepSeek, Microsoft Copilot, and OpenAI’s ChatGPT—into developing malware that can steal login credentials from Google Chrome.

To trick ChatGPT, Copilot, and DeepSeek, the researcher created a detailed fictional world in which each GenAI tool played a role, complete with assigned tasks and challenges. Through this narrative engineering, the researcher bypassed the tools' security controls and effectively normalized restricted operations. Ultimately, the researcher convinced the GenAI tools to write Chrome infostealers. This new LLM jailbreak technique is called "Immersive World."

“Infostealers play a significant role in credential theft by enabling threat actors to breach enterprises. Our new LLM jailbreak technique, which we’ve uncovered and called Immersive World, showcases the dangerous potential of creating an infostealer with ease,” said Vitaly Simonovich, threat intelligence researcher at Cato Networks. “We believe the rise of the zero-knowledge threat actor poses a high risk to organizations because the barrier to creating malware is now substantially lowered with GenAI tools.”

The growing democratization of cybercrime is a critical concern for CIOs, CISOs, and IT leaders. The rise of the zero-knowledge threat actor marks a fundamental shift in the threat landscape: the report shows how any individual, anywhere, with off-the-shelf tools, can launch attacks on enterprises. This underscores the need for proactive and comprehensive AI security strategies.

“As the technology industry fixates on GenAI, it’s clear the risks are as big as the potential benefits. Our new LLM jailbreak technique detailed in the 2025 Cato CTRL Threat Report should have been blocked by GenAI guardrails. It wasn’t. This made it possible to weaponize ChatGPT, Copilot, and DeepSeek,” said Etay Maor, chief security strategist at Cato Networks. “Our report highlights the dangers associated with GenAI tools to educate and raise awareness, so that we can implement better safeguards. This is vital to prevent the misuse of GenAI.”
