KELA Unveils New Findings on AI Weaponization in 2025 AI Threat Report
KELA, a global leader in cyber threat and exposure intelligence solutions, today released its 2025 AI Threat Report: How Cybercriminals are Weaponizing AI Technology, revealing a 200% increase in mentions of malicious AI tools on cybercrime forums in 2024. The findings underscore the growing trend of cybercriminals rapidly advancing their AI tactics.
In the past 12 months, threat actors increasingly leveraged LLMs, including ChatGPT, Gemini, DeepSeek, Claud,e and other public GenAI application,s to use dark AI tools to improve business operations and to use jailbreak techniques to bypass public AI systems to conduct malicious activities. The shift in tactics requires a new mindset where organizations must act just as quickly to stay ahead.
Also Read: Quantum Computing In The Now
Key Findings from KELA’s 2025 AI Threat Report:
- Jailbreaking methods are evolving rapidly: Threat actors are continuously refining AI jailbreaking techniques to bypass security restrictions in public AI systems. KELA observed a 52% increase in discussions related to jailbreaking methods on cybercrime forums in 2024 compared to the previous year.
- Threat actors are increasingly leveraging AI in cybercrime forums: KELA’s platform recorded a 200% increase in mentions of malicious AI tools and tactics in 2024, highlighting a growing underground market for AI-assisted cybercrime.
- Dark AI tools are proliferating: Cybercriminals are distributing and selling jailbroken AI models and customized malicious AI tools, such as WormGPT and FraudGPT, to automate phishing, malware creation, and fraud operations.
- AI-driven phishing campaigns are becoming more sophisticated: AI-generated phishing and social engineering tactics have increased in effectiveness, with deepfake technologies being used to impersonate executives and trick employees into executing fraudulent transactions.
- Malware development is becoming more efficient with AI assistance: Threat actors are using AI tools to generate at scale sophisticated, evasive malware, including ransomware and infostealers, making detection and mitigation more challenging for security teams.
“We are witnessing a seismic shift in the cyber threat landscape,” said Yael Kishon, AI Product & Research Lead at KELA. “Cybercriminals are not just using AI – they are building entire sections in the underground ecosystem dedicated to AI-powered cybercrime. Organizations must adopt AI-driven defenses to combat this growing threat.”
Also Read: CIO Influence Interview with Jason Merrick, Senior VP of Product at Tenable
To combat the rising AI-powered cyber threats, KELA urges organizations to invest in employee training, monitor evolving AI threats and tactics, and implement AI-driven security measures including automated intelligence-based red teaming and adversary emulations for Generative AI models.
[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]