Combatting the rise in AI-assisted fraud in 2025

As the cybersecurity landscape evolves, AI continues to reshape the world of fraud and cybercrime. Our 2025 Identity Fraud Report at Entrust reveals an alarming reality: deepfake attempts are now occurring every five minutes, while digital document forgeries have skyrocketed by 244% within the past year.

With Gen AI becoming increasingly accessible, cybercriminals are using the technology to create hyper-realistic deepfakes and synthetic identities that are ever harder to detect. With sites such as OnlyFakes, the days of amateur deepfakes and phishing attempts are behind us; we are now confronting an increasingly sophisticated fraud-as-a-service (FaaS) ecosystem.

The Identity Fraud Report underlines a critical next step for businesses: rigorous identity verification is no longer optional. It is essential for protecting digital interactions in an environment where the continued growth of AI is inevitable.

AI-assisted fraud is a growing enemy

The rise of AI-assisted fraud has been matched by the ease and accessibility of the tools behind it. Organisations may be using tools like Copilot and Claude.ai to boost productivity amongst employees, but cybercriminals are using similar tools to craft convincing scams. For the first time, AI has sparked a rise in digital forgeries, which have now eclipsed physical counterfeits as the most prevalent form of document fraud, accounting for 57% of cases. Fraud is no longer limited to professional crime rings; it is becoming a marketplace. Experienced fraudsters can readily sell their knowledge, tools, and stolen data online for a fee, lowering the barrier to entry for amateur criminals.

Gen AI tools such as ChatGPT, and malicious clones like WormGPT, have accelerated this trend. They allow fraudsters to rapidly produce phishing emails, fake documents, and even synthetic digital identities at scale. Digital forgeries are not only cheaper and easier to create than physical counterfeits but also more scalable, with fraudsters using stolen document templates and AI-powered editing tools to manipulate data. For example, a criminal might use Gen AI to produce a hyper-realistic deepfake image, video or audio recording of a relative or celebrity to con people out of their money and personal details. By producing forgeries this convincing, AI has paved the way for bad actors to exploit our trust with more sophisticated, harder-to-detect scams.

Top industry targets

Entrust’s Identity Fraud Report 2025 revealed that the most targeted industries this year were all related to financial services: cryptocurrency, lending and mortgages, and traditional banks.

The most prominent target was cryptocurrency, with fraud against crypto platforms and services soaring. In fact, the sector was hit by a 9.5% fraud rate in 2024, meaning that nearly one in ten of all identity verification attempts was suspected to be fraudulent. This could be due to crypto reaching an all-time price high in 2024, making it very attractive to fraudsters.

Lending and mortgages and traditional banks were the next biggest targets, with fraud attempt rates of 5.4% and 5.3% respectively. Traditional banks saw a 13% year-on-year increase in fraudulent onboarding attempts, which coincides with worldwide economic turbulence and high inflation, perhaps suggesting that the financial squeeze on consumers is prompting more fraudulent activity. The biggest factor, however, is likely to be the increasing accessibility of easy-to-use AI tools.

Strengthening security for 2025

Combatting fraud requires a proactive, multifaceted approach. The key lies in implementing strong identity verification (IDV) processes, especially at onboarding, to establish trust at the first interaction. A layered approach to IDV, combining document verification, biometric checks, repeat fraud detection and passive fraud signals, means you can identify and mitigate risks before they become a threat, as the sketch below illustrates.
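
For illustration only, here is a minimal sketch of what such a layered onboarding decision could look like in code. The check functions (verify_document, verify_biometric, check_repeat_fraud, passive_signals), the risk scores and the thresholds are hypothetical placeholders rather than any vendor's actual API; in a real deployment each layer would call a dedicated verification service.

```python
# Illustrative sketch of a layered identity-verification (IDV) decision at onboarding.
# All checks below are hypothetical placeholders, not any vendor's actual API.
from dataclasses import dataclass

@dataclass
class SignalResult:
    name: str
    passed: bool
    risk: float  # 0.0 (clean) .. 1.0 (high risk)

def verify_document(applicant) -> SignalResult:
    # Placeholder: template and security-feature checks on the submitted ID document.
    return SignalResult("document", passed=True, risk=0.1)

def verify_biometric(applicant) -> SignalResult:
    # Placeholder: liveness check plus selfie-to-document face match.
    return SignalResult("biometric", passed=True, risk=0.2)

def check_repeat_fraud(applicant) -> SignalResult:
    # Placeholder: has this face, document or device appeared in prior fraud attempts?
    return SignalResult("repeat_fraud", passed=True, risk=0.0)

def passive_signals(applicant) -> SignalResult:
    # Placeholder: device fingerprint, IP reputation and other passively gathered signals.
    return SignalResult("passive", passed=True, risk=0.3)

def onboarding_decision(applicant, reject_threshold=0.7, review_threshold=0.4) -> str:
    results = [f(applicant) for f in (verify_document, verify_biometric,
                                      check_repeat_fraud, passive_signals)]
    if any(not r.passed for r in results):      # hard fail on any single layer
        return "reject"
    combined = max(r.risk for r in results)     # conservative: worst signal wins
    if combined >= reject_threshold:
        return "reject"
    if combined >= review_threshold:
        return "manual_review"
    return "approve"

print(onboarding_decision(applicant={"name": "example"}))  # -> "approve" with the stub values
```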

However, fraud prevention doesn’t stop at onboarding. It’s important to continue to monitor the entire customer lifecycle to help protect against account takeovers and fraudulent transactions. This could include implementing tools like biometric authentication at critical moments to add another layer of security.
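
As a purely illustrative example, a risk-based "step-up" rule might decide when to ask for biometric re-authentication during the customer lifecycle. The action names, monetary threshold and device check below are assumptions made for the sketch, not prescriptions.

```python
# Illustrative risk-based "step-up" check after onboarding.
# Action names and threshold values are assumptions for the example.
HIGH_RISK_ACTIONS = {"change_payout_account", "large_transfer", "password_reset"}

def requires_step_up(action: str, amount: float = 0.0, new_device: bool = False) -> bool:
    """Decide whether to ask for biometric re-authentication before proceeding."""
    if action in HIGH_RISK_ACTIONS:
        return True
    if amount > 5_000:       # illustrative monetary threshold
        return True
    if new_device:           # unrecognised device fingerprint
        return True
    return False

# Example: a large transfer from a new device triggers a biometric re-check.
print(requires_step_up("transfer", amount=12_000, new_device=True))  # True
```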

Ironically, AI tools are a double-edged sword. Not only can they be leveraged by bad actors for malicious purposes, but they can also be used as a powerful weapon in the fight against fraudulent activity.

For example, AI and machine learning solutions can be used to recognise patterns and detect anomalies, thus allowing systems to flag suspicious activity faster than ever.
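
To make that concrete, here is a minimal, self-contained sketch of unsupervised anomaly detection using scikit-learn's IsolationForest. The transaction features and the synthetic data are assumptions chosen for the example, not a description of any particular fraud platform.

```python
# Minimal sketch: flag anomalous transactions with an unsupervised model.
# The features and synthetic data are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Features per event: [amount, hour_of_day, failed_logins_last_24h]
normal = np.column_stack([
    rng.normal(80, 30, 1_000),    # typical purchase amounts
    rng.integers(8, 22, 1_000),   # daytime activity
    rng.poisson(0.2, 1_000),      # occasional failed login
])
suspicious = np.array([[4_500, 3, 6]])  # large amount, 3 a.m., many failed logins

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # -1 means flagged as an anomaly
print(model.predict(normal[:3]))  # typically 1 (treated as normal)
```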

What’s more, AI-powered biometric verification can dramatically improve identity authentication by comparing live selfies against government-issued IDs with accuracy and speed. And when it comes to hyper-realistic deepfakes, machine learning models can detect subtle signs of digital image manipulation or synthetic image generation that are invisible to the human eye.
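
As a rough sketch of the selfie-to-ID comparison, the snippet below matches two images by the cosine similarity of their face embeddings. The embed_face function is a hypothetical stand-in for a real face-embedding model (the report does not name one), and the acceptance threshold is illustrative.

```python
# Sketch of selfie-to-ID face matching via embedding similarity.
# embed_face is a hypothetical stand-in for a face-embedding model; the threshold is illustrative.
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Hypothetical: return an L2-normalised 128-dimensional face embedding."""
    vec = image.astype(float).ravel()[:128]
    vec = np.pad(vec, (0, max(0, 128 - vec.size)))
    return vec / (np.linalg.norm(vec) + 1e-9)

def faces_match(selfie: np.ndarray, id_photo: np.ndarray, threshold: float = 0.8) -> bool:
    """Cosine similarity between the two embeddings; accept only above the threshold."""
    a, b = embed_face(selfie), embed_face(id_photo)
    return float(np.dot(a, b)) >= threshold

# Example with dummy images (identical arrays match trivially).
img = np.ones((64, 64), dtype=np.uint8)
print(faces_match(img, img))  # True
```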

Of course, as the digital landscape continues to evolve, the battle against fraud will become increasingly sophisticated. Businesses must remain agile, continuously adapting their strategies to stay ahead of emerging threats. By embracing advanced AI technologies, implementing robust multi-layered verification processes, and maintaining a proactive approach to security, organisations can create a formidable defence against fraudulent activities.
