Survey of 300 fraud leaders shows rising AI threats and privacy regulations creating a dual crisis
Fingerprint, a leader in device intelligence for fraud prevention and returning user experience optimization, released its State of AI Fraud and Privacy Report, revealing a dual crisis facing organizations as AI-powered attacks surge while privacy regulations simultaneously limit traditional identification methods. Asked about the share of fraud attempts that are AI-driven, 41% of the fraud and technology leaders surveyed said their organizations are already facing AI-powered attacks.
Financial and Operational Costs of AI Fraud
The financial consequences affect nearly everyone: 99% of organizations report fraud losses from AI-enabled attacks in the past year, with an average of $414,000 per organization. One-third of respondents reported annual losses of up to $1 million.
Beyond direct financial hits, these sophisticated threats—from generative AI phishing schemes to automated bot attacks—are creating a significant operational crisis. According to the report, 93% of fraud teams have seen noticeable operational impacts, with 38% of organizations citing higher costs from manual review and triage as a top business concern. The B2B SaaS industry is particularly vulnerable, with 62% of respondents reporting significant increases in manual processes.
Privacy Regulations Compound Detection Challenges
Privacy-first technologies are compounding fraud detection challenges. Seismic industry shifts like Apple’s Intelligent Tracking Prevention are dismantling the traditional tools fraud teams once relied on for user identification.
More than three-quarters (76%) of respondents report that privacy-focused browsers, VPNs and consumer privacy preferences impact detection capabilities, while 40% say these technologies are significantly reducing identification accuracy.
The report also uncovers a growing gap between industry leaders and laggards: fintechs are more agile than traditional banks. While traditional banks report a higher rate of AI-powered attacks (54%), they are significantly slower to adopt modern defenses. Only 33% of banking respondents are evaluating AI-powered fraud tools, compared to 52% of their fintech counterparts.
“The AI arms race isn’t a future concern; it’s already causing major financial and operational disruption right now,” said Dan Pinto, CEO and co-founder of Fingerprint. “At the same time, privacy regulations are rightfully shifting to give consumers more control. How do you stop sophisticated, automated threats when the old methods of identifying users are becoming obsolete? The answer must be a move toward more advanced, privacy-compliant identification methods.”
Banks, Fintechs and SaaS Among Hardest Hit by AI Fraud
Banks report the highest rate of AI-driven fraud attempts (54%) compared to 47% in fintech. B2B SaaS organizations face a different challenge, with 62% reporting major increases in manual fraud reviews.
Banks: Banking institutions report the highest rate of AI-driven attacks yet are significantly slower to adopt the latest fraud prevention technologies, leaving outdated systems in place that make them prime targets for fraudsters.
- 54% of banks report AI-driven fraud attempts—the highest among sectors surveyed
- Only 33% are evaluating AI-powered detection tools compared to 52% of fintechs
- Attacks commonly involve account takeovers, synthetic fraud, and credential stuffing
Fintech: Fintech firms move faster to adopt modern fraud defenses, but they still contend with complex AI-driven fraud methods, such as synthetic identities and forged documents, that require ongoing updates to their defenses.
- 47% of fintech companies experience AI-driven fraud attempts
- 32% say privacy technologies severely hinder detection
- Growing tactics include synthetic identity fraud and AI-generated document abuse
B2B SaaS: Digital-first SaaS platforms face heavy targeting due to high user volumes and privacy-conscious customers. Privacy-conscious users make fraud harder to detect without disrupting legitimate experiences, while high user volumes attract automated attacks that add to fraud teams' workloads.
- 62% report significant increases in manual fraud reviews
- 66% express confidence in their current fraud prevention tools but struggle with operational burdens
- Common attacks include credential stuffing, session spoofing, and bot-driven account takeovers
The Path Forward: A Shift to Persistent, Privacy-First Identification
In response to these converging threats, an overwhelming 90% of organizations plan to adopt more persistent, privacy-compliant visitor identification methods within the next 12 months, with nearly half actively planning implementation.
This shift aligns with the broader industry trend toward frictionless security and passwordless authentication. As businesses remove hurdles like passwords and legacy solutions like multi-factor authentication for legitimate users, they require reliable tools, like device intelligence, to accurately identify trusted users without adding disruptive friction.