Despite years of warnings, most industries and organisations are still struggling with the basics of defending themselves against automated threats. DataDome’s recent investigation into nearly 17,000 websites across 22 industries reveals that bot protection is getting worse, not better, and that organisations’ current approach to bot traffic is in urgent need of review.
Our report found that only 2.8% of websites worldwide are fully protected against common automated attacks, and that figure is down significantly from last year’s 8.4%. Even more troubling, well over half of all sites have no protection at all. In other words, three out of five companies are leaving their doors unlocked for attackers. This should give pause to every CIO who believes their organisation is prepared.
AI bot traffic is aggravating the problem
One of the biggest changes from 2024 to 2025 is the growing volume of AI-driven traffic. Large language model crawlers and agentic AI systems now account for over 10% of all bot traffic, a fourfold increase from the start of the year.
Unlike traditional automation, AI bots can learn and adapt without explicit programming. They can self-modify, rapidly testing and refining their attack patterns across multiple targets. It’s like a game of Whack-A-Mole for security teams – you might block one variant, but another will inevitably pop up. What used to take human attackers weeks can now be done in minutes, completely changing the economics of online fraud.
For security teams, the challenge is twofold: fraudsters are using AI to make existing attack methods more efficient, and they are exploiting AI for impersonation. Modern bots can now adapt in real time to evade detection, mirroring human browsing patterns, generating convincing session data, and simulating randomised interactions. Some even imitate the access behaviours of well-known AI crawlers such as OpenAI or Anthropic, blending seamlessly into legitimate traffic. The result is a new generation of bots capable of bypassing rule-based defences and maintaining persistent access to sensitive systems.
What’s more, many of these AI systems can be used both for good and for evil – either behaving responsibly or maliciously depending on who controls them and for what purpose. The same technology that indexes public data for AI training can be repurposed to scrape behind login walls, extract customer information or pricing data, or execute credential-stuffing and promo-abuse campaigns.
This means a binary approach to bot security – simply asking if a user is ‘bot or not’ – is no longer enough. The critical question now is intent. Understanding why a user is engaging with your platform, not simply what that user is, is crucial to keeping organisations safe.
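An intent-first posture can start from simple behavioural signals long before any machine learning is involved. The toy Python sketch below scores a session by how abusive its behaviour looks rather than by what the client claims to be; the signal names and thresholds are invented for illustration and are not DataDome’s method or tuned production values.

```python
from dataclasses import dataclass

@dataclass
class Session:
    requests_per_minute: float   # server-side request rate
    login_attempts: int          # failed logins within the session
    has_pointer_events: bool     # client telemetry, if collected

def intent_score(s: Session) -> float:
    """Toy risk score in [0, 1]; higher means more likely abusive intent.
    Thresholds are illustrative placeholders, not tuned values."""
    score = 0.0
    if s.requests_per_minute > 120:  # far beyond human browsing speed
        score += 0.4
    if s.login_attempts > 5:         # credential-stuffing pattern
        score += 0.4
    if not s.has_pointer_events:     # no human-like interaction data
        score += 0.2
    return min(score, 1.0)
```

The point is the framing: a legitimate AI agent and a human shopper can both pass, while a client that behaves abusively is flagged regardless of who it says it is.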
Organisation size is no guarantee of safety
While it may seem intuitive that larger companies, with larger IT budgets and access to top cybersecurity talent, would have stronger bot protection than their smaller counterparts, in reality, it’s the opposite. Among the largest organisations we tested (10,001+ employees), just 2.2% of domains were fully protected, with 61% unprotected. The largest domains, those with over 30 million monthly page views, actually had the lowest full protection rate at just 2%.
While smaller companies may be vulnerable due to resource constraints, larger organisations have to contend with the complexity of protecting vast and diverse digital infrastructure. Their scale makes them attractive targets, and these weaker-than-expected defences leave them exposed to both simple and advanced bots.
The worst-protected industries
AI bots have the power to wreak havoc across every industry, but our report reveals that government, non-profit, and telecoms are the worst protected against bot attacks worldwide. Government domains are high-value targets for disruption, DDoS attacks, and data theft, especially as they handle citizen data, process transactions for fines and taxes, deliver essential digital services, and provide access to infrastructure. Non-profit organisations manage sensitive donor data and facilitate online giving, yet often lack the budget or staff for robust security. Meanwhile, telecoms domains are frequent targets for account fraud, credential stuffing, and SIM swap attacks – exacerbated by the low level of basic bot protection on these sites.
Key takeaways
CIOs and CISOs must restore order to what has become a ‘Wild West’ of bot activity. First, intent must become the organising principle of bot defence. The binary model of ‘bot or human’ no longer works when legitimate users are increasingly represented by automated agents, and malicious automated agents can convincingly mimic human behaviour. CIOs must push their teams and vendors to adopt approaches that look at behaviour in context. The critical question is not who the visitor is, but why they are there.
Second, when the majority of domains fail to detect even basic bots, they become prime entry points for more sophisticated attacks. Attackers go where they face the least resistance, and a lax approach to bot protection signals to fraudsters that your enterprise is ripe for more aggressive, high-ROI exploitation.
For CIOs, the challenge will be to lead a cultural and operational shift away from outdated models of protection. That means demanding intent-based detection, insisting on protection for mission-critical endpoints, and validating every defensive assumption through testing.
There is no question that AI traffic will continue to grow; the question is whether organisations will adapt in time to keep themselves safe.

