In June 2025 alone, enterprise traffic monitoring observed 1.2 billion requests from OpenAI crawlers across major digital platforms. AI traffic more than tripled in just six months, rising from 2.6% to 8.2% of all verified bot traffic.
This is no anomaly. Over a third of all traffic now originates from non-browser sources, like APIs, SDKs, mobile apps and autonomous AI agents. Digital infrastructure is now subject to a new class of AI-driven web traffic, and legacy security frameworks are not equipped to manage it.
This is the new normal, and there’s no going back. Enterprises must rethink how they monitor and govern automated access before AI agents outpace their ability to protect infrastructure and data.
From bots to agents: A different class of web traffic
Bots have been part of internet traffic for a long time, dating back to the early search-engine crawlers of the 90s, but what we’re seeing now is a sea change. AI agents are often more persistent, more sophisticated, and much harder to categorise. They regularly ignore robots.txt and bypass traditional access controls. And, if programmed to, they can easily disguise their behaviour so it no longer looks like that of traditional bots.
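To make the robots.txt point concrete, here is a minimal sketch using Python’s standard-library parser. The rules and crawler names below are illustrative examples, not taken from any real site; the key observation is that compliance is voluntary, so nothing stops a non-compliant agent from fetching the URLs anyway.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: everyone is barred from /private/,
# and a named AI crawler is barred from the whole site.
rules = """\
User-agent: *
Disallow: /private/

User-agent: GPTBot
Disallow: /
"""

rp = RobotFileParser()
rp.modified()               # mark the rules as loaded
rp.parse(rules.splitlines())

# A well-behaved crawler asks these questions before fetching;
# an agent that ignores robots.txt simply never asks.
print(rp.can_fetch("GoogleBot", "/articles/index.html"))  # True
print(rp.can_fetch("GoogleBot", "/private/report.pdf"))   # False
print(rp.can_fetch("GPTBot", "/articles/index.html"))     # False
```

The file is advisory only: enforcement has to happen server-side, which is exactly the gap the rest of this piece addresses.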
Many of these agents aren’t inherently malicious, but that doesn’t mean they’re risk-free. They can degrade site performance with excessive requests, trigger data leakage by extracting proprietary information from public-facing websites and, most importantly, operate outside the visibility of most enterprise security systems.
Why binary defences no longer work
Conventional bot traffic management relies on binary logic: allow or block. These decisions are often based on static rules like IP reputation, user-agent signatures, and geographic origin. That approach may have worked for simple crawlers, but it cannot handle dynamic, intelligent agents that adapt in real time.
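The limits of that binary model are easiest to see in code. The following sketch is a deliberately simplistic rendering of static allow-or-block filtering; the block lists and signatures are hypothetical placeholders, not a real product’s rules.

```python
# Static, binary filtering: allow or block, decided from fixed attributes.
BLOCKED_IPS = {"203.0.113.7", "198.51.100.23"}        # IP reputation list
BLOCKED_UA_SIGNATURES = ("curl/", "python-requests/")  # user-agent signatures
BLOCKED_COUNTRIES = {"XX"}                             # geographic origin

def static_decision(ip: str, user_agent: str, country: str) -> str:
    """Allow or block -- no middle ground, no behavioural context."""
    if ip in BLOCKED_IPS:
        return "block"
    if any(sig in user_agent for sig in BLOCKED_UA_SIGNATURES):
        return "block"
    if country in BLOCKED_COUNTRIES:
        return "block"
    return "allow"

# An adaptive agent defeats every rule above by rotating IPs and
# presenting a browser-like user-agent string.
print(static_decision("203.0.113.7", "curl/8.4.0", "US"))   # block
print(static_decision("192.0.2.10", "Mozilla/5.0", "US"))   # allow
```

Every signal here is spoofable in isolation, which is why rule lists of this kind decay so quickly against agents that adapt in real time.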
According to Gartner, 33% of enterprise software applications will include agentic AI by 2028. These AI-driven agents blur the line between human and automated behaviour, with both good and bad intent emerging from automated systems as well as real users. Just because something looks human does not mean it has good intentions, and just because something is automated does not make it a threat.
Security teams are now stuck between a rock and a hard place. If they block all automated traffic, they risk excluding their sites from search indexes and AI content retrieval. But if they allow everything, they expose themselves to abuse.
Fraud prevention must now move beyond static tests and traditional tools. The old method of separating humans from bots is no longer enough. Intent-based detection is critical: identifying and blocking harmful traffic before cyberfraud happens requires better AI on the defender’s side. It’s no longer just about identifying who or what is accessing a site, but understanding why they’re there and what they aim to do.
Introducing intent-based controls
This is where intent-based analysis comes in. Rather than relying on fixed attributes that don’t accurately assess what a bot is doing and why, an intent-based approach assesses behavioural signals and context to evaluate each request. For example, if an AI agent is dynamically adjusting its browsing and purchasing behaviour based on inventory signals and pricing trends, it could be acting as an automated procurement agent. An AI-driven research assistant systematically mapping your site, extracting structured data, and generating tailored outputs for its user may be supporting competitive analysis or content training workflows. Only detailed intent-based analysis can reveal whether these agents represent legitimate use cases or potential abuse.
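A toy scoring function illustrates the shift from attributes to behaviour. The signal names, thresholds, and weights below are assumptions made for this sketch, not a real detection model; production systems would use far richer telemetry and learned weights.

```python
# Intent-based scoring sketch: score a session on what it does,
# not on static attributes like IP or user-agent.
def intent_score(session: dict) -> int:
    """Score 0-100: higher means more likely to be abusive automation."""
    score = 0
    # Sustained high request rates suggest bulk extraction.
    if session["requests_per_minute"] > 120:
        score += 40
    # Touching almost every distinct path suggests systematic site
    # mapping rather than ordinary browsing.
    if session["distinct_paths"] / max(session["total_requests"], 1) > 0.9:
        score += 30
    # Hitting robots.txt-disallowed paths is a strong negative signal.
    if session["hit_disallowed_paths"]:
        score += 30
    return score

scraper = {"requests_per_minute": 200, "distinct_paths": 190,
           "total_requests": 200, "hit_disallowed_paths": True}
shopper = {"requests_per_minute": 6, "distinct_paths": 9,
           "total_requests": 40, "hit_disallowed_paths": False}
print(intent_score(scraper))  # 100
print(intent_score(shopper))  # 0
```

The same framework scores a legitimate procurement agent low and a bulk scraper high, even when both present identical user-agent strings.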
Intent-based systems can fight AI with AI. They can make real-time decisions on whether to allow, challenge or block traffic, using telemetry from device intelligence, compounded with behavioural patterns and site-specific context. This improves security posture and makes sure that cybersecurity resources aren’t wasted on blocking legitimate, harmless bots.
The most effective solutions include dynamic feedback loops that continuously update detection models. This preserves seamless experiences for legitimate users, including those relying on AI agents, while maintaining robust fraud and abuse prevention against malicious human and automated traffic.
We need monetisation, not just detection
Detecting AI agents is just the start: the next challenge is turning that traffic into value.
Bot traffic isn’t just a technical problem anymore. It impacts brand visibility and customer experience, which can work either for or against enterprises. Rather than simply blocking or allowing AI agents, businesses now have the opportunity to monetise this traffic, working with partners to create paid access models for data, content, or APIs.
Some organisations may choose to grant controlled access to known LLM crawlers or AI agents for indexing, training, or research, and charge fees for usage or prioritised access. Others may offer tiered data products, where AI-driven traffic can interact with specific datasets under commercial terms. What matters is that this traffic becomes a managed, revenue-generating asset rather than a budget drain.
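A tiered access model of this kind can be sketched in a few lines. The tier names, quotas, prices, and token registry below are entirely hypothetical; real deployments would tie this to authentication, billing, and the detection layer described earlier.

```python
# Metered, tiered access for registered AI crawlers (illustrative only).
TIERS = {
    "research": {"monthly_quota": 10_000, "price_per_1k": 0.50},
    "training": {"monthly_quota": 1_000_000, "price_per_1k": 2.00},
}
REGISTERED_AGENTS = {"token-abc123": "research"}  # API token -> tier

def meter_request(token: str, usage: dict) -> bool:
    """Admit and count a request from a registered agent, or refuse it."""
    tier_name = REGISTERED_AGENTS.get(token)
    if tier_name is None:
        return False  # unknown automation falls through to normal bot defences
    if usage.get(token, 0) >= TIERS[tier_name]["monthly_quota"]:
        return False  # quota exhausted until the next billing cycle
    usage[token] = usage.get(token, 0) + 1
    return True

def monthly_bill(token: str, usage: dict) -> float:
    tier = TIERS[REGISTERED_AGENTS[token]]
    return usage.get(token, 0) / 1000 * tier["price_per_1k"]

usage: dict = {}
for _ in range(2500):
    meter_request("token-abc123", usage)
print(monthly_bill("token-abc123", usage))  # 1.25
```

Unregistered agents are simply handed back to the detection pipeline, while registered ones become a billable, observable traffic stream.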
With the right visibility into crawler and AI agent activity, enterprises can build monetisation strategies that turn previously uncontrolled automation into a source of predictable income, and make sure that AI agents engage with business sites on beneficial terms.
The cost of inaction
The shift will catch up to those who ignore it. AI agents drive up cloud costs by eating up bandwidth, distort analytics by inflating page views and clicks, and will only accelerate in volume. Businesses that remain complacent will, at best, pay more for a reduced quality of service and, at worst, expose themselves to unnecessary risk driven by unchecked bot traffic.
Switching to intent-based detection not only preserves quality and user experience in a world increasingly occupied by AI, but it gets your business ahead of the curve of emerging legislation. The enterprises that act now will decide how AI traffic interacts with their platforms, before the bots do.

