For most organizations, this will be the year that AI fully transitions from a workplace novelty into core, indispensable infrastructure. We have entered an era where AI agents execute tasks autonomously: modifying records, creating accounts, and pushing code through API calls that complete before any human can review them.
So far, so good. But the security stacks defending most organizations today were designed for a different reality: a world where humans are the only actors, processes are deterministic, data remains in recognizable formats, and trust is verified at the browser. That world no longer exists.
New research reveals this mismatch has created an alarming execution gap. Here are the 6 most critical findings.
1. Fast adoption, failing foundations
Twelve months ago, many organizations viewed AI governance as a future priority, something to finalize once adoption stabilized. But adoption didn't wait. Today, AI tools are deployed at 73% of surveyed organizations, while only 7% have reached advanced governance maturity capable of enforcing security and policy in real time.
This creates a massive 66-point structural deficit. Organizations are building their AI strategies at production speed on a security foundation that barely exists. The consequences are already evident: 39% of organizations have experienced an AI-related “near-miss” involving the unintended exposure or leakage of sensitive data.
A primary driver of this risk is shadow AI. Over a third of organizations report fragmented AI adoption, with multiple teams deploying tools independently under no shared framework. Nearly half (48%) of respondents predict that governance failures, specifically shadow AI and over-permissive access, will trigger the next major AI-related breach.
2. The paradox of AI security spending
In response to the obvious risks of AI, organizations are throwing money at the problem. A staggering 90% of organizations increased their AI security budgets this year, with nearly a third raising budgets by more than 25%.
But paradoxically, 29% of professionals feel less secure today than they did twelve months ago.
Why is confidence slipping while spending rises? The barrier isn't a lack of funds; it's that the architecture being funded reflects a pre-AI threat model. Existing security tools were designed for known file formats, predictable data flows, and human-speed interactions. Adding more budget to legacy stacks simply buys more of what already fails against AI-driven risk. Respondents cited business pressure to adopt AI faster than security can follow (34%), skill gaps (25%), and legacy tools that cannot interpret AI-specific threats (21%) as their biggest barriers.
3. The illusion of visibility
You cannot secure what you cannot see, and the research is clear: most AI activity is invisible to security teams.
An overwhelming 94% of respondents report gaps in AI activity visibility, and only 6% claim to see the full scope of their organization's AI pipeline. The most glaring technical blind spot is instance identification: 88% of organizations cannot reliably distinguish between a governed enterprise AI account and a personal AI account running on the same platform. When security teams cannot tell if an employee is using an authorized, governed AI tenant or a personal account lacking data protections, all downstream DLP policies and audit trails become effectively useless.
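To make the blind spot concrete, here is a minimal sketch of instance identification at an egress proxy, assuming a hypothetical AI platform that tags requests with a workspace identifier. The header name and tenant values are invented for illustration; many real platforms expose no comparable signal, which is precisely why the distinction is so hard to make:

```python
# Illustrative egress-proxy check. "X-AI-Workspace" and the tenant IDs
# are hypothetical; no specific vendor's API is implied.
CORPORATE_TENANTS = {"acme-enterprise"}  # governed workspace IDs
TENANT_HEADER = "X-AI-Workspace"         # assumed request header

def classify_request(headers: dict) -> str:
    tenant = headers.get(TENANT_HEADER)
    if tenant in CORPORATE_TENANTS:
        return "corporate"  # governed tenant: DLP and audit trails apply
    if tenant:
        return "personal"   # same platform, but an ungoverned account
    return "unknown"        # no signal at all: the blind spot itself

print(classify_request({"X-AI-Workspace": "acme-enterprise"}))  # corporate
print(classify_request({"X-AI-Workspace": "jane-free-tier"}))   # personal
print(classify_request({}))                                     # unknown
```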
4. AI renders legacy DLP powerless
If your organization relies on legacy Data Loss Prevention (DLP), your data is likely exposed in the AI era. The distinction is architectural: traditional DLP operates at the syntactic layer, matching character sequences and regex patterns (like credit card formats) against predefined rules. AI, however, operates at the semantic layer, meaning it transforms content while preserving its underlying intent.
Organizational security teams should try the "Transformation Test": if an employee takes a secret project description and asks an AI to summarize it, the AI might replace specific sensitive keywords with generic corporate jargon. To a traditional regex filter, the new output looks perfectly safe, even though the semantic value (the secret itself) remains identical. Security teams should verify whether their DLP can identify and stop a data breach in this scenario. Of the companies that participated in the research, only 8% have controls that evaluate content semantically; the remaining 92% have no DLP confirmed to work on AI-transformed content, meaning AI rewriting easily bypasses their pattern matching.
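To make the test concrete, here is a minimal sketch in Python of a legacy, regex-based DLP rule failing against an AI paraphrase. The codename, the card-number pattern, and both sample strings are invented for illustration and are not drawn from the research or any specific DLP product:

```python
import re

# Syntactic rules of the kind legacy DLP enforces: flag a hypothetical
# internal codename and anything shaped like a credit card number.
DLP_PATTERNS = [
    re.compile(r"\bProject\s+Aurora\b", re.IGNORECASE),  # invented codename
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),               # naive card shape
]

def legacy_dlp_blocks(text: str) -> bool:
    """True if any pattern matches -- the limit of syntactic inspection."""
    return any(p.search(text) for p in DLP_PATTERNS)

original = "Project Aurora launches Q3; card on file 4111 1111 1111 1111."
# What an AI summarizer might return: same secret, new surface form.
paraphrased = ("Our unannounced flagship initiative ships next quarter; "
               "the stored payment credential is already on file.")

print(legacy_dlp_blocks(original))     # True  -- the regex fires
print(legacy_dlp_blocks(paraphrased))  # False -- the secret walks out
```

The paraphrased line carries the same sensitive meaning, but no pattern can match it, which is exactly the semantic gap the research describes.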
5. The unsupervised autonomous agent
Perhaps the most alarming trend discovered by the researchers is the rise of the unsupervised agent. AI-driven risk is actively expanding from human misuse to machine autonomy.
A majority of organizations (56%) report real exposure to agentic AI risk, and these agents are being granted sweeping permissions. Survey respondents admit that their AI agents have write access to cloud collaboration tools (53%), email (40%), code repositories (25%), and even identity providers (8%). An agent with write access to an identity layer can autonomously create service accounts, elevate privileges, and grant external access.
Yet, security controls are completely blind to these non-human identities (NHIs). While 62% of organizations attempt to build AI security on zero trust principles, 65% admit their current zero trust controls cannot secure NHIs. Furthermore, emerging machine-to-machine protocols like the Model Context Protocol (MCP) are largely ignored, with only 8% of organizations having policies governing MCP traffic.
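What governing that traffic could look like is sketched below: a default-deny allowlist evaluated per non-human identity before an MCP tool call is forwarded. The agent names, tool names, and policy structure are illustrative assumptions, not part of the MCP specification or any shipping product:

```python
from dataclasses import dataclass

# Hypothetical per-identity policy: which non-human identity may invoke
# which tool, and whether the call is allowed, denied, or held for review.
POLICY = {
    "ci-agent":      {"repo.read": "allow", "repo.delete": "deny"},
    "support-agent": {"crm.read": "allow", "crm.update": "review"},
}

@dataclass
class ToolCall:
    agent_id: str  # the non-human identity making the request
    tool: str      # e.g., "repo.delete"

def evaluate(call: ToolCall) -> str:
    """Default-deny: unknown agents and unlisted tools are refused."""
    return POLICY.get(call.agent_id, {}).get(call.tool, "deny")

print(evaluate(ToolCall("ci-agent", "repo.delete")))        # deny
print(evaluate(ToolCall("support-agent", "crm.update")))    # review
print(evaluate(ToolCall("rogue-agent", "idp.create_user"))) # deny (default)
```

The same default-deny posture that zero trust applies to human users extends naturally to NHIs once they are enumerated.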
Because of this gap, 91% of organizations cannot reliably stop a risky AI-driven action before it executes. For every ten organizations running agentic AI, fewer than one can stop an agent from deleting a repository or modifying a customer record before the damage is done. Real-world operational fallout is already happening, with 37% reporting AI-caused operational issues in the past year, echoing high-profile exploits like the Reprompt attack and the EchoLeak vulnerability.
6. Most AI security runs on trust
When an AI tool violates a policy, what actually stops it? For most, the answer is nothing.
The largest single enforcement category in the survey is the honor system: 31% of organizations rely purely on written policies and employee compliance. Another 20% rely on post-event API scanning, and 11% have no policies at all. Only 23% enforce AI security inline at the point of action. Even then, 42% rely on blunt “block-or-allow” controls for entire applications, rather than granular activity controls.
Closing the execution gap
To close the execution gap, organizations must focus on four architectural priorities:
1. Close visibility gaps:
Expand activity-level monitoring across SaaS, API, and M2M traffic. The foundational step is distinguishing personal from corporate AI accounts.
2. Deploy semantic data protection:
Transition from legacy pattern-matching DLP to content-aware, semantic inspection that evaluates meaning at the point of transfer.
3. Extend zero trust to non-human identities:
Bring the protocol layer and identity layer together so that AI agents, API keys, and MCP communications are evaluated with the same rigor as human users.
4. Enforce inline before execution:
Move away from post-event logging. Build containment playbooks and establish approval gates that intercept risky agent actions at the request layer before they complete, as in the sketch below.
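Here is a minimal sketch of such a gate, assuming a hypothetical agent framework in which every tool call funnels through a single dispatch function. The risky-action list, the approval prompt, and the function names are all illustrative assumptions, not any specific product's API:

```python
# Illustrative pre-execution approval gate. RISKY_ACTIONS, the approval
# flow, and dispatch() are hypothetical; the point is that the decision
# happens BEFORE the action runs, not in a log reviewed afterward.

RISKY_ACTIONS = {"delete", "grant_access", "modify_record", "create_account"}

def request_human_approval(action: str, target: str) -> bool:
    # Placeholder: in practice this would open a ticket or chat approval
    # and block the agent until a reviewer responds.
    answer = input(f"Approve '{action}' on '{target}'? [y/N] ")
    return answer.strip().lower() == "y"

def dispatch(action: str, target: str, execute) -> str:
    """Intercept at the request layer; execute only after approval."""
    if action in RISKY_ACTIONS and not request_human_approval(action, target):
        return f"BLOCKED: {action} on {target} was not approved"
    execute()  # the agent's actual side effect runs only past the gate
    return f"EXECUTED: {action} on {target}"

# Example: an agent attempts to delete a repository.
print(dispatch("delete", "repo/payments", lambda: None))
```

The essential design choice is that the control point sits before execution; post-event scanning, which 20% of organizations rely on, cannot undo a deleted repository or a modified customer record.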
The maturity dimensions are mapped, and the gaps are clear. The tools and strategies exist to secure this new infrastructure; what remains is for organizations to make the decision to build it.