New report finds that AI agents, integrations and AI-native development platforms are taking hold, raising new and critical security governance challenges
Nudge Security, the leading innovator in SaaS and AI security governance, announced a new research report, AI Adoption in Practice: What Enterprise Usage Data Reveals About Risk and Governance, which provides insights into workforce AI adoption and usage patterns. The report found that AI use has moved beyond experimentation and general-purpose chat tools, and is now embedded into workflows, integrated with core business platforms, and increasingly capable of taking autonomous action.
“AI adoption is no longer experimental; it’s operational,” said Russell Spitler, CEO and co-founder of Nudge Security. “This shift means AI governance can’t be reactive or policy-only anymore. It requires real-time visibility into what AI tools are in use, how they’re integrated with critical systems, and where sensitive data is flowing. The teams that succeed will be the ones who treat AI governance as a continuous, adaptive process, not a one-time audit.”
Key findings include:
- Usage of core LLM providers is nearly ubiquitous. OpenAI is present in 96.0% of organizations, with Anthropic at 77.8%.
- The most-used AI tools are diversifying beyond chat. Meeting intelligence (Otter.ai at 74.2%, Read.ai at 62.5%), presentations (Gamma at 52.8%), coding (Cursor at 48.4%), and voice (ElevenLabs at 45.2%) are now widely present.
- Agentic tooling is emerging. Agent tools like Manus (22%), Lindy (11%), and Agent.ai (8%) are establishing an early footprint.
- Integrations are prevalent and varied. OpenAI and Anthropic are most commonly integrated with the organization’s productivity suite, as well as knowledge management systems, code repositories, and other tools.
- Usage is concentrated. Among the most active chat tools observed, OpenAI accounts for 67% of prompt volume.
- Data egress via prompts is non-trivial. 17% of prompts include copy/paste and/or file upload activity.
- Sensitive data risks skew toward secrets. Detected sensitive-data events are led by secrets and credentials (47.9%), followed by financial information (36.3%) and health-related data (15.8%).
The research report is based on anonymized, aggregated telemetry across Nudge Security customer environments. Rather than relying on surveys or self-reported usage, the analysis is grounded in direct observation of AI activity within enterprise environments. Unless otherwise noted, the percentages cited reflect the share of organizations where each tool or behavior was observed.
AI governance in practice lags behind this reality
AI governance has emerged as a top priority for security and risk leaders, but many programs remain narrowly focused on vendor approvals, acceptable use policies, or model-level risk. While necessary, these controls alone are insufficient. As this research illustrates, the most consequential AI risks now stem from how employees actually use AI tools day to day: what data they share, which systems AI is connected to, and how deeply AI is embedded into other tools and operational workflows. Understanding these intersections between people, permissions, and platforms is the foundation of effective AI governance.