
New Skyhigh Security Research Finds Less Than 10% of Enterprises Have Implemented Data Protection Policies, Controls for AI Apps


Skyhigh Security, a global leader in the Security Service Edge (SSE) and data security markets, has released its 2025 Cloud Adoption and Risk Report, offering a blueprint for securing the modern AI-powered enterprise backed by real-world insights, trends, and best practices from across the globe. The findings reveal that 94% of all AI services are at risk for at least one of the top LLM risk vectors—prompt injection/jailbreak, malware generation, toxicity, and bias—and 11% of files uploaded to AI applications include sensitive corporate content.


“Our research clearly shows that threats like Shadow AI and the unsanctioned use of generative AI applications are rising just as swiftly as AI adoption itself. If your organization hasn’t evaluated its security posture in this new era of AI and cloud, these statistics should serve as a critical reminder,” said Steve Tait, Chief Technology Officer at Skyhigh Security. “Both unsanctioned and sanctioned AI usage isn’t just a compliance risk, it also opens the door to the exfiltration of sensitive data. At this point, security and governance aren’t optional—they’re foundational.”

Shining a light on Shadow AI and unsanctioned app usage

The Shadow AI problem is an extension of the Shadow IT problem that enterprises have dealt with for the last decade. Skyhigh Security data finds that enterprises use a staggering 320 AI cloud applications on average – with DeepSeek emerging as a key driver of Shadow AI growth. In January 2025, Skyhigh Security recorded DeepSeek usage by 43% of customers, who uploaded a combined 176GB of data into the AI chatbot.

Traditional DLP and access control models are no longer suited to address the nuances of Shadow AI, prompt-based data exposure, and AI learning risks on their own. Security Service Edge (SSE) solutions give enterprises full visibility into all AI applications in use, along with usage metrics such as user counts, upload volumes, and request counts. In addition, SSE solutions provide a risk rating for each application, calculated against a set of security controls.
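To make the idea concrete, the sketch below shows how a per-application risk rating might be aggregated from weighted control checks. The control names, weights, and thresholds are illustrative assumptions, not Skyhigh Security's actual scoring model.

```python
# Minimal sketch: aggregating an AI application's risk rating from control checks.
# Controls, weights, and band thresholds are hypothetical assumptions for illustration,
# not Skyhigh Security's actual methodology.

CONTROL_WEIGHTS = {
    "data_encryption_at_rest": 0.30,
    "mfa_integration": 0.25,
    "tenant_data_isolation": 0.25,
    "no_training_on_customer_data": 0.20,
}

def risk_score(controls_passed: set[str]) -> float:
    """Return a 0.0-1.0 score; higher means more weighted controls are missing."""
    missing_weight = sum(
        weight for control, weight in CONTROL_WEIGHTS.items()
        if control not in controls_passed
    )
    return round(missing_weight, 2)

def risk_band(score: float) -> str:
    """Map a numeric score to the low/medium/high bands used in the report's framing."""
    if score >= 0.6:
        return "high"
    if score >= 0.3:
        return "medium"
    return "low"

# Example: an AI app that only supports MFA integration.
score = risk_score({"mfa_integration"})
print(score, risk_band(score))  # 0.75 high
```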

Microsoft Copilot, ChatGPT adoption continues to surge for global enterprises

It comes as no surprise that AI adoption is skyrocketing, with Skyhigh Security research revealing a 200% increase in AI application traffic within the last year, compared to a 23% increase in traffic to non-AI applications. Furthermore, data uploaded to AI applications is up 80% while other categories registered just 13% growth.

Copilot for Microsoft 365 and OpenAI’s ChatGPT lead as the top AI applications used by enterprises. While both are wildly popular, Microsoft Copilot dominates: 82% of all Skyhigh Security customers now use it within their enterprise—up from 18% last year. Within the same timeframe, traffic to Microsoft Copilot increased 3,600x, with data uploads increasing 6,000x.

As Microsoft Copilot adoption accelerates across the enterprise, organizations are prioritizing the extension of existing security controls to protect sensitive data within Copilot environments. This includes applying Data Loss Prevention (DLP), data-at-rest scanning, and controls that prevent the ingestion of sensitive data.
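As a rough illustration of preventing sensitive data ingestion, the sketch below screens a prompt or file payload for common sensitive-data patterns before it is sent to an AI assistant. The patterns and block policy are simplified assumptions; production DLP engines rely on classifiers, fingerprinting, and exact-data-match rather than a handful of regexes.

```python
import re

# Minimal sketch of a DLP-style pre-upload check for AI prompts and files.
# The patterns below are simplified assumptions for illustration only.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
}

def findings(payload: str) -> list[str]:
    """Return the names of any sensitive-data patterns detected in the payload."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(payload)]

def allow_upload(payload: str) -> bool:
    """Block the upload (return False) if any sensitive pattern is present."""
    matched = findings(payload)
    if matched:
        print(f"Upload blocked: {', '.join(matched)}")
        return False
    return True

# Example: a prompt containing an SSN-like string is blocked before ingestion.
allow_upload("Summarize this customer record: SSN 123-45-6789")
```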


Growing AI usage demands stronger compliance oversight

As organizations integrate AI solutions across departments and global operations, adhering to region-specific and industry-mandated compliance frameworks has become essential. The top regulations extending their reach to AI applications include the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), and the EU AI Act.

Skyhigh Security’s analysis finds that 95% of AI applications are at medium or high risk of violating the EU GDPR, and only 22% of AI applications adhere to one or more compliance certifications such as HIPAA, PCI, ISO, FISMA, and FedRAMP. In particular, the report reveals that 84% of AI applications don’t support data encryption at rest, while 83% don’t support integration with multi-factor authentication (MFA) tools.

