CIO Influence

Shadow AI Threat Grows Inside Enterprises as BlackFog Research Finds 60% of Employees Would Take Risks to Meet Deadlines

Productivity pressure is pushing senior leaders to bypass AI safeguards, increasing exposure to data leakage

BlackFog, the leader in AI security and anti data exfiltration (ADX) technology, released new research highlighting the growing risks of “Shadow AI” in the workplace, as employees increasingly turn to unapproved AI tools to meet deadlines and boost productivity.

The study, based on a survey of 2,000 respondents, found that 86% now use AI tools at least weekly for work-related tasks. However, more than one-third (34%) admit to using free versions of company-approved AI tools, raising concerns about where sensitive corporate data is stored, processed, and accessed.

Among respondents using AI tools not approved by their employer, 58% rely on free versions, which often lack enterprise-grade security, data governance, and privacy protections.

The findings suggest a broad acceptance of risk among employees: 63% of respondents believe it is acceptable to use AI tools without IT oversight if no company-approved option is provided. A "speed outweighs security" mindset reinforces this, with 60% of respondents agreeing that using unsanctioned AI tools is worth the security risks if it helps them work faster or meet deadlines. Additionally, 21% believe their employer would "turn a blind eye" to the use of unapproved AI tools as long as work is completed on time.

Additional Key Findings Include:

  • Senior-level leaders are more likely to accept risk: 69% of respondents at President or C-level and 66% of those at Director or Senior VP level believe speed trumps privacy or security. In contrast, just 37% in administrative roles and 38% in junior executive positions share this view.
  • Sensitive corporate data is being shared on unsanctioned AI tools: One-third (33%) of employees have shared research or data sets, more than a quarter (27%) have shared employee data such as staff names, payroll, or performance information, and 23% have shared financial statements or sales data.
  • Third-party integrations heighten risk: Around half (51%) of employees admit to connecting or integrating AI tools with other work systems or apps without IT department approval or oversight.

Commenting on the findings, Dr. Darren Williams, CEO and Founder of BlackFog, said, “This research is a stark indication not only of how widely unapproved AI tools are being used, but also the level of risk tolerance amongst employees and senior leaders. This should raise red flags for security teams and highlights the need for greater oversight and visibility into these security blind spots. AI is already embedded in our working world, but this cannot come at the expense of the security and privacy of the datasets on which these AI models are trained.”
