Netskope Threat Labs: Source Code Most Common Sensitive Data Shared to ChatGPT

Netskope, a leader in Secure Access Service Edge (SASE), unveiled new research showing that enterprise organizations experience approximately 183 incidents of sensitive data being posted to ChatGPT per month for every 10,000 users. Source code accounts for the largest share of sensitive data being exposed.

The findings are part of Cloud & Threat Report: AI Apps in the Enterprise, Netskope Threat Labs’ first comprehensive analysis of AI usage in the enterprise and the security risks at play. Based on data from millions of enterprise users globally, Netskope found that generative AI app usage is growing rapidly, up 22.5% over the past two months, amplifying the chances of users exposing sensitive data.

Growing AI App Usage
Netskope found that organizations with 10,000 users or more use an average of 5 AI apps daily, with ChatGPT seeing more than 8 times as many daily active users as any other generative AI app. At the current growth rate, the number of users accessing AI apps is expected to double within the next seven months.
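
As a rough illustration of how these figures fit together (a back-of-the-envelope check, not the report's methodology), the 22.5% growth over two months cited above implies a monthly growth rate of roughly 10.7%, which compounds to a doubling in roughly seven months:

    from math import log

    # Illustrative only: derive a monthly growth rate from the reported
    # 22.5% two-month growth, then compute the implied doubling time.
    two_month_growth = 0.225
    monthly_rate = (1 + two_month_growth) ** 0.5 - 1        # ~0.107
    doubling_time_months = log(2) / log(1 + monthly_rate)   # ~6.8 months
    print(f"monthly rate ~ {monthly_rate:.1%}, doubling ~ {doubling_time_months:.1f} months")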

Over the past two months, the fastest-growing AI app was Google Bard, which is currently adding users at a rate of 7.1% per week, compared with 1.6% for ChatGPT. At current rates, Google Bard is not expected to catch up to ChatGPT for more than a year, though the generative AI app space is expected to evolve significantly before then, with many more apps in development.

Users Inputting Sensitive Data into ChatGPT
Netskope found that source code is posted to ChatGPT more than any other type of sensitive data, at a rate of 158 incidents per 10,000 users per month. Other sensitive data being shared in ChatGPT includes regulated data (such as financial and healthcare data and personally identifiable information), intellectual property other than source code, and, most concerningly, passwords and keys, which are usually embedded in source code.

“It is inevitable that some users will upload proprietary source code or text containing sensitive data to AI tools that promise to help with programming or writing,” said Ray Canzanese, Threat Research Director, Netskope Threat Labs. “Therefore, it is imperative for organizations to place controls around AI to prevent sensitive data leaks. Controls that empower users to reap the benefits of AI, streamlining operations and improving efficiency, while mitigating the risks are the ultimate goal. The most effective controls that we see are a combination of DLP and interactive user coaching.”

Blocking or Granting Access to ChatGPT
Netskope Threat Labs is currently tracking ChatGPT proxies and more than 1,000 malicious URLs and domains from opportunistic attackers seeking to capitalize on the AI hype, including multiple phishing campaigns, malware distribution campaigns, and spam and fraud websites.

Blocking access to AI-related content and AI applications is a short-term solution to mitigate risk, but it comes at the expense of the potential benefits AI apps offer to supplement corporate innovation and employee productivity. Netskope’s data shows that in financial services and healthcare – both highly regulated industries – nearly 1 in 5 organizations have implemented a blanket ban on employee use of ChatGPT, while in the technology sector, only 1 in 20 organizations have done likewise.

“As security leaders, we cannot simply decide to ban applications without impacting on user experience and productivity,” said James Robinson, Deputy Chief Information Security Officer at Netskope. “Organizations should focus on evolving their workforce awareness and data policies to meet the needs of employees using AI products productively. There is a good path to safe enablement of generative AI with the right tools and the right mindset.”

To enable the safe adoption of AI apps, organizations must center their approach on identifying permissible apps and implementing controls that empower users to use them to their fullest potential while safeguarding the organization from risk. Such an approach should include domain filtering, URL filtering, and content inspection to protect against attacks. Other steps to safeguard data and securely use AI tools include:

  • Block access to apps that do not serve any legitimate business purpose or that pose a disproportionate risk to the organization.
  • Employ user coaching to remind users of company policy surrounding the use of AI apps.
  • Use modern data loss prevention (DLP) technologies to detect posts containing potentially sensitive information (a minimal illustrative sketch follows this list).
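
As an illustration of the DLP and user-coaching controls recommended above, the following is a minimal sketch, in Python, of scanning text for obvious secrets before it is posted to a generative AI app. It is not Netskope's product or the methodology behind the report; the pattern set and function names are hypothetical examples, and a production DLP engine would use far more robust detection.

    import re

    # Hypothetical patterns for the kinds of secrets the report highlights
    # (passwords and keys, often embedded in source code). Illustrative only.
    SECRET_PATTERNS = {
        "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
        "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
        "password_assignment": re.compile(r"(?i)\b(password|passwd|pwd)\s*[:=]\s*\S+"),
        "generic_api_key": re.compile(r"(?i)\b(api[_-]?key|secret[_-]?key)\s*[:=]\s*\S{16,}"),
    }

    def scan_prompt(text: str) -> list[str]:
        """Return the names of any secret patterns found in text bound for an AI app."""
        return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]

    def coach_or_block(text: str) -> str:
        """Block posts containing likely secrets; otherwise show a policy reminder."""
        findings = scan_prompt(text)
        if findings:
            return "BLOCKED: possible sensitive data detected (" + ", ".join(findings) + ")"
        return "ALLOWED: reminder - company policy prohibits pasting proprietary code or customer data"

    sample = 'def connect():\n    password = "hunter2"  # placeholder credential\n'
    print(coach_or_block(sample))  # BLOCKED: possible sensitive data detected (password_assignment)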

Read the full Cloud & Threat Report: AI Apps in the Enterprise here. For more information on cloud-enabled threats and the latest findings from Netskope Threat Labs, visit Netskope’s Threat Research Hub. To receive Netskope Threat Labs blog posts, subscribe here.

In conjunction with the report, Netskope announced new solution offerings from SkopeAI, the Netskope suite of artificial intelligence and machine learning (AI/ML) innovations. SkopeAI leverages the power of AI/ML to conquer the limitations of complex legacy tools and provide protection using AI-speed techniques not found in other SASE products. Learn more about SkopeAI here.
