
GenAI Governance Is the New GDPR, and Most CISOs Are Already Behind


Despite advance warning, GDPR still caught many businesses off guard. Once it took effect, organizations hurried to update privacy policies, train employees, and respond to auditors with one hand while patching systems with the other. GenAI presents a similar challenge, but it is arriving faster and with greater complexity than anything we’ve seen before. Employees are already using it, regulators are acting swiftly, and customers are demanding proof of safe implementation. Organizations must act now to get ahead of the shift and maintain compliance.

Why This Isn’t a Far-Off Problem

It’s tempting to view GenAI governance as something that can be delayed until regulations are in place or tools are fully developed. That’s a mistake. Unlike past technology shifts, there hasn’t been a slow rollout or controlled deployment. GenAI adoption is organic, grassroots, and employee-driven, which makes it harder to track and much more difficult to govern retroactively.

The external pressure is also real. Regulators, customers, and partners want proof now that enterprises are protecting sensitive data in GenAI workflows. Waiting until there’s a clear legal requirement is waiting too long.

Regulators are moving quickly. The EU AI Act is already in force and will shape global standards. U.S. regulators are signaling their intention to apply existing laws and standards, such as GDPR, HIPAA, and PCI DSS, to AI workflows, and some states, including California and New York, are already developing their own regulations. Industry frameworks are also emerging, with auditors preparing to incorporate AI into their evaluations.

Why Governance Feels Harder This Time

At first glance, regulating AI doesn’t seem much different from previous challenges. But GenAI governance introduces a whole new wrinkle – unpredictability.

The line between harmless and harmful isn’t always clear. A single prompt can reveal proprietary information, and a single output can carry confidential content into places it doesn’t belong. Traditional controls, such as DLP and permissions audits, weren’t designed for this level of speed or complexity.

For GDPR, the emphasis was on privacy. Being diligent was somewhat tedious, but the tasks were familiar – identify personal data, implement controls, and document your compliance. GenAI, on the other hand, makes things more complicated because it’s less about the data and more about how that data is transformed, remixed, and revealed in unexpected ways. Organizations face risks such as Copilot surfacing sensitive spreadsheets buried in SharePoint, employees pasting proprietary code into ChatGPT to debug a script, and AI-generated outputs that include just enough context to leak confidential details.

Traditional tools weren’t designed for this. DLP rules overlook nuance, permissions audits can’t keep pace with collaboration, and shadow AI means activity happens entirely outside sanctioned tools.
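
To make the point about nuance concrete, here is a minimal, hypothetical sketch of a pattern-based DLP check. The regex and the sample prompts are invented for illustration and are not drawn from any particular product. A rule like this catches an obvious identifier but has no way to recognize that a plainly worded prompt is just as sensitive.

```python
import re

# Hypothetical pattern-based DLP rule: a simplified credit-card regex.
CARD_PATTERN = re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b")

# Two invented prompts: one with an obvious identifier, one that is
# sensitive in plain language with nothing for the pattern to match.
prompts = [
    "Customer card on file: 4111-1111-1111-1111",
    "Summarize why our largest customer plans to walk away in Q3",
]

for prompt in prompts:
    verdict = "blocked" if CARD_PATTERN.search(prompt) else "allowed"
    print(f"{verdict}: {prompt}")
```

The first prompt is blocked, the second sails through, and the second is arguably the more damaging leak.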

The Trap of Over-Governance

When faced with uncertainty, many organizations tend to exert control over everything. Lock down access, limit tools, and overload employees with policy. On the surface, this seems like a safe choice until you realize employees will find workarounds the moment controls get in the way of their work.

That instinct to clamp down is understandable. No CISO wants to explain to their board why sensitive data ended up in a public model. But over-governance introduces its own risks. When we prioritize control over usability, we simply trade visible risks for unseen ones. Employees won’t stop using AI; they’ll just stop telling you about it. That’s how shadow GenAI becomes the default. You can’t manage what you can’t see, and shadow GenAI is the hardest thing to see.

Over-governance often backfires because it slows productivity, and overly restrictive policies drive employees away from approved tools. In the case of GenAI, users bypass controls with public tools that lack enterprise protections, and shadow AI grows. Trust between the security team and the business units erodes when security becomes the team of “no” rather than the team that enables safe use. Meanwhile, false confidence sets in as leaders assume risk is managed because policies exist, while data quietly leaks elsewhere.

Effective governance must go beyond simple “yes” or “no” answers. It needs to meet employees where they are, protect what matters most, and adapt as needs evolve. The best approach isn’t to “block everything” or “allow everything.” It’s about building visibility, implementing context-aware controls, and establishing governance that adapts as quickly as the tools do.

What Security Leaders Can Do Now

The governance challenge feels daunting, but it doesn’t have to be overwhelming.

The first step is reframing the problem. GenAI isn’t a brand-new category of risk; it’s an amplifier. It takes existing issues – excessive data permissions, poor classification, and lack of visibility – and makes them more dangerous. Addressing those fundamentals addresses much of the GenAI risk as well.

CISOs should focus less on creating the perfect policy and more on establishing solid foundations. This involves gaining visibility into AI usage, implementing context-aware data classification, and developing adaptive policies that balance protection with productivity.


The organizations that succeed won’t be those with the largest set of rulebooks and commandments; they’ll be those with the clearest understanding of what’s happening, combined with protections employees can genuinely live with. Here are important steps to follow:

1. Assess visibility.

Identify where GenAI is in use, including shadow GenAI. Avoid waiting for an incident to find out that finance has been pasting forecasts into ChatGPT.

2. Classify with context.

Go beyond regex and filenames. Semantic, context-aware classification determines whether plan_final.docx is harmless or highly sensitive; a minimal sketch of the idea follows this list.

3. Tighten permissions.

Least privilege shouldn’t be optional; it’s how you prevent Copilot from exposing sensitive board minutes to an intern. Over-shared folders are a common source of risk.

4. Treat outputs like inputs.

AI-generated text can leak just as easily as prompts. Monitor how generated content is shared downstream; the sketch below applies the same check to a generated output.

5. Build adaptive governance.

Policies written today may not work in six months, so plan to revisit them as tools, usage patterns, and regulations change.
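
As a rough illustration of steps 2 and 4, the sketch below applies one context-aware check to both a blandly named file and an AI-generated output. The classify() function, the topic list, and the sample content are all hypothetical stand-ins; a real semantic classifier would use a trained model rather than keyword matching.

```python
from dataclasses import dataclass

# Hypothetical list of topics the organization treats as sensitive.
SENSITIVE_TOPICS = {
    "board minutes", "m&a", "salary", "forecast", "source code", "customer pii",
}

@dataclass
class Finding:
    name: str    # file name or output identifier
    label: str   # e.g. "restricted" or "general"
    reason: str  # which topic triggered the label

def classify(name: str, text: str) -> Finding:
    """Toy semantic-style check: look at the content, not just the filename."""
    lowered = text.lower()
    for topic in SENSITIVE_TOPICS:
        if topic in lowered:
            return Finding(name, "restricted", f"mentions {topic!r}")
    return Finding(name, "general", "no sensitive topics detected")

if __name__ == "__main__":
    # A blandly named file can still be highly sensitive (step 2) ...
    doc = classify("plan_final.docx", "Draft board minutes covering the M&A timeline.")
    # ... and an AI-generated answer deserves the same scrutiny (step 4).
    output = classify("copilot_reply_1042", "Q3 forecast summary: revenue up 12%...")
    for finding in (doc, output):
        print(f"{finding.name}: {finding.label} ({finding.reason})")
```

The design point is that the label comes from what the content says, not what the file is called, and that generated outputs get the same scrutiny as the data that produced them.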

What the First 90 Days Typically Look Like

The first three months of enterprise-wide GenAI adoption usually follow a predictable pattern. At first, everything is coming up roses – productivity surges, innovation accelerates, and employees feel empowered. Then the cracks start to show – sensitive data in prompts, compliance questions from leadership, and uncomfortable findings in audit logs. By the 90-day mark, enthusiasm gives way to fire drills.

This cycle is happening across industries right now. Organizations that aren’t prepared for it find themselves reacting after the fact, scrambling to fix permission issues and retrain employees once leaks have already occurred.

The lesson is straightforward – you don’t have to stop the ride, but you do need a seatbelt. If you don’t put governance in place during those first 90 days, you’ll spend the next 90 cleaning up.

A Final Note to CISOs

We’ve learned this lesson before. GDPR showed us that waiting until compliance is required is the most expensive choice. With GenAI, the process moves faster, and the difference between opportunity and risk is measured in months rather than in years.

We have an opportunity to do it differently this time. We can integrate governance into early adoption. We can balance productivity with security. We can show boards, employees, and customers that innovation and security work together, not against each other.

Yes, GenAI governance is the new GDPR. But unlike last time, we can lead from the front. The question is whether we’ll choose to do it.

About Concentric AI

Concentric AI is all about intelligent data security made easy. Its Semantic Intelligence™ platform uses context-aware AI to discover sensitive data, monitor risks, automate remediation, simplify compliance, and accelerate investigations.


