CIO Influence
CIO Influence Interviews Machine Learning Natural Language Security

CIO Influence Interview with Pravin Kothari, Co-founder & CEO, AppSOC

CIO Influence Interview with Pravin Kothari, Co-founder & CEO, AppSOC

In this Q&A Pravin Kothari, Co-founder & CEO of AppSOC, discusses security posture management, the explosion of AI applications and the software supply chain as a key security pillar:

——————-

Hi Pravin – can you tell us about your background and how that has led you to tackling security and governance for AI systems.

I have been a serial entrepreneur and built several companies over the past 20+ years. I was a co-founder of ArcSight – the pioneer in the SIEM space managing security events, which was acquired by HP for $1.6 billion dollars. I founded Risk Vision, which focused on risk management and IT-GRC, and was acquired by Agiliance. I then founded CipherCloud which was the first mover in the CASB space and was acquired by Lookout.

My current company, AppSOC combines many elements from my background in security, risk management, compliance, and data and application protection. AppSOC initial focus was on application security posture management, and risk-based vulnerability management. Our platform is very powerful at consolidating security findings from hundreds of tools, prioritizing them based on business context, and automating remediation workflows.

Also Read: Infogain Appoints Mohit Bhat as Chief Delivery & Innovation Officer

This leads to our latest launch, which is a natural extension of AppSOC’s capabilities: protecting the exploding world of AI systems, large language models, datasets, prompts, and all the applications they connect to. We are the first application security vendor to tackle AI security.

Is the explosion of AI applications outpacing security? What guardrails do companies need to keep it safe?

While it’s still early days, a lot of companies are jumping into experimenting and deploying AI capabilities for their business applications. These projects are often run by data scientists, outside of traditional security teams. But security is rapidly emerging as one of the top concerns and potential blocker of AI innovation.

We definitely need security guardrails for AI, as well as compliance guardrails, and data governance guardrails. The field has been evolving so quickly, that security teams are trying to catch up and regain control. There are also good frameworks being developed, such as the OWASP Top 10 for LLMs, and MITRE ATLAS, which defines risks similarly to their ATT&CK framework.

Protecting the software supply chain has become a key security pillar, but what do we know about AI supply chains?

We know from Solarwinds, Log4j and many other incidents that software supply chain security is critical. Now, AI systems bring in many new types of supply chain challenges. A great example is Hugging Face – a hugely popular site for AI scientists and developers. They offer about 800,000 open-source LLMs for download, modifying, and sharing with the community. That’s great for experimenting and development, but of course, it has already attracted bad actors and malware.

The new AI supply chain includes these models, datasets – both training data and production data, as well as the code and APIs that connect these systems to other applications. If your enterprise is building AI applications, you need to know who is publishing those models, what data were they trained with. You also need to know where all these components came from and who could have accessed them, and then track and remediate any vulnerabilities that may have emerged along the way.

What about security posture management – does that apply to AI as well?

Yes – absolutely.  Posture management is about knowing the risk posture of your existing AI stack. It all begins with MLOps pipelines that the data science teams are building. Do you have good visibility into these platforms? How are they training and fine tuning the models? Has everything been adequately configured and hardened? Unfortunately, there are not yet security benchmarks for any of the MLOps environments. This is where security posture management comes into the picture – ensuring that the platforms are secure and properly configured when these tools are deployed and in use.

What do you recommend for CISOs or security managers trying to get the bigger picture on AI projects?

First, the focus should be on enabling AI securely, not trying to stop it. You’re not going to be able to shut it down, and organizations need to take AI seriously to remain competitive. But my recommendation is to create AI policies, implement guardrails on AI usage, including monitoring the data that the employees are sending as calls to third party hosted models. And if you’re building AI applications, then you must treat the AI systems as software systems and follow all the best practices for application security. You also need to correlate security findings coming from the code base and link them to security issues coming from the AI side.

Also Read: ZeroTier Raises $13.5 Million in Series A; Appoints Andrew Gault as CEO

How does AppSOC help with this?

AppSOC is an application security platform that aggregates findings coming from hundreds of security point solutions. This could be your application security scanners integrated into the CI/CD platform, or your cloud security posture management tools, or the endpoint scanners deployed on your infrastructure apps. First it aggregates all those findings, deduplicates them, correlates, and prioritizes issues so that the development teams get a short, prioritized, and actionable list of security findings and can address the most important ones first.

On the AI side, AppSOC also has built tools that start with “Shadow AI” discovery, scanning the models that are deployed in the MLOps platforms, and detecting misconfigurations. Beyond that we are also creating AI bills-of-materials, and a knowledge base with risk ratings of LLMs from sources like Hugging Face. We’re also detecting content anomalies and protecting sensitive data against leaks. Our focus is on both governance and protection.

Are the new AI capabilities part of the same platform to track and fix these issues?

Absolutely. AppSOC can manage security risk for the entire application stack with a single platform. At the end of the day, AI systems are still software applications, so it’s important to combine the security findings coming from Gen AI applications and core business applications so that the customers get a full view, end-to-end view of their risk and security posture.

Also Read: DapuStor Extends Collaboration with Marvell to Unveil Cutting-Edge Flexible Data Placement (FDP) Technology for QLC SSDs

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

Pravin Kothari is a distinguished entrepreneur and cybersecurity expert with over two decades of experience in the technology industry. He is the founder and CEO of CipherCloud, a pioneer in cloud security and data protection solutions. Pravin’s innovative approach has helped transform the way organizations secure their sensitive data in the cloud.

AppSOC is a leader in Application Security Posture Management (ASPM) and Code-to-Cloud Vulnerability Management. Our mission is to break through security silos, consolidate data across hundreds of tools, prioritize findings based on real business risk, and reduce the friction between DevSecOps teams to make security more precise and cost-effective.

Related posts

Extreme Networks Expands CoPilot AI-Driven Capabilities to Empower Taxed Network Administrators

CIO Influence News Desk

Resecurity showcases Dark Web Monitoring and Threat Intelligence solutions

CIO Influence News Desk

Futurex Partners with Google to Deliver Google Workspace Client-Side Encryption

CIO Influence News Desk