CIO Influence

JFrog and Qwak Create Secure MLOps Workflows for Accelerating the Delivery of AI Apps at Scale


New native integration empowers organizations to deliver ML applications efficiently with end-to-end software supply chain visibility, governance, and security

JFrog Ltd., the Liquid Software company and creator of the JFrog Software Supply Chain Platform, announced a new technology integration with Qwak, a fully managed ML platform, that brings machine learning models alongside traditional software development processes to streamline, accelerate, and scale the secure delivery of ML applications.



“Currently, data scientists and ML engineers use a myriad of disparate tools, mostly disconnected from the organization’s standard DevOps processes, to mature models to release. This slows MLOps processes down, compromises security, and increases the cost of building AI-powered applications,” said Gal Marder, Executive Vice President of Strategy, JFrog. “The combination of the JFrog Platform – with Artifactory and Xray at its core – plus Qwak provides users with a complete MLSecOps solution that brings ML models in line with other software development processes, creating a single source of truth for all software components across Engineering, MLOps, DevOps, and DevSecOps teams so they can build and release AI applications faster, with minimal risk and less cost.”

Uniting JFrog Artifactory and Xray with Qwak’s ML platform brings ML applications alongside all other software development components in a modern DevSecOps and MLOps workflow, enabling data scientists, ML engineers, developers, and security and DevOps teams to build ML applications quickly, securely, and in compliance with regulatory guidelines. The native Artifactory integration connects JFrog’s universal ML model registry with a centralized MLOps platform, so users can build, train, and deploy models with greater visibility, governance, versioning, and security. A centralized platform for ML model deployment also lets users focus less on infrastructure and more on their core data science tasks.

IDC research indicates that while AI/ML adoption is on the rise, the cost of implementing and training models, the shortage of trained talent, and the absence of established software development life-cycle processes for AI/ML are the top three inhibitors to realizing the full benefits of AI/ML at scale.


“Building ML pipelines can be complicated, time-consuming, and costly for organizations looking to scale their MLOps capabilities. These homegrown solutions are not equipped to manage and protect the process of building, training, and tuning ML models at scale, with little to no auditability,” said Jim Mercer, Program Vice President, Software Development, DevOps, and DevSecOps. “Having a single system of record that can help automate development, provide a documented chain of provenance, and secure ML models alongside all other software components offers a compelling alternative for optimizing the ML process while adding model security and compliance.”

Without the right infrastructure, platform, and processes for ML operations (MLOps), it is challenging to build, manage, and scale complex ML infrastructure, deploy models quickly, and secure them without incurring excessive costs. Companies often struggle to manage this infrastructure complexity, leading to expensive, time-consuming authentication and security protocols across their various development environments.

“AI and ML have recently transformed from being a distant future prospect to a ubiquitous reality. Building ML models is a complex and time-intensive process, which is why many data scientists are still struggling to turn their ideas into production-ready models,” said Alon Lev, CEO, Qwak. “While there are plenty of open source tools on the market, putting all of those together to build a comprehensive ML pipeline isn’t easy, which is why we’re thrilled to work with JFrog on a solution for automating ML artifacts and releases in the same, secure way customers manage their software supply chain with JFrog Artifactory and Xray.”

The importance of secure, end-to-end MLOps processes was further underscored by the JFrog Security Research team’s discovery of malicious ML models on Hugging Face, a widely used AI model repository. The team found that several malicious models hosted there posed a code-execution threat, which could lead to data breaches, system compromise, or other malicious actions.


