CIO Influence

NVIDIA and Hugging Face to Connect Millions of Developers to Generative AI Supercomputing

NVIDIA and Hugging Face announced a partnership that will put generative AI supercomputing at the fingertips of millions of developers building large language models (LLMs) and other advanced AI applications.

By giving developers access to NVIDIA DGX Cloud AI supercomputing within the Hugging Face platform to train and tune advanced AI models, the partnership will help accelerate industry adoption of generative AI. Enterprises will be able to custom-tailor LLMs with their own business data for industry-specific applications, including intelligent chatbots, search and summarization.

“Researchers and developers are at the heart of generative AI that is transforming every industry,” said Jensen Huang, founder and CEO of NVIDIA. “Hugging Face and NVIDIA are connecting the world’s largest AI community with NVIDIA’s AI computing platform in the world’s leading clouds. Together, NVIDIA AI computing is just a click away for the Hugging Face community.”

As part of the collaboration, Hugging Face will offer a new service — called Training Cluster as a Service — to simplify the creation of new and custom generative AI models for the enterprise. Powered by NVIDIA DGX Cloud, the service will be available in the coming months.

“People around the world are making new connections and discoveries with generative AI tools, and we’re still only in the early days of this technology shift,” said Clément Delangue, co-founder and CEO of Hugging Face. “Our collaboration will bring NVIDIA’s most advanced AI supercomputing to Hugging Face to enable companies to take their AI destiny into their own hands with open source to help the open-source community easily access the software and speed they need to contribute to what’s coming next.”

Supercharging LLM Customization and Training Within Hugging Face
The Hugging Face platform lets developers build, train and deploy state-of-the-art AI models using open-source resources. Over 15,000 organizations use Hugging Face, and its community has shared over 250,000 models and 50,000 datasets.

The DGX Cloud integration with Hugging Face will bring one-click access to NVIDIA's multi-node AI supercomputing platform. Hugging Face users will be able to connect to NVIDIA AI supercomputing through DGX Cloud, which provides the software and infrastructure needed to rapidly train and tune foundation models with unique data, driving a new wave of enterprise LLM development. With Training Cluster as a Service, powered by DGX Cloud, companies will be able to use their own data within Hugging Face to create efficient, custom models in record time.

DGX Cloud Speeds Development and Customization for Massive Models
Each instance of DGX Cloud features eight NVIDIA H100 or A100 80GB Tensor Core GPUs for a total of 640GB of GPU memory per node. NVIDIA Networking provides a high-performance, low-latency fabric that ensures workloads can scale across clusters of interconnected systems to meet the performance requirements of advanced AI workloads.
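The per-node figures above lend themselves to a quick back-of-the-envelope capacity check. The sketch below (a hypothetical helper, not an NVIDIA or Hugging Face API) totals the GPU memory on one such node and estimates how many trainable parameters could fit, assuming a common rule of thumb of roughly 16 bytes per parameter for mixed-precision training with the Adam optimizer (fp16 weights and gradients, an fp32 master copy, and two fp32 optimizer moments); activation memory and framework overhead are ignored.

```python
# Rough capacity math for a DGX Cloud-style node:
# 8 GPUs x 80 GB = 640 GB of GPU memory per node.
GPUS_PER_NODE = 8
GB_PER_GPU = 80


def node_memory_gb(gpus: int = GPUS_PER_NODE, gb_per_gpu: int = GB_PER_GPU) -> int:
    """Total GPU memory available on one node, in GB."""
    return gpus * gb_per_gpu


def max_trainable_params_billions(total_gb: float, bytes_per_param: int = 16) -> float:
    """Optimistic upper bound on trainable parameters (in billions).

    Assumes ~16 bytes/parameter for mixed-precision Adam training
    (fp16 weights + gradients, fp32 master weights, two optimizer
    moments); activations and overhead are not counted.
    """
    return total_gb * 1e9 / bytes_per_param / 1e9


per_node = node_memory_gb()
print(per_node)                                   # 640
print(max_trainable_params_billions(per_node))    # 40.0
```

By this estimate, a single eight-GPU node could hold the training state of a model on the order of tens of billions of parameters, which is why the multi-node fabric matters: larger foundation models must shard their state across interconnected systems.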

Support from NVIDIA experts is included with DGX Cloud to help customers optimize their models and quickly resolve development challenges.

