Vultr Announces Addition of NVIDIA GH200 Grace Hopper Superchip to AI-Powered Cloud GPU Offerings

Top-of-the-line NVIDIA GPUs to enable advanced, high-performance computing and AI workloads worldwide

Vultr, the world’s largest privately held cloud computing platform, announced the addition of the NVIDIA GH200 Grace Hopper Superchip to its Cloud GPU offering to accelerate AI training and inference across Vultr’s 32 cloud data center locations.

Following the launch of its first-of-its-kind GPU Stack and Container Registry, Vultr is providing cloud access to the NVIDIA GH200 Grace Hopper Superchip. The NVIDIA GH200 leverages the flexibility of the Arm® architecture to create a CPU server architecture designed from the ground up for the challenges of AI, high-performance computing (HPC), and advanced analytics. The NVIDIA GH200 joins Vultr’s other NVIDIA GPU offerings, which include the HGX H100, A100 Tensor Core, L40S, A40, and A16 GPUs.

“The NVIDIA GH200 Grace Hopper Superchip delivers unrivaled performance and TCO for scaling out AI inference. We are excited to be one of the first cloud providers to deliver global access to this essential technology,” said J.J. Kardwell, CEO of Vultr’s parent company, Constant. “With our global rollout of the NVIDIA GH200 Grace Hopper Superchip, AI innovators now have access to the most powerful GPU for AI inference and the ability to access it worldwide to support their local market latency, data sovereignty, compliance, and privacy goals.”

The rapid adoption of large language models requires new levels of computing power to maximize the latest GPU advances and CPU compute resources. The NVIDIA GH200 Grace Hopper Superchip brings the new NVIDIA NVLink®-C2C to connect NVIDIA Grace™ CPUs with NVIDIA Hopper™ GPUs, delivering 7X higher aggregate memory bandwidth to the GPU compared to today’s fastest servers with PCIe Gen 5. This enables the GPU to have direct access to almost 600GB of memory and delivers up to 10X higher performance for applications running terabytes of data.
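
To make that memory model concrete, here is a minimal CUDA sketch of the generic unified-memory pattern the GH200 accelerates: one managed allocation, larger than a typical single GPU’s HBM, that the CPU and a kernel both touch through the same pointer. This is illustrative only and not Vultr- or NVIDIA-supplied code; the buffer size and kernel are hypothetical.

```cuda
// Illustrative sketch: one managed allocation shared by CPU and GPU through a
// single pointer. The 96 GiB size is hypothetical and chosen to exceed a typical
// single GPU's HBM capacity; on a GH200-class system the GPU reaches the
// CPU-attached portion coherently over NVLink-C2C.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float *x, size_t n, float a) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const size_t n = (size_t)24ull * 1024 * 1024 * 1024;    // 24 Gi floats = 96 GiB (hypothetical)
    float *x = nullptr;
    if (cudaMallocManaged(&x, n * sizeof(float)) != cudaSuccess) {
        std::printf("managed allocation failed\n");
        return 1;
    }
    for (size_t i = 0; i < n; ++i) x[i] = 1.0f;              // CPU writes through the shared pointer
    const unsigned threads = 256;
    const unsigned blocks  = (unsigned)((n + threads - 1) / threads);
    scale<<<blocks, threads>>>(x, n, 2.0f);                  // GPU reads and writes the same allocation
    cudaDeviceSynchronize();
    std::printf("x[0] = %f\n", x[0]);                        // CPU reads the result, no explicit copies
    cudaFree(x);
    return 0;
}
```

The same code compiles and runs on PCIe-attached GPUs; a coherent CPU-GPU link changes the speed of the access path rather than the programming model.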

“The NVIDIA GH200 Grace Hopper Superchip was purpose-built to meet today and tomorrow’s most data-intensive problems head-on,” said Matt McGrigg, director, global business development, NVIDIA cloud partners. “With the NVIDIA GH200 in its cloud GPU lineup, Vultr is making some of the industry’s most powerful computing resources available to customers worldwide.”

Vultr’s mission is to make high-performance cloud computing easy to use, affordable, and locally accessible for businesses and developers worldwide. With access to Vultr’s 32 cloud data center locations across six continents, AI innovators can now benefit globally from key features of the NVIDIA GH200 Grace Hopper Superchip, including:

  • AI Training and Inference – As AI training models have gotten dramatically larger, so have the resulting AI inference models. The GH200 Grace Hopper Superchip delivers 7X more fast-access memory than traditional accelerated inference solutions and dramatically more FLOPs than CPU inference solutions. For large transformer-based models such as ChatGPT, the NVIDIA GH200 delivers 4X more inference performance compared to the prior-generation NVIDIA A100 Tensor Core GPU and 2X the training performance for NVLink- and NVSwitch-based systems compared to the H100 Tensor Core GPU with x86 and Ethernet.
  • Graph Neural Networks (GNNs) – Graph neural networks apply the predictive power of deep learning to improve drug discovery, computer graphics, genomics, and materials science. Some of the more complex graphs processed by GNNs can have billions of nodes, and the 7X increase in fast-access memory provided by the NVIDIA GH200 can boost training performance by up to 10X compared to the NVIDIA A100 GPU.
  • High-Performance Computing (HPC) – HPC is evolving toward a fusion of AI and simulation, requiring tight integration between CPUs and GPUs to deliver the necessary performance. The NVIDIA GH200, with its coherent NVLink-C2C interconnect, creates a unified memory address space to simplify model programming (a brief sketch of this pattern follows this list). It combines high-bandwidth, low-power LPDDR5X system memory with HBM3 to take full advantage of NVIDIA GPU acceleration and high-performance Arm cores in a well-balanced system. The NVIDIA GH200 is supported by the NVIDIA HPC software development kit and the full suite of CUDA® and CUDA-X™ libraries, which accelerate over 3,000 GPU applications.
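
As a rough illustration of the unified-memory programming the HPC item describes, the sketch below uses standard CUDA managed-memory hints (cudaMemAdvise and cudaMemPrefetchAsync, generic CUDA calls rather than GH200-specific APIs) to keep a large dataset resident in CPU-side capacity memory while staging the slice a kernel needs into GPU HBM. The sizes, placement policy, and kernel are hypothetical assumptions for the sketch.

```cuda
// Illustrative sketch: generic CUDA unified-memory placement hints, not a
// GH200-specific API. Dataset and slice sizes, and the kernel, are hypothetical.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void saxpy_slice(float *x, size_t n, float a, float b) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) x[i] = a * x[i] + b;                           // placeholder compute on the staged slice
}

int main() {
    const int gpu = 0;
    const size_t total = (size_t)8ull * 1024 * 1024 * 1024;  // 32 GiB dataset (hypothetical)
    const size_t hot   = (size_t)256 * 1024 * 1024;          // 1 GiB slice the next kernel touches
    float *data = nullptr;
    cudaMallocManaged(&data, total * sizeof(float));
    for (size_t i = 0; i < hot; ++i) data[i] = 1.0f;          // CPU prepares the hot slice in place

    // Hint that the dataset as a whole should stay resident on the CPU (capacity) side...
    cudaMemAdvise(data, total * sizeof(float), cudaMemAdviseSetPreferredLocation, cudaCpuDeviceId);
    // ...then stage only the hot slice into GPU HBM ahead of the kernel launch.
    cudaMemPrefetchAsync(data, hot * sizeof(float), gpu);

    saxpy_slice<<<(unsigned)((hot + 255) / 256), 256>>>(data, hot, 2.0f, 0.5f);
    cudaDeviceSynchronize();
    std::printf("data[0] = %f\n", data[0]);                   // CPU reads the updated slice directly
    cudaFree(data);
    return 0;
}
```

On systems without a coherent CPU-GPU link the same hints still apply; they simply drive page migration over PCIe instead.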

NVIDIA GPUs are also integrated with Vultr’s broad array of virtualized cloud compute and bare metal offerings, as well as managed Kubernetes, managed databases, block and object storage, and more. This seamless array of products and services makes Vultr the preferred all-in-one cloud provider for businesses of all sizes with critical AI and machine learning initiatives.

Today, Vultr also announced that it has reached NVIDIA Elite status in the NVIDIA Partner Network for Compute Competency. The Compute Competency focuses on partners who provide NVIDIA GPU-accelerated computing platforms for enterprise IT, integrated across hardware and software. This robust, secure infrastructure supports all modern workloads from the data center to the edge, while driving scientific breakthroughs and game-changing innovations. Compute partners bring unprecedented performance, scalability, and security to every data center.
