
NVIDIA MGX Gives System Makers Modular Architecture to Meet Diverse Accelerated Computing Needs of World’s Data Centers

QCT and Supermicro Among First to Use Server Spec Enabling 100+ System Configurations to Accelerate AI, HPC, Omniverse Workloads

To meet the diverse accelerated computing needs of the world’s data centers, NVIDIA unveiled the NVIDIA MGX server specification, which provides system manufacturers with a modular reference architecture to quickly and cost-effectively build more than 100 server variations to suit a wide range of AI, high-performance computing and Omniverse applications.

ASRock Rack, ASUS, GIGABYTE, Pegatron, QCT and Supermicro will adopt MGX, which can slash development costs by up to three-quarters and reduce development time by two-thirds to just six months.

“Enterprises are seeking more accelerated computing options when architecting data centers that meet their specific business and application needs,” said Kaustubh Sanghani, vice president of GPU products at NVIDIA. “We created MGX to help organizations bootstrap enterprise AI, while saving them significant amounts of time and money.”

With MGX, manufacturers start with a basic system architecture optimized for accelerated computing for their server chassis, and then select their GPU, DPU and CPU. Design variations can address unique workloads, such as HPC, data science, large language models, edge computing, graphics and video, enterprise AI, and design and simulation. Multiple tasks like AI training and 5G can be handled on a single machine, while upgrades to future hardware generations can be frictionless. MGX can also be easily integrated into cloud and enterprise data centers.

Collaboration With Industry Leaders

QCT and Supermicro will be the first to market, with MGX designs appearing in August. Supermicro’s ARS-221GL-NR system, announced today, will include the NVIDIA Grace™ CPU Superchip, while QCT’s S74G-2U system, also announced today, will use the NVIDIA GH200 Grace Hopper Superchip.

Additionally, SoftBank Corp. plans to roll out multiple hyperscale data centers across Japan and use MGX to dynamically allocate GPU resources between generative AI and 5G applications.

“As generative AI permeates across business and consumer lifestyles, building the right infrastructure for the right cost is one of network operators’ greatest challenges,” said Junichi Miyakawa, president and CEO at SoftBank Corp. “We expect that NVIDIA MGX can tackle such challenges and allow for multi-use AI, 5G and more depending on real-time workload requirements.”

Different Designs for Different Needs

Data centers increasingly must deliver growing compute capability while reducing carbon emissions to combat climate change and keeping costs down.

Accelerated computing servers from NVIDIA have long provided exceptional computing performance and energy efficiency. Now, the modular design of MGX gives system manufacturers the ability to more effectively meet each customer’s unique budget, power delivery, thermal design and mechanical requirements.

Multiple Form Factors Offer Maximum Flexibility

MGX works with different form factors and is compatible with current and future generations of NVIDIA hardware, including:

  • Chassis: 1U, 2U, 4U (air or liquid cooled)
  • GPUs: Full NVIDIA GPU portfolio including the latest H100, L40, L4
  • CPUs: NVIDIA Grace CPU Superchip, GH200 Grace Hopper Superchip, x86 CPUs
  • Networking: NVIDIA BlueField-3 DPU, ConnectX-7 network adapters

MGX differs from NVIDIA HGX in that it offers flexible, multi-generational compatibility with NVIDIA products to ensure that system builders can reuse existing designs and easily adopt next-generation products without expensive redesigns. In contrast, HGX is based on an NVLink®-connected, multi-GPU baseboard tailored to scale to create the ultimate in AI and HPC systems.

Software to Drive Acceleration Further

In addition to hardware, MGX is supported by NVIDIA’s full software stack, which enables developers and enterprises to build and accelerate AI, HPC and other applications. This includes NVIDIA AI Enterprise, the software layer of the NVIDIA AI platform, which features over 100 frameworks, pretrained models and development tools to accelerate AI and data science for fully supported enterprise AI development and deployment.

MGX is compatible with the Open Compute Project and Electronic Industries Alliance server racks, for quick integration into enterprise and cloud data centers.
