DiRAC Selects NVIDIA HDR InfiniBand Connected HGX Platform to Accelerate Scientific Discovery at Its Four Sites
NVIDIA announced that its NVIDIA HGX high performance computing platform will power Tursa, the new DiRAC supercomputer to be hosted by the University of Edinburgh.
Optimized for computational particle physics, Tursa is the third of four next-generation DiRAC supercomputers formally announced. Each will be accelerated by one or more NVIDIA HGX platform technologies, including NVIDIA A100 Tensor Core GPUs, NVIDIA HDR 200Gb/s InfiniBand networking and NVIDIA Magnum IO software. The final next-generation DiRAC supercomputer is to feature NVIDIA InfiniBand networking.
Tursa will allow researchers to carry out the ultra-high-precision calculations of the properties of subatomic particles needed to interpret data from massive particle physics experiments, such as the Large Hadron Collider.
“DiRAC is helping researchers unlock the mysteries of the universe,” said Gilad Shainer, senior vice president of networking at NVIDIA. “Our collaboration with DiRAC will accelerate cutting-edge scientific exploration across a diverse range of workloads that take advantage of the unrivaled performance of NVIDIA GPUs, DPUs and InfiniBand in-network computing acceleration engines.”
“Tursa is designed to tackle unique research challenges to unlock new possibilities for scientific modeling and simulation,” said Luigi Del Debbio, professor of theoretical physics at the University of Edinburgh and project lead for the DiRAC-3 deployment. “The NVIDIA accelerated computing platform enables the extreme-scaling service to propel new discoveries by precisely balancing network bandwidth and flops to achieve the unrivaled performance our research demands.”
The Tursa supercomputer, built with Atos and expected to go into operation later this year, will feature 448 NVIDIA A100 Tensor Core GPUs and include four NVIDIA HDR 200Gb/s InfiniBand networking adapters per node. NVIDIA Magnum IO GPUDirect® RDMA enables the system to provide the internode bandwidth and scalability required by extreme-scale scientific applications such as lattice quantum chromodynamics (QCD).
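As a rough back-of-the-envelope illustration (not a figure from the announcement), the per-node specification above implies the following aggregate injection bandwidth; actual sustained throughput depends on topology, message sizes and the software stack:

```python
# Estimate of Tursa's theoretical per-node injection bandwidth,
# based on the stated four HDR 200Gb/s InfiniBand adapters per node.
# This is an illustrative calculation, not a benchmarked result.

ADAPTERS_PER_NODE = 4      # HDR InfiniBand adapters per node (from the article)
HDR_LINK_GBPS = 200        # HDR InfiniBand signaling rate per adapter, in Gb/s

total_gbps = ADAPTERS_PER_NODE * HDR_LINK_GBPS   # aggregate rate in Gb/s
total_gigabytes = total_gbps / 8                 # convert bits to bytes

print(f"Aggregate injection bandwidth: {total_gbps} Gb/s "
      f"(~{total_gigabytes:.0f} GB/s) per node")
# → Aggregate injection bandwidth: 800 Gb/s (~100 GB/s) per node
```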
The system is run by DiRAC — the UK’s integrated supercomputing facility for theoretical modeling and HPC-based research in astronomy, cosmology, particle physics and nuclear physics — with sites hosted at the University of Cambridge, Durham University, the University of Edinburgh and the University of Leicester.
CSD3 at University of Cambridge, COSMA-8 at Durham University
NVIDIA announced at GTC 21 in April that the Cambridge Service for Data Driven Discovery, also known as CSD3, will be enhanced with a new 4-petaflops Dell-EMC system with NVIDIA HGX A100 GPUs, BlueField® DPUs and NVIDIA HDR 200Gb/s InfiniBand networking, which will deliver secure, multi-tenant, bare-metal HPC, AI and data analytics services for a broad cross section of the U.K. research community. CSD3 is projected to rank among the world’s top 500 supercomputers. The DiRAC Data Intensive Service at Cambridge is part of the CSD3 system.
NVIDIA also announced at GTC 21 that Durham University’s new COSMA-8 supercomputer — to be used by world-leading cosmologists in the U.K. to research the origins of the universe — will be based on Dell technology and accelerated by NVIDIA HDR 200Gb/s InfiniBand networking.