CIO Influence

CoreWeave Among First Cloud Providers to Offer NVIDIA HGX H100 Supercomputers Set to Transform AI Landscape

  • CoreWeave customers will have access to a new GPU that improves performance, training and inference times for AI many times over

  • The company’s Kubernetes-native infrastructure yields industry-leading spin-up times and responsive auto-scaling capabilities for optimal compute usage and performance

  • Customers pay only for the compute capacity they use, making CoreWeave 50% to 80% less expensive than competitors

CoreWeave, a specialized cloud provider built for large-scale GPU-accelerated workloads, today announced it is among the first to offer cloud instances with NVIDIA HGX H100 supercomputing. CoreWeave, Amazon, Google, Microsoft and Oracle are the first cloud providers included in the launch of this groundbreaking AI platform. CoreWeave was the first Elite Cloud Service Provider for Compute in the NVIDIA Partner Network (NPN) and is also among the NPN’s Elite Cloud Service Providers for Visualization.


“This validates what we’re building and where we’re heading,” said Michael Intrator, CoreWeave co-founder and CEO. “CoreWeave’s success will continue to be driven by our commitment to making GPU-accelerated compute available to startup and enterprise clients alike. Investing in the NVIDIA HGX H100 platform allows us to expand that commitment, and our pricing model makes us the ideal partner for any company looking to run large-scale, GPU-accelerated AI workloads.”

NVIDIA’s ecosystem and platform are the industry standard for AI. The NVIDIA HGX H100 platform represents a leap forward in the breadth and scope of AI work businesses can now tackle. The NVIDIA HGX H100 enables up to seven times better efficiency in high-performance computing (HPC) applications, up to nine times faster AI training on the largest models and up to 30 times faster AI inference than the NVIDIA HGX A100. That speed, combined with the lowest NVIDIA GPUDirect network latency on the market with the NVIDIA Quantum-2 InfiniBand platform, reduces the training time of AI models to “days or hours instead of months.” Such technology is critical now that AI has permeated every industry.
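As a rough sense of scale, the headline multipliers translate directly into wall-clock savings. The sketch below uses a hypothetical 90-day baseline training run, not a published benchmark; only the speedup factors come from the announcement:

```python
# Illustrative only: the baseline time is hypothetical, not a measured benchmark.
# Stated HGX H100 vs. HGX A100 multipliers: up to 7x HPC efficiency,
# up to 9x faster AI training, up to 30x faster inference.

def sped_up(baseline_hours: float, speedup: float) -> float:
    """Return the new wall-clock time after applying a speedup factor."""
    return baseline_hours / speedup

# A hypothetical 90-day (2,160-hour) training run at the claimed 9x speedup:
baseline_days = 90
new_days = sped_up(baseline_days * 24, 9) / 24
print(f"{baseline_days} days -> {new_days:.0f} days")  # 90 days -> 10 days
```

At the quoted 9x training multiplier, a months-long run does indeed collapse into days, which is the claim behind the “days or hours instead of months” framing.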

“AI and HPC workloads require a powerful infrastructure that delivers cost-effective performance and scale to meet the needs of today’s most demanding workloads and applications,” said Dave Salvator, director of product marketing at NVIDIA. “CoreWeave’s new offering of instances featuring NVIDIA HGX H100 supercomputers will give customers the flexibility and performance needed to power large-scale HPC applications.”


In the same way that drivers of fuel-efficient cars save money on gas, CoreWeave clients spend between 50% and 80% less on compute resources. The company’s performance-adjusted cost structure is twofold. First, clients pay only for the HPC resources they use, and CoreWeave cloud instances are highly configurable. Second, CoreWeave’s Kubernetes-native infrastructure and networking architecture produce performance advantages, including industry-leading spin-up times and responsive auto-scaling capabilities that allow clients to use compute more efficiently. CoreWeave competitors charge for idle compute capacity to maintain access to GPUs and use legacy networking products that degrade performance at scale.
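The pay-for-what-you-use idea can be sketched with simple arithmetic. All rates and hours below are hypothetical, chosen for illustration; they are not CoreWeave’s actual pricing:

```python
# Minimal sketch of usage-based vs. reserved-capacity billing.
# The $2.00/GPU-hour rate and the hour counts are hypothetical.

def usage_billed_cost(hours_used: float, rate_per_hour: float) -> float:
    """Bill only the hours a GPU instance actually ran."""
    return hours_used * rate_per_hour

def reserved_cost(hours_reserved: float, rate_per_hour: float) -> float:
    """Bill the full reservation window, idle time included."""
    return hours_reserved * rate_per_hour

# A bursty workload: 720 hours reserved in a month, only 200 hours of real use.
rate = 2.00  # hypothetical $/GPU-hour
on_demand = usage_billed_cost(200, rate)  # $400
reserved = reserved_cost(720, rate)       # $1,440
savings = 1 - on_demand / reserved
print(f"Usage-billed: ${on_demand:.0f}, reserved: ${reserved:.0f}, "
      f"savings: {savings:.0%}")  # savings: 72%
```

For a workload idle most of the month, billing only actual usage lands in the 50% to 80% savings band the article cites; the more bursty the workload, the larger the gap.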

“CoreWeave’s infrastructure is purpose-built for large-scale GPU-accelerated workloads; we specialize in serving the most demanding AI and machine learning applications,” said Brian Venturo, CoreWeave co-founder and chief technology officer. “We empower our clients to create world-changing technology by delivering practical access to high-performance compute at scale, on top of the industry’s fastest and most flexible infrastructure.”

CoreWeave leverages a range of open-source Kubernetes projects, integrates with best-in-class technologies such as Determined.AI and offers support for open-source AI models including Stable Diffusion, GPT-NeoX-20B and BLOOM as part of its mission to lead the world in AI and machine learning infrastructure.

Founded in 2017, CoreWeave provides fast, flexible, and highly available GPU compute resources that are up to 35 times faster and 80% less expensive than large, generalized public clouds. An Elite Cloud Service Provider for Compute and Visualization in the NPN, CoreWeave offers cloud services for compute-intensive projects, including AI, machine learning, visual effects and rendering, batch processing and pixel streaming. CoreWeave’s infrastructure is purpose-built for burstable workloads, with the ability to scale up or down in seconds.

