- CoreWeave customers will have access to a new GPU that improves AI performance and cuts training and inference times many times over
- The company's Kubernetes-native infrastructure yields industry-leading spin-up times and responsive auto-scaling capabilities for optimal compute usage and performance
- Customers pay only for the compute capacity they use, making CoreWeave 50% to 80% less expensive than competitors
CoreWeave, a specialized cloud provider built for large-scale GPU-accelerated workloads, today announced it is among the first to offer cloud instances with NVIDIA HGX H100 supercomputing. CoreWeave, Amazon, Google, Microsoft and Oracle are the first cloud providers included in the launch of this groundbreaking AI platform. CoreWeave was the first Elite Cloud Service Provider for Compute in the NVIDIA Partner Network (NPN) and is also among the NPN's Elite Cloud Service Providers for Visualization.
"This validates what we're building and where we're heading," said Michael Intrator, CoreWeave co-founder and CEO. "CoreWeave's success will continue to be driven by our commitment to making GPU-accelerated compute available to startup and enterprise clients alike. Investing in the NVIDIA HGX H100 platform allows us to expand that commitment, and our pricing model makes us the ideal partner for any companies looking to run large-scale, GPU-accelerated AI workloads."
NVIDIA's ecosystem and platform are the industry standard for AI. The NVIDIA HGX H100 platform allows a leap forward in the breadth and scope of AI work businesses can now tackle. The NVIDIA HGX H100 enables up to seven times better efficiency in high-performance computing (HPC) applications, up to nine times faster AI training on the largest models and up to 30 times faster AI inference than the NVIDIA HGX A100. That speed, combined with the lowest NVIDIA GPUDirect network latency in the market with the NVIDIA Quantum-2 InfiniBand platform, reduces the training time of AI models to "days or hours instead of months." Such technology is critical now that AI has permeated every industry.
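As a rough illustration of what the quoted speedups mean for wall-clock time, the sketch below applies them to a hypothetical training run. The baseline duration and the assumption of an ideal end-to-end speedup are illustrative, not benchmark results:

```python
# Illustrative calculation using the speedup figures quoted above.
# The 90-day baseline is hypothetical, not a measured benchmark.
H100_VS_A100_TRAINING_SPEEDUP = 9    # "up to 9x faster AI training" (quoted figure)
H100_VS_A100_INFERENCE_SPEEDUP = 30  # "up to 30x faster AI inference" (quoted figure)

def accelerated_days(baseline_days: float, speedup: float) -> float:
    """Return the run time after applying an ideal end-to-end speedup."""
    return baseline_days / speedup

# A hypothetical three-month (90-day) training run shrinks to:
print(accelerated_days(90, H100_VS_A100_TRAINING_SPEEDUP))  # 10.0 (days)
```

Under these assumptions, a roughly three-month run drops to about ten days, which is the order of magnitude behind the "days or hours instead of months" claim.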
"AI and HPC workloads require a powerful infrastructure that delivers cost-effective performance and scale to meet the needs of today's most demanding workloads and applications," said Dave Salvator, director of product marketing at NVIDIA. "CoreWeave's new offering of instances featuring NVIDIA HGX H100 supercomputers will give customers the flexibility and performance needed to power large-scale HPC applications."
In the same way that drivers of fuel-efficient cars save money on gas, CoreWeave clients spend 50% to 80% less on compute resources. The company's performance-adjusted cost structure is twofold. First, clients only pay for the HPC resources they use, and CoreWeave cloud instances are highly configurable. Second, CoreWeave's Kubernetes-native infrastructure and networking architecture produce performance advantages, including industry-leading spin-up times and responsive auto-scaling capabilities that allow clients to use compute more efficiently. CoreWeave competitors charge for idle compute capacity to maintain access to GPUs and use legacy networking products that degrade performance with scale.
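The savings claim follows from the billing model rather than from hardware alone. The sketch below contrasts reserved (idle-inclusive) billing with pay-for-what-you-use billing for a bursty workload; all rates and hours are made-up illustrative numbers, not CoreWeave or competitor pricing:

```python
# Sketch of the two billing models described above, with illustrative numbers.

def reserved_cost(hourly_rate: float, reserved_hours: float) -> float:
    """Legacy model: pay for the full reservation window, busy or idle."""
    return hourly_rate * reserved_hours

def usage_cost(hourly_rate: float, used_hours: float) -> float:
    """Pay-per-use model: pay only for hours the GPUs actually worked."""
    return hourly_rate * used_hours

# Hypothetical bursty workload: GPUs busy 200 of 720 hours in a month.
rate, month_hours, busy_hours = 2.00, 720, 200

legacy = reserved_cost(rate, month_hours)       # 1440.0
pay_per_use = usage_cost(rate, busy_hours)      # 400.0
savings = 1 - pay_per_use / legacy
print(f"savings: {savings:.0%}")                # savings: 72%
```

For workloads that are busy only a fraction of the time, this kind of utilization gap is what puts the savings in the 50% to 80% range; steadier workloads would see smaller differences.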
"CoreWeave's infrastructure is purpose-built for large-scale GPU-accelerated workloads — we specialize in serving the most demanding AI and machine learning applications," said Brian Venturo, CoreWeave co-founder and chief technology officer. "We empower our clients to create world-changing technology by delivering practical access to high-performance compute at scale, on top of the industry's fastest and most flexible infrastructure."
CoreWeave leverages a range of open-source Kubernetes projects, integrates with best-in-class technologies such as Determined.AI and offers support for open-source AI models including Stable Diffusion, GPT-NeoX-20B and BLOOM as part of its mission to lead the world in AI and machine learning infrastructure.
Founded in 2017, CoreWeave provides fast, flexible, and highly available GPU compute resources that are up to 35 times faster and 80% less expensive than large, generalized public clouds. An Elite Cloud Service Provider for Compute and Visualization in the NPN, CoreWeave offers cloud services for compute-intensive projects, including AI, machine learning, visual effects and rendering, batch processing and pixel streaming. CoreWeave's infrastructure is purpose-built for burstable workloads, with the ability to scale up or down in seconds.

