CIO Influence

Run:ai Completes Proof of Concept with NVIDIA to Maximize GPU Workload Flexibility on Any Cloud

Run:ai deployed on NVIDIA VMIs enables multi-cloud scaling as well as ‘lift & shift’ cloud deployments

Run:ai, the company simplifying AI infrastructure orchestration and management, announced details of a completed proof of concept (POC) that enables multi-cloud GPU flexibility for companies using NVIDIA GPUs in the cloud. NVIDIA’s software suite includes virtual machine images (VMIs) optimized for NVIDIA GPUs running in clouds such as Amazon Web Services, Microsoft Azure, Google Cloud, and Oracle Cloud. Run:ai software deployed on NVIDIA VMIs enables cloud customers to move AI workloads from one cloud to another, as well as to use multiple clouds simultaneously for different AI workloads, with zero code changes.

Run:ai’s workload-aware orchestration ensures that every type of AI workload gets the right amount of compute resources when needed, and provides deep integration into NVIDIA GPUs to achieve optimal utilization of these resources. Run:ai’s Kubernetes-based Atlas platform and NVIDIA VMIs were used together in the POC to support ‘lift & shift’ as well as multi-node scaling in the cloud. NVIDIA customers and partners can de-risk their AI cloud deployments with a streamlined and portable solution for cloud AI infrastructure from Run:ai. Customers looking to cost-optimize their cloud computing resources can choose among supported cloud providers for the best-fit configuration. They can also manage AI workloads on multiple clouds with a single control plane.
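To illustrate how workload-aware scheduling on Kubernetes is typically consumed, the sketch below shows a pod spec that hands a training job to a Run:ai-style scheduler and requests a fractional GPU. The scheduler name, project label, and `gpu-fraction` annotation are assumptions based on common Run:ai conventions and may differ by product version; treat this as a shape, not a verified manifest.

```yaml
# Hypothetical sketch: submitting a training pod to a Run:ai-style
# Kubernetes scheduler. The scheduler name, project label, and
# gpu-fraction annotation are assumed conventions, not taken from
# this article.
apiVersion: v1
kind: Pod
metadata:
  name: train-job
  labels:
    project: team-a              # assumed project/quota label
  annotations:
    gpu-fraction: "0.5"          # assumed fractional-GPU request
spec:
  schedulerName: runai-scheduler # assumed scheduler name
  containers:
    - name: trainer
      image: nvcr.io/nvidia/pytorch:23.10-py3
      command: ["python", "train.py"]
```

Because the pod is plain Kubernetes, the same spec can in principle be applied against clusters in any of the supported clouds, which is what makes the zero-code-change portability described above plausible.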


NVIDIA VMIs are available on each of the major public cloud providers. NVIDIA publishes these with regular updates to both OS and drivers. The VMIs are optimized for performance on the latest generations of NVIDIA GPUs and allow for easy and fast deployment of GPU-accelerated instances on the public cloud.

“By combining accelerated computing power from NVIDIA with Run:ai’s Atlas platform, organizations have a stellar AI foundation that enables them to successfully deliver on their AI initiatives,” said Omri Geller, CEO and co-founder of Run:ai. “We appreciate the close relationship we have with the NVIDIA cloud team and their commitment to support NVIDIA accelerated computing customers everywhere.”

“From innovative startups to world-leading enterprises, NVIDIA-accelerated cloud computing provides customers with flexible options for powering their most demanding workloads,” said Paresh Kharya, senior director, Accelerated Computing at NVIDIA. “Paired with NVIDIA-accelerated instances from leading cloud service providers, the Run:ai Atlas platform helps customers maximize the efficiency and value of AI workload operations.”


The Run:ai Atlas platform brings simplicity to GPU management by providing researchers with on-demand access to pooled resources for any AI workload. It has built-in integration with NVIDIA Triton Inference Server, NVIDIA’s open-source inference serving software that lets teams deploy trained AI models from any framework on GPU or CPU infrastructure.
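For context on the Triton integration mentioned above: Triton serves models from a model repository, where each model carries a `config.pbtxt` describing its backend and tensor shapes. A minimal configuration looks roughly like the sketch below; the model name, platform, and dimensions are illustrative placeholders, not details from this article.

```protobuf
# Illustrative Triton Inference Server model configuration (config.pbtxt).
# Model name, platform, and tensor dims are placeholders.
name: "image_classifier"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "output"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```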

As an innovative cloud-native operating system that includes a workload-aware scheduler and a GPU abstraction layer, the platform helps IT managers simplify AI implementation, increase team productivity, and achieve full utilization of GPUs. Run:ai now offers a simple solution for teams with a multi-cloud AI infrastructure strategy. The solution is available in beta; reach out to partners@run.ai to learn more.

Additionally, Run:ai and NVIDIA are further expanding their collaboration to support customers who are operationalizing AI development. Run:ai is among the NVIDIA DGX-Ready Software partners joining the NVIDIA AI Accelerated program, which offers customers validated, enterprise-grade workflow and cluster management, scheduling and orchestration solutions for a variety of NVIDIA accelerated systems.



CIO Influence News Desk