Underlines focus on power-efficient solutions, deployment velocity, and open networking
Nexthop AI, the leading pioneer of highly efficient AI networking, has launched a range of products for scale-out, scale-across, and front-end applications in cloud and AI datacenters. The launch portfolio sets new standards for performance, power efficiency, and deployment speed – critical metrics in AI infrastructure.
Nexthop also unveiled the Disaggregated Spine architecture – a new, highly efficient, scale-across network design developed in collaboration with a large hyperscaler. This architecture decomposes the traditional monolithic chassis running proprietary software into independent, optimized functional tiers. It features a scale-across leaf tier (facing the data center fabric) and a scale-across spine tier (facing the data center interconnect) offering deep buffers, line-rate MACsec encryption, and expanded routing tables. The architecture delivers 30% lower cost and 30% lower power consumption than legacy chassis-based systems and facilitates adoption of open network operating systems such as SONiC.
Driving the Future of Open Networking
Nexthop AI is an active contributor to the open networking ecosystem, serving as a SONiC Governing Board member of the Linux Foundation. In a short timeframe, Nexthop has risen to be among the top 10 global contributors to the SONiC project. Nexthop empowers customers to run their preferred network operating system, such as SONiC or FBOSS, on its switches. In addition, for NeoClouds, Nexthop offers a hardened, supported, high-quality distribution of the Nexthop Network Operating System powered by SONiC.
“We are delighted to see the tremendous contributions Nexthop has made to the open networking community in such a short time. They are partnering with the community on several new initiatives, including pioneering new concepts like the Disaggregated Spine. Their speed of execution, coupled with their unwavering dedication to customer success, is truly commendable,” said Dave Maltz, Principal Network Architect for Azure Networking at Microsoft.
Pioneering New Platforms for AI and Cloud
These platforms can run any version of SONiC or FBOSS that hyperscalers choose, or are available as a turnkey solution fully integrated with the Nexthop NOS for NeoClouds.
- NH-4010: The industry’s lowest-power 51.2 Tbps switch, based on Broadcom Tomahawk 5 silicon. Its highly efficient system design is proven to save customers 15-20% power in like-for-like configurations, delivering tens of megawatts of total power savings at scale.
- NH-4220: The industry’s highest-density 102.4 Tbps air-cooled system, based on Broadcom Tomahawk 6 silicon. These switches are specifically engineered to provide seamless migration from previous generations without disruptive changes to rack or fiber plants, enabling rapid deployment of next-generation AI clusters.
- NH-5010: The first deep-buffer, scale-across spine switch, based on Broadcom Qumran 3D silicon, enabling the Disaggregated Spine architecture for leading hyperscalers.
“We are pleased to collaborate with Nexthop on SONiC and SAI-based architectures to deliver standards-based Ethernet switching solutions for deployments within and across data centers, serving our hyperscale and data center customers. Nexthop’s deep system expertise and integration of our low-power switching silicon have enabled scalable, highly power-efficient solutions,” said Asad Khamisy, senior vice president and general manager, Core Switching Group, Broadcom.
Nexthop platforms offer advanced real-time telemetry for efficient congestion control, advanced load balancing, and real-time layer 1 and optics monitoring – significantly improving overall network performance and link reliability.
“Ethernet Switching is a key building block for AI Networking. Led by new AI and scale-across use case expansions, the market is poised to approach $200B over the next decade,” said Alan Weckel, founder and technology analyst at 650 Group. “Nexthop AI is taking a unique co-development approach to product development and their initial platforms represent the start of a foundational portfolio that raises the bar to fundamentally address the efficiency, density, and reliability challenges to support 800G and 1.6T deployments.”

