d-Matrix Launches New Chiplet Connectivity Platform to Address Exploding Compute Demand for Generative AI

d-Matrix, a leader in high-efficiency AI compute and inference processors, announced Jayhawk, the industry’s first Open Domain-Specific Architecture (ODSA) Bunch of Wires (BoW) based chiplet platform for energy-efficient die-to-die connectivity over organic substrates. Building on the Nighthawk chiplet platform launched in 2021, the second-generation Jayhawk silicon extends d-Matrix’s scale-out, chiplet-based inference compute platform. d-Matrix customers will be able to use the platform to run Generative AI applications and large language model (LLM) transformer workloads with a 10-20X improvement in performance.

Large transformer models are creating new demands for AI inference at the same time that memory and energy requirements are hitting physical limits. d-Matrix provides one of the first Digital In-Memory Compute (DIMC) based inference platforms to come to market, transforming the economics of complex transformers and Generative AI with a scalable platform built to handle the immense data and power requirements of AI inference. Improving inference performance makes energy-hungry data centers more efficient while reducing latency for end users of AI applications.

“With the announcement of our second-generation chiplet platform, Jayhawk, and a track record of execution, we are establishing our leadership in the chiplet ecosystem,” said Sid Sheth, CEO of d-Matrix. “The d-Matrix team has made great progress toward building the world’s first in-memory computing platform with a chiplet-based architecture targeted at the power-hungry and latency-sensitive demands of generative AI.”

d-Matrix’s compute platform combines an in-memory compute-based IC architecture, sophisticated tools that integrate with leading artificial neural network (ANN) models, and chiplets arranged in a block grid to deliver scalability and efficiency for demanding ML workloads. With this modular, chiplet-based approach, data center customers can refresh compute platforms on a much faster cadence using a pre-validated chiplet architecture. To that end, d-Matrix plans to build chiplets with both BoW- and UCIe-based interconnects, enabling a truly heterogeneous computing platform that can accommodate third-party chiplets.

“d-Matrix has moved quickly to seize the chiplet opportunity, which should give them a first-mover advantage,” said Karl Freund, Founder and Principal Analyst at Cambrian-AI Research. “Anyone looking to add an AI accelerator to their SoC design would do well to investigate this new approach for efficient AI.”

The Jayhawk chiplet platform features:

  • 3 mm, 15 mm, and 25 mm trace lengths on organic substrate
  • 16 Gbps/wire bandwidth
  • 6 nm TSMC process technology
  • <0.5 pJ/bit energy efficiency
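
Taken together, the last two figures allow a quick back-of-the-envelope power estimate: at an upper bound of 0.5 pJ/bit and 16 Gbps per wire, each wire dissipates at most about 8 mW. The short Python sketch below works through that arithmetic; the 64-wire link width is a hypothetical illustration, not a published Jayhawk parameter.

    # Back-of-the-envelope link power from the published Jayhawk specs.
    # The energy/bit bound and per-wire rate come from the spec list above;
    # the 64-wire link width is a hypothetical assumption for illustration.
    ENERGY_PER_BIT_J = 0.5e-12   # <0.5 pJ/bit (upper bound)
    RATE_PER_WIRE_BPS = 16e9     # 16 Gbps per wire

    def link_power_watts(num_wires: int) -> float:
        """Worst-case link power: energy/bit * bits/s * wires."""
        return ENERGY_PER_BIT_J * RATE_PER_WIRE_BPS * num_wires

    def link_bandwidth_gbps(num_wires: int) -> float:
        """Aggregate link bandwidth in Gbps."""
        return RATE_PER_WIRE_BPS * num_wires / 1e9

    wires = 64  # hypothetical link width
    print(f"{wires}-wire link: {link_bandwidth_gbps(wires):.0f} Gbps "
          f"at <= {link_power_watts(wires) * 1e3:.1f} mW")
    # -> 64-wire link: 1024 Gbps at <= 512.0 mW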
