
DreamBig’s “MARS” Chiplet Platform Scales Next-Gen Language Models, Gen AI, & Automotive Semiconductors

This open chiplet platform enables customers of any size to efficiently scale up their choice of compute/accelerator chiplets for applications such as AI training and inference, Automotive, Datacenter, and Edge. A DreamBig technology demonstration is showcased at CES 2024 in The Venetian Expo, Bellini 2003 Meeting Room.

DreamBig Semiconductor unveiled “MARS,” a world-leading platform that enables a new generation of semiconductor solutions using open-standard chiplets for the mass market. This disruptive platform will democratize silicon by enabling startups and companies of any size to scale up and scale out LLM, Generative AI, Automotive, Datacenter, and Edge solutions with optimized performance and energy efficiency.



The DreamBig “MARS” Chiplet Platform allows customers to focus their investment on the areas of silicon where they can differentiate for competitive advantage, bringing solutions to market faster and at lower cost by leveraging the rest of the open-standard chiplets available in the platform. This is particularly critical for the fast-moving AI training and inference market, where the best performance and energy efficiency are achieved when the solution is application-specific.

“DreamBig is disrupting the industry by providing the most advanced open chiplet platform for customers to innovate never before possible solutions combining their specialized hardware chiplets with infrastructure that scales up and out maintaining affordable and efficient modular product development,” said Sohail Syed, CEO of DreamBig Semiconductor.

The DreamBig “MARS” Chiplet Platform solves the two biggest technical challenges facing hardware developers of AI servers and accelerators: scaling up compute and scaling out networking. The Chiplet Hub is the industry's most advanced 3D memory-first architecture, with direct access to both SRAM and DRAM tiers by all compute, accelerator, and networking chiplets for data movement, data caching, or data processing. Chiplet Hubs can be tiled in a package to scale up at the highest performance and energy efficiency. RDMA Ethernet Networking Chiplets provide unparalleled scale-out performance and energy efficiency between devices and systems, with independent selection of data-path bandwidth and control-path packet-processing rate.

“Customers can now focus on designing the most innovative AI compute and accelerator technology chiplets optimized for their applications and use the most advanced DreamBig Chiplet Platform to scale-up and scale-out to achieve maximum performance and energy efficiency,” said Steve Majors, SVP of Engineering at DreamBig Semiconductor. “By establishing leadership with 3D HBM backed by multiple memory tiers under HW control in Silicon Box advanced packaging that provides highest performance at lowest cost without the yield and availability issues plaguing the industry, the barriers to scale are eliminated.”


The platform's Chiplet Hub and Networking Chiplets offer the following differentiated features:

  • Open-standard interfaces and architecture-agnostic support for CPU, AI, Accelerator, IO, and Memory chiplets that customers can compose in a package
  • Secure boot and management of chiplets as a unified system-in-package similar to a platform motherboard of chips
  • Memory-first architecture with direct access from all chiplets to cache/memory tiers, including low-latency SRAM/3D HBM stacked on Chiplet Hubs and high-capacity DDR/CXL/SSD on chiplets
  • FLC Technology Group fully associative HW acceleration for cache/memory tiers
  • HW DMA and RDMA for direct placement of data to any memory tier from any local or remote source
  • Algorithmic TCAM HW acceleration for Match/Action when scaled-out to cloud
  • Virtual PCIe/CXL switch for flexible root port or endpoint resource allocation
  • Optimized for Silicon Box advanced panel-level packaging to achieve the best performance/power/cost, as an alternative to CoWoS for the AI mass market

Customers are currently working with DreamBig on next generation devices for the following use cases:

  • AI Servers and Accelerators
  • High-end Datacenter and Low-end Edge Servers
  • Petabyte Storage Servers
  • DPUs and DPU Smart Switches
  • Automotive ADAS, Infotainment, Zonal Processors

“We are very proud of what DreamBig has achieved establishing leadership in driving a key pillar of the market for high performance, energy conscious, and highly scalable AI solutions to serve the world,” stated Sehat Sutardja and Weili Dai, Co-founders and Chairman/Chairwoman of DreamBig. “The company has raised the technology bar to lead the semiconductor industry by delivering the next generation of open chiplet solutions such as Large Language Model (LLM), Generative AI, Datacenter, and Automotive solutions for the global mass market.”


