Winbond Electronics Corporation, a leading global supplier of semiconductor memory solutions, has unveiled a powerful enabling technology for affordable Edge AI computing in mainstream use cases. The Company’s new customized ultra-bandwidth elements (CUBE) enable memory technology to be optimized for seamless performance running generative AI on hybrid edge/cloud applications.
CUBE enhances the performance of front-end 3D structures such as chip on wafer (CoW) and wafer on wafer (WoW), as well as back-end 2.5D/3D chip on Si-interposer on substrate and fan-out solutions. Designed to meet the growing demands of edge AI computing devices, it is compatible with memory density from 256Mb to 8Gb with a single die, and it can also be 3D stacked to enhance bandwidth while reducing data transfer power consumption.
Winbond is taking a major step forward with CUBE, enabling seamless deployment across various platforms and interfaces. The technology is suited to advanced applications such as wearable devices, edge servers, surveillance equipment, ADAS, and co-robots.
“The CUBE architecture enables a paradigm shift in AI deployment,” says Winbond. “We believe that the integration of cloud AI and powerful edge AI will define the next phase of AI development. With CUBE, we are unlocking new possibilities and paving the way for improved memory performance and cost optimization on powerful Edge AI devices.”
CUBE’s key features include:
- Power efficiency: CUBE delivers exceptional power efficiency, consuming less than 1pJ/bit, ensuring extended operation and optimized energy usage.
- Superior performance: With bandwidth capabilities ranging from 32GB/s to 256GB/s per die, CUBE ensures accelerated performance that exceeds industry standards.
- Compact size: CUBE offers a range of memory capacities from 256Mb to 8Gb per die, manufactured on a 20nm process today, with 16nm planned for 2025. This allows CUBE to fit into smaller form factors seamlessly. The introduction of through-silicon vias (TSVs) further enhances performance, improving signal and power integrity. Additionally, TSVs reduce the IO area through a smaller pad pitch and improve heat dissipation, especially when the SoC sits on the top die and CUBE on the bottom die.
- Cost-Effective Solution with High Bandwidth: Achieving outstanding cost-effectiveness, the CUBE IO boasts an impressive data rate of up to 2Gbps across a total of 1K IOs. When paired with legacy foundry processes like 28nm/22nm SoC, CUBE unleashes ultra-high bandwidth capabilities, reaching 32GB/s–256GB/s (comparable to HBM2 bandwidth), equivalent to the combined bandwidth of 4–32 LPDDR4X x16 chips running at 4266Mbps.
- Reduction in SoC Die Size for Improved Cost Efficiency: By stacking the SoC (top die, without TSVs) atop the CUBE (bottom die, with TSVs), it becomes possible to minimize the SoC die size, eliminating any TSV penalty area. This not only enhances cost advantages but also contributes to overall efficiency, including the small form factor of Edge AI devices.
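The bandwidth and power figures quoted above can be sanity-checked with some back-of-the-envelope arithmetic. The sketch below uses only numbers from the article (1K IOs at 2Gbps, LPDDR4X x16 at 4266Mbps, <1pJ/bit); the helper function and variable names are illustrative, not part of any Winbond tooling.

```python
# Back-of-the-envelope check of the CUBE bandwidth and power figures.
# All input numbers come from the article; names here are illustrative.

def bus_bandwidth_gb_s(io_count: int, rate_gbps: float) -> float:
    """Aggregate bandwidth in GB/s for io_count pins at rate_gbps per pin."""
    return io_count * rate_gbps / 8  # 8 bits per byte

# CUBE at its top speed: 1K (1024) IOs, 2 Gbps per IO.
cube_bw = bus_bandwidth_gb_s(1024, 2.0)      # 256.0 GB/s, matching the headline figure

# One LPDDR4X x16 channel at 4266 Mbps per pin.
lpddr4x_bw = bus_bandwidth_gb_s(16, 4.266)   # about 8.53 GB/s

# How many LPDDR4X x16 chips does the top CUBE figure correspond to?
equivalent_chips = cube_bw / lpddr4x_bw      # about 30, inside the quoted 4-32 range

# Energy check: at the claimed <1 pJ/bit, streaming 256 GB/s costs roughly
power_w = 256e9 * 8 * 1e-12                  # about 2 W of data-transfer power
```

The 30-chip result lands at the high end of the article's "4–32" range, consistent with the lower bandwidth tiers (32GB/s upward) mapping to fewer equivalent LPDDR4X chips.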
“CUBE can unleash the full potential of hybrid edge/cloud AI to elevate system capabilities, response time, and energy efficiency,” Winbond added. “Winbond’s commitment to innovation and collaboration will enable developers and enterprises to drive advancement across various industries.”
Winbond is actively engaging with partner companies to establish the 3DCaaS platform, which will leverage CUBE’s capabilities. By integrating CUBE with existing technologies, Winbond aims to offer cutting-edge solutions that empower businesses to thrive in the era of AI-driven transformation.