CIO Influence

Arteris Network-on-Chip Tiling Innovation Accelerates Semiconductor Designs for AI Applications



  • Scalable Performance: Expanded network-on-chip tiling, supported by mesh topology capabilities in the FlexNoC and Ncore interconnect IP products, allows systems-on-chip with AI to scale easily by more than 10 times without changing the basic design, meeting AI’s huge demand for faster and more powerful computing.

  • Power Reduction: Network-on-chip tiles can be turned off dynamically, cutting power by 20% on average, essential for more energy-efficient and sustainable AI applications with lower operating costs.

  • Design Reuse: Pre-tested network-on-chip tiles can be reused, cutting SoC integration time by up to 50% and shortening the time to market for AI.
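The power-reduction claim above rests on dynamically power-gating idle tiles. A minimal sketch of that idea (illustrative only — the `Tile` class and power figures are hypothetical, not Arteris tooling or real silicon numbers):

```python
# Illustrative model of dynamic tile power gating in a tiled NoC.
# Assumption: a power-gated (idle) tile draws ~0 mW.
from dataclasses import dataclass

@dataclass
class Tile:
    name: str
    active_power_mw: float  # hypothetical per-tile active power
    busy: bool = True

def total_power(tiles):
    # Only tiles that are busy contribute to array power.
    return sum(t.active_power_mw for t in tiles if t.busy)

tiles = [Tile(f"tile{i}", active_power_mw=50.0) for i in range(10)]
baseline = total_power(tiles)      # all 10 tiles on: 500 mW

# Dynamically gate off two idle tiles (20% of the array).
for t in tiles[:2]:
    t.busy = False

gated = total_power(tiles)         # 8 tiles on: 400 mW
saving = 1 - gated / baseline
print(f"power saving: {saving:.0%}")  # -> power saving: 20%
```

Under these assumed numbers, gating a fifth of the tiles yields the 20% average saving the announcement cites; real savings depend on workload and per-tile power.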

Arteris, a leading provider of system IP that accelerates system-on-chip (SoC) creation, today announced an evolution of its network-on-chip (NoC) IP products, adding tiling capabilities and extended mesh topology support for faster development of artificial intelligence (AI) and machine learning (ML) compute in SoC designs. The new functionality enables design teams to scale compute performance by more than 10 times while meeting project schedules and power, performance and area (PPA) goals.


Network-on-chip tiling is an emerging trend in SoC design. The evolutionary approach uses proven, robust network-on-chip IP to facilitate scaling, condense design time, speed testing and reduce design risk. It allows SoC architects to create modular, scalable designs by replicating soft tiles across the chip. Each soft tile represents a self-contained functional unit, enabling faster integration, verification and optimization.

Tiling coupled with mesh topologies within Arteris’ flagship NoC IP products, FlexNoC and Ncore, is transformative for the ever-growing inclusion of AI compute in SoCs. AI-enabled systems are growing in size and complexity, yet they can be scaled quickly by adding soft tiles without disrupting the entire SoC design. Together, tiling and mesh topologies reduce auxiliary processing unit (XPU) subsystem design time and overall SoC connectivity execution time by up to 50% versus manually integrated, non-tiled designs.
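The scaling mechanism described above — replicating a pre-verified soft tile across a larger mesh rather than redesigning the chip — can be sketched conceptually (illustrative only; `build_mesh` is a hypothetical helper, not the FlexNoC or Ncore API):

```python
# Conceptual sketch: scaling AI compute by replicating one soft tile
# across a larger mesh topology, with no change to the tile itself.
from itertools import product

def build_mesh(rows, cols):
    # Each mesh coordinate instantiates the same self-contained soft
    # tile; only the grid dimensions change between designs.
    return [(r, c) for r, c in product(range(rows), range(cols))]

small = build_mesh(2, 2)   # 4 tiles
large = build_mesh(7, 7)   # 49 tiles of the identical, pre-verified unit

print(len(large) / len(small))  # -> 12.25 (a >10x scale-up)
```

Because every node is the same verified unit, growing from the 2x2 to the 7x7 mesh multiplies compute more than tenfold without touching the tile's internal design — the property the announcement attributes to tiling plus mesh topologies.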


The first iteration of NoC tiling organizes Network Interface Units (NIUs) into modular, repeatable blocks, improving scalability, efficiency and reliability in SoC designs. The resulting SoCs deliver larger and more advanced AI compute that supports fast-growing, sophisticated AI workloads for vision, Machine Learning (ML) models, Deep Learning (DL), Natural Language Processing (NLP) including Large Language Models (LLMs), and Generative AI, for both training and inference, including at the edge.

