IntelliProp Unveils Omega Memory Fabric Chips That Allow Dynamic Allocation and Sharing of Memory Across Compute Domains – Both In and Out of the Server
IntelliProp, a leading innovator of composable data center transformation technology, announced its intent to deliver its disruptive Omega Memory Fabric chips. The chips incorporate the Compute Express Link (CXL) Standard, along with IntelliProp’s innovative Fabric Management Software and Network Attached Memory (NAM) system. In addition, the company announced the availability of three field-programmable gate array (FPGA) solutions built with its Omega Memory Fabric.
The Omega Memory Fabric eliminates memory bottlenecks and allows dynamic allocation and sharing of memory across compute domains, both inside and outside the server – an industry first that delivers on the promise of Composable Disaggregated Infrastructure (CDI) and rack-scale architecture. IntelliProp’s memory-agnostic innovation will lead to the adoption of composable memory and transform data center energy, performance, efficiency and cost.
As data continues to grow, database and AI applications are constrained by memory bandwidth and capacity. At the same time, billions of dollars are being wasted on stranded and underutilized memory. According to a recent Carnegie Mellon / Microsoft report, Google stated that average DRAM utilization in its data centers is 40%, and Microsoft Azure said that 25% of its server DRAM is stranded.
“IntelliProp’s efforts in extending CXL connectivity beyond simple memory expansion demonstrate what is achievable in scaled-out, composable data center resources,” said Jim Pappas, Chairman of the CXL Consortium. “Their advancements on both CXL and Gen-Z hardware and management software components have strengthened the CXL ecosystem.”
Experts agree that memory disaggregation increases memory utilization and reduces stranded or underutilized memory. Today’s remote direct memory access (RDMA)-based disaggregation has too much overhead for most workloads, and virtualization solutions are unable to provide transparent latency management. The CXL standard offers low-overhead memory disaggregation and provides a platform to manage latency.
“History tends to repeat itself. NAS and SAN evolved to solve the problems of over/under storage utilization, performance bottlenecks and stranded storage. The same issues are occurring with memory,” stated John Spiers, CEO, IntelliProp. “Our trailblazing approach to CXL technology unlocks memory bottlenecks and enables next-generation performance, scale and efficiency for database and AI applications. For the first time, high-bandwidth, petabyte-level memory can be deployed for vast in-memory datasets, minimizing data movement, speeding computation and greatly improving utilization. We firmly believe IntelliProp’s technology will drive disruption and transformation in the data center, and we intend to lead the adoption of composable memory.”
Omega Memory Fabric / NAM System, Powered by IntelliProp’s ASIC
IntelliProp’s Omega Memory Fabric and Management Software enables enterprise composability of memory and CXL devices, including storage. Powered by IntelliProp’s ASIC, the Omega Memory Fabric-based NAM system and software expands the connection and sharing of memory inside and outside the server, placing memory pools where needed. The Omega NAM is well suited for AI, ML, big data, HPC, cloud and hyperscale/enterprise data center environments, specifically targeting applications requiring large amounts of memory.
“In a survey IDC completed in early 2022, almost half of enterprise respondents indicated that they anticipate memory-bound limitations for key enterprise applications over time,” said Eric Burgener, research vice president, Infrastructure Systems, Platforms and Technologies Group, IDC. “New memory pooling technologies like what IntelliProp is offering with their NAM system will help to address this concern, enabling dynamic allocation and sharing of memory across servers with high performance and without hardware slot limitations. The composable disaggregated infrastructure market that IntelliProp is playing in is an exciting new market that is expected to grow at a 28.2 percent five-year compound annual growth rate to crest at $4.8 billion by 2025.”
With IntelliProp’s Omega Memory Fabric and Management Software, hyperscale and enterprise customers will be able to take advantage of multiple tiers of memory with predetermined latency. The system will enable large memory pools to be placed where needed, allowing multiple servers to access the same dataset. It also allows new resources to be added with a simple hot plug, eliminating server downtime and rebooting for upgrades.
“IntelliProp is on to something big. CXL disaggregation is key, as half of the cost of a server is memory. With CXL disaggregation, they are taking memory sharing to a whole new level,” said Marc Staimer, Dragon Slayer analyst. “IntelliProp’s technology makes large pools of memory shareable between external systems. That has immense potential to boost data center performance and efficiency while reducing overall system costs.”
Omega Memory Fabric Features, Incorporating the CXL Standard
- Scale and share memory outside the server
- Dynamic multi-pathing and allocation of memory
- End-to-end (E2E) security using AES-XTS 256 with added integrity protection
- Supports non-tree topologies for peer-to-peer
- Direct path from GPU to memory
- Management scaling for large deployments using multiple fabrics/subnets and distributed managers
- Direct memory access (DMA) moves data between memory tiers efficiently without tying up CPU cores
- Memory agnostic and up to 10x faster than RDMA
“AI is one of the world’s most demanding applications, in terms of compute and storage. The prospect of using ML in genomics, for example, requires exascale compute and low-latency access to petabytes of storage. The ability to dynamically allocate shareable pools of memory over the network and across compute domains is a feature we are very excited about,” said Nate Hayes, Co-Founder and Board Member at RISC AI. “We think the fabric from IntelliProp provides the latency, scale and composable disaggregated infrastructure for the next-generation AI training platform we are developing at RISC AI, and this is why we are planning to integrate IntelliProp’s technology into the high-performance RISC-V processors that we will be manufacturing.”
Omega Memory Fabric Solutions Bring Future CXL Advantages to Data Centers
IntelliProp unveiled three FPGA solutions as part of its Omega Fabric product suite. The solutions connect CXL devices to CXL hosts, allowing data centers to increase performance, scale across dozens to thousands of host nodes, consume less energy (data travels with fewer hops) and mix shared DRAM (fast memory) with shared SCM (slow memory), lowering total cost of ownership (TCO).