
StorONE Unveils ONEai: The First Fully Automated AI Solution Optimized for Enterprise Data Storage


StorONE, the developer of the most efficient storage platform, delivering unmatched data protection and flexibility with minimal hardware, today announced ONEai, a turnkey, automated AI solution for enterprise data storage. In partnership with Phison Electronics (8299TT), a leading innovator in NAND flash technologies, StorONE integrated Phison's aiDAPTIV+ AI capabilities into the StorONE enterprise storage system to accelerate AI deployment and deliver domain-specific responses on stored data for end users. ONEai leverages GPU and memory optimization, intelligent data placement, and built-in support for LLM inferencing and fine-tuning directly within the storage framework, offering an efficient, AI-integrated system with minimal setup complexity. With ONEai, users benefit from reduced power, operational, and hardware costs, enhanced GPU performance, and on-premises LLM training and inferencing on proprietary organizational data.


As organizations grapple with how to extract insights from stored data, IT leaders and data infrastructure managers are challenged to surface findings from multi-terabyte to petabyte-scale data pools with limited AI capabilities. Previously, many organizations that sought to leverage proprietary data securely had to build complex AI infrastructure or navigate the regulations and costs of off-premises solutions. In these traditional approaches, high-performance storage typically serves only as a back end for LLM training, requiring external orchestration, separate AI stacks, and cloud or hybrid workflows.

To solve this challenge, StorONE partnered with Phison to offer ONEai, delivering fully automated, AI-native LLM training and inferencing capabilities directly within the storage layer. ONEai automatically recognizes and responds to file creation, modification, and deletion, providing real-time insights into data stored in the system. The AI-integrated storage solution is optimized for fine-tuning, RAG, and inferencing, features integrated GPU memory extensions, and simplifies data management through an intuitive GUI, eliminating the need for complex infrastructure or external AI platforms.
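StorONE has not published ONEai's internal interfaces, but the change-aware behavior described above can be pictured with a generic sketch: a watcher at the storage layer reacts to file create, modify, and delete events and keeps a retrieval index current for downstream RAG or fine-tuning jobs. The sketch below is an illustrative assumption only; the watchdog library, the ChangeAwareIndex class, the embed() stub, and the /mnt/storage/docs path are not part of ONEai.

    # Illustrative sketch only: keep a retrieval index in sync with file events.
    # None of this is StorONE code; embed() is a stand-in for a real embedding model.
    import hashlib
    from pathlib import Path

    from watchdog.events import FileSystemEventHandler
    from watchdog.observers import Observer


    def embed(text: str) -> list[float]:
        # Placeholder embedding: a real system would call a local encoder model.
        digest = hashlib.sha256(text.encode()).digest()
        return [b / 255.0 for b in digest[:8]]


    class ChangeAwareIndex(FileSystemEventHandler):
        # Updates an in-memory vector index as files are created, modified, or deleted.

        def __init__(self) -> None:
            self.vectors: dict[str, list[float]] = {}

        def _reindex(self, path: str) -> None:
            p = Path(path)
            if p.is_file():
                self.vectors[path] = embed(p.read_text(errors="ignore"))

        def on_created(self, event):
            if not event.is_directory:
                self._reindex(event.src_path)

        def on_modified(self, event):
            if not event.is_directory:
                self._reindex(event.src_path)

        def on_deleted(self, event):
            self.vectors.pop(event.src_path, None)  # drop stale entries immediately


    if __name__ == "__main__":
        observer = Observer()
        observer.schedule(ChangeAwareIndex(), path="/mnt/storage/docs", recursive=True)
        observer.start()
        observer.join()  # run until interrupted

In ONEai itself, the announcement states that this kind of change tracking feeds ongoing fine-tuning, RAG, and inferencing; the sketch only shows the general pattern, not the product's implementation.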

"ONEai sets a new benchmark for an increasingly AI-integrated industry, where storage is the launchpad to take data from a static component to a dynamic application," said Gal Naor, CEO of StorONE. "Through this technology partnership with Phison, we are filling the gap between traditional storage and AI infrastructure by delivering a turnkey, automated solution that simplifies AI data insights for organizations with limited budgets or expertise. We're lowering the barrier to entry to enable enterprises of all sizes to tap into AI-driven intelligence without the requirement of building large-scale AI environments or sending data to the cloud."

"We're proud to partner with StorONE to enable a first-of-its-kind solution that addresses challenges in access to expanded GPU memory, high-performance inferencing and larger-capacity LLM training without the need for external infrastructure," said Michael Wu, GM and President of Phison US. "Through the aiDAPTIV+ integration, ONEai connects the storage engine and the AI acceleration layer, ensuring optimal data flow, intelligent workload orchestration and highly efficient GPU utilization. The result is an alternative to the DIY approach for IT and infrastructure teams, who can now opt for a pre-integrated, seamless, secure and efficient AI deployment within the enterprise infrastructure."

Capabilities in ONEai include:

Integrated AI processing at the storage layer

  • Native LLM training and inference built directly into the storage stack; no external AI infrastructure required
  • Plug-and-play deployment eliminates the need for a separate AI stack or in-house AI expertise, with full on-premises processing for complete data sovereignty and control over sensitive data

GPU optimization and performance efficiency

  • High GPU efficiency minimizes the number of GPUs required, reducing power and operational costs
  • Integrated GPU modules reduce AI inference latency and deliver up to 95% hardware utilization

Real-world use case alignment

  • Tailored for real customer environments to enable immediate interaction with proprietary data, ONEai automatically tracks and updates changes to data and feeds them into ongoing AI activities (a query-side sketch follows this list)
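To make the "immediate interaction with proprietary data" idea concrete, here is a minimal retrieval-augmented query sketch under stated assumptions: the corpus dictionary and the embed(), cosine(), retrieve(), and generate() functions below are hypothetical placeholders, not ONEai interfaces; a real deployment would use the platform's embedding model and an on-premises LLM.

    # Illustrative sketch only: retrieval-augmented querying of locally stored data.
    import math


    def embed(text: str) -> list[float]:
        # Placeholder embedding: normalized character frequencies, not a real model.
        vec = [0.0] * 26
        for ch in text.lower():
            if "a" <= ch <= "z":
                vec[ord(ch) - ord("a")] += 1.0
        norm = math.sqrt(sum(v * v for v in vec)) or 1.0
        return [v / norm for v in vec]


    def cosine(a: list[float], b: list[float]) -> float:
        return sum(x * y for x, y in zip(a, b))


    def retrieve(question: str, docs: dict[str, str], k: int = 2) -> list[str]:
        # Rank documents by similarity to the question and return the top k names.
        q = embed(question)
        ranked = sorted(docs, key=lambda name: cosine(q, embed(docs[name])), reverse=True)
        return ranked[:k]


    def generate(prompt: str) -> str:
        # Placeholder for an on-premises LLM call.
        return f"[local LLM answer based on a {len(prompt)}-character prompt]"


    if __name__ == "__main__":
        corpus = {  # stand-in for files already indexed at the storage layer
            "q3_report.txt": "Quarterly revenue grew on storage subscriptions.",
            "policy.txt": "All proprietary data must remain on premises.",
        }
        question = "Where must proprietary data be processed?"
        context = "\n".join(corpus[name] for name in retrieve(question, corpus))
        print(generate(f"Context:\n{context}\n\nQuestion: {question}"))

The design choice illustrated here is the one the announcement emphasizes: retrieval and generation both run against on-premises data, so proprietary content never leaves the organization's infrastructure.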
