- Solidigm’s new AI lab redefines what dense SSD storage can achieve
- The cluster delivers record throughput, yet questions linger over true scalability
- 23.6 petabytes squeezed into 16U challenges assumptions about how dense enterprise storage can get
Solidigm has opened its AI Central Lab at the FarmGPU site near its headquarters in Rancho Cordova, California.
The facility is presented as a place to study how storage interacts with artificial intelligence workloads using high-performance GPUs and dense storage arrays.
The company says the lab provides one of the most compact large-scale clusters in the industry, intended to replicate conditions found in modern data centers.
Testing claims of record performance
At the core of the announcement are Solidigm’s D7-PS1010 SSDs, which were used to reach 116GB/s per node in MLPerf Storage benchmarks.
This figure is described as a record result, although the value of such synthetic testing for real-world AI operations remains open to debate.
Benchmarks often highlight peak throughput, but actual workload performance can depend heavily on software and system integration.
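To put that per-node figure in rough context, the back-of-the-envelope sketch below converts raw throughput into the number of accelerators a single node could plausibly keep fed. The per-GPU bandwidth value is a hypothetical assumption for illustration only, not a figure from MLPerf Storage or Solidigm.

```python
# Rough illustration of what 116 GB/s per node could mean for feeding GPUs.
# per_gpu_read_gbps is a hypothetical assumption, not a published requirement.
node_throughput_gbps = 116   # GB/s per node, as reported for the D7-PS1010 run
per_gpu_read_gbps = 8        # assumed read demand per accelerator, in GB/s

gpus_fed = node_throughput_gbps // per_gpu_read_gbps
print(f"~{gpus_fed} accelerators sustained at {per_gpu_read_gbps} GB/s each")
```

Real training pipelines rarely demand a constant stream, which is exactly why synthetic peak numbers and production behavior can diverge.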
Still, the lab offers a platform where storage vendors, developers, and partners can run experiments under controlled but relevant conditions.
“Our Solidigm AI Central Lab combines today’s most powerful GPUs with leading storage infrastructure to unlock new levels of testing and joint innovation for our customers and the developer community,” said Avi Shetty, Senior Director, AI Ecosystems and Partnerships, Solidigm.
“These were capabilities previously only available to select companies, and Solidigm is now enabling them while demonstrating the criticality of having storage close to the GPU.”
Perhaps the most eye-catching aspect is the claim of 23.6PB of storage packed into only 16U of rack space.
This was achieved using 192 Solidigm D5-P5336 SSDs, each with 122TB capacity.
By volume, it may represent one of the densest enterprise storage clusters deployed publicly.
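A quick sanity check of the headline figure, assuming the drive's nominal 122.88TB capacity (the article rounds this to 122TB), bears the claim out:

```python
# Back-of-the-envelope check of the stated density, using values from the article
# plus the assumed nominal 122.88TB per-drive capacity.
drives = 192
capacity_tb = 122.88      # assumed nominal capacity per D5-P5336, in TB
rack_units = 16

total_pb = drives * capacity_tb / 1000   # ~23.6 PB in total
pb_per_u = total_pb / rack_units         # ~1.5 PB per rack unit
print(f"{total_pb:.1f} PB total, {pb_per_u:.2f} PB per U")
```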
Such density raises questions about how the hardware compares with the best HDD options, which continue to offer a lower cost per terabyte.
If performance demands outweigh cost efficiency, Solidigm's setup could be seen as a blueprint for the largest SSD-based systems in production environments.
Solidigm emphasized collaboration with partners such as Metrum AI, which says it cut DRAM usage during retrieval-augmented generation (RAG) by as much as 57% by offloading data to SSDs.
Such claims suggest potential benefits for memory management, though they also highlight the dependence of AI efficiency on tightly coupled hardware and software tuning.
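As a purely illustrative sketch of the general idea, not Metrum AI's implementation, the snippet below keeps a large embedding corpus memory-mapped on an SSD so retrieval only pulls in the pages it touches, instead of holding the whole index in DRAM. File name, corpus size, and dimensions are all hypothetical.

```python
# Illustrative only: a memory-mapped embedding corpus kept on SSD rather than
# loaded into DRAM, scanned in chunks for a brute-force top-k search.
import numpy as np

EMB_PATH = "embeddings.f32"   # hypothetical float32 vector file on the SSD
DIM = 1024                    # hypothetical embedding dimension
NUM_VECTORS = 50_000_000      # hypothetical corpus size (~200 GB of float32 data)

# np.memmap leaves the data on disk; pages are read on demand, so resident
# DRAM stays far below the full corpus size.
corpus = np.memmap(EMB_PATH, dtype=np.float32, mode="r",
                   shape=(NUM_VECTORS, DIM))

def top_k(query: np.ndarray, k: int = 5, chunk: int = 1_000_000):
    """Dot-product search over the memory-mapped corpus, one chunk at a time."""
    best_scores = np.full(k, -np.inf, dtype=np.float32)
    best_ids = np.zeros(k, dtype=np.int64)
    for start in range(0, NUM_VECTORS, chunk):
        block = corpus[start:start + chunk]      # pulled from SSD as needed
        scores = block @ query                   # similarity score per vector
        ids = np.argpartition(scores, -k)[-k:]   # best k within this chunk
        merged_scores = np.concatenate([best_scores, scores[ids]])
        merged_ids = np.concatenate([best_ids, ids + start])
        keep = np.argpartition(merged_scores, -k)[-k:]
        best_scores, best_ids = merged_scores[keep], merged_ids[keep]
    order = np.argsort(-best_scores)
    return best_ids[order], best_scores[order]
```

In practice, production systems pair this kind of SSD offload with purpose-built indexes and caching, which is where the tightly coupled hardware and software tuning mentioned above comes in.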
While the best SSD units in the lab can demonstrate impressive speed and density, the broader market may weigh these gains against practical considerations like energy use, scalability, and long-term cost.
Solidigm’s AI Central Lab positions itself as a space for both innovation and marketing demonstration.
“Running storage tests isn’t enough anymore. In our AI Central Lab, we can execute real-world AI workloads and use our cutting-edge telemetry capabilities to optimize systems for performance and efficiency and gain insights into the storage needs of emerging workloads,” Shetty said.