WEKA, the AI storage company, announced the next generation of its WEKApod appliances, designed to redefine AI storage economics. WEKApod appliances are the easiest and fastest way to deploy and scale NeuralMesh.
The Infrastructure Efficiency Crisis
Organizations investing in AI infrastructure face mounting pressure to demonstrate ROI as valuable GPU resources sit underutilized, training cycles extend timelines, inference costs consume margins, and cloud bills grow with data volumes. Traditional storage systems force an impossible choice: extreme performance or manageable costs, never both. Meanwhile, datacenter constraints on rack space, power budgets, and cooling capacity mean every rack unit must deliver more capability, or infrastructure costs spiral out of control.
The next-generation WEKApod family addresses these business challenges directly. WEKApod Prime eliminates the forced trade-off between performance and cost, delivering 65% better price-performance through intelligent data placement that automatically optimizes where data lives based on workload characteristics, ensuring writes maintain full performance while achieving breakthrough economics.
Now AI Builders and Cloud Providers Can Eliminate the Performance-Cost Trade-off
WEKApod Prime offers a unique approach to mixed-flash technology, intelligently combining TLC and eTLC flash drives in configurations of up to 20 drives in a 1U chassis or 40 drives in a 2U chassis. Unlike traditional tiered storage solutions that introduce cache hierarchies, data movement between tiers, and write-performance penalties, WEKApod's AlloyFlash delivers consistent performance across all operations, including full write performance without throttling, while achieving breakthrough economics. No compromises, no write-performance penalties, no cache hierarchies. WEKApod Prime is already in use at leading AI cloud pioneers such as the Danish Centre for AI Innovation (DCAI).
The result is exceptional density that directly addresses datacenter resource constraints: 4.6x better capacity density, 5x better write IOPS per rack unit (versus previous generation), 4x better power density at 23,000 IOPS per kW (or 1.6 PB per kW), and 68% less power consumption per terabyte while maintaining the extreme performance AI workloads demand. For write-intensive AI workloads like training and checkpointing, this means storage keeps pace without performance penalties that can idle expensive GPUs during critical operations.
WEKApod Nitro, purpose-built for AI factories running hundreds or thousands of GPUs, delivers 2x faster performance and 60% better price-performance through upgraded hardware, including the NVIDIA ConnectX-8 SuperNIC with 800 Gb/s throughput and 20 TLC drives in a compact 1U form factor. As turnkey NVIDIA DGX SuperPOD and NVIDIA Cloud Partner (NCP)-certified appliances, both WEKApod configurations eliminate weeks of integration work, allowing organizations to bring customer services to market in days rather than months while ensuring storage infrastructure keeps pace with next-generation accelerators.
Measurable Business Impact Across AI Deployments
AI infrastructure providers are already seeing direct business impact from these innovations:
“Space and power are the new limits of innovation in data centres. WEKApod’s exceptional storage performance density allows us to deliver hyperscaler-level data throughput and efficiency within an optimised footprint—unlocking more AI capability per kilowatt and square metre,” said Nadia Carlsten, CEO, Danish Centre for AI Innovation (DCAI). “This efficiency directly improves economics and accelerates how we bring AI innovation to our customers.”
“AI investments must demonstrate ROI. WEKApod Prime delivers 65% better price-performance without compromising on speed, while WEKApod Nitro doubles performance to maximize GPU utilization. The result: faster model development, higher inference throughput, and better returns on compute investments that directly impact profitability and time-to-market,” said Ajay Singh, Chief Product Officer at WEKA.
“Networking is essential to AI infrastructure, transforming AI compute and storage into a thinking platform that generates and delivers tokens of digital intelligence at scale,” said Kevin Deierling, senior vice president of Networking at NVIDIA. “With NVIDIA Spectrum-X Ethernet and NVIDIA ConnectX-8 networking at the foundation of WEKApod, WEKA is helping enterprises eliminate data bottlenecks, which is critical to optimizing AI performance.”
The post WEKA Unveils Next-Gen WEKApod Appliances to Redefine AI Storage Economics first appeared on PressReleaseCC.