On-Prem AI Needs Fast Storage

Daniel Sarosi
CEO, Great Wall Connect
AIoT Expert, Textile Industry
Ex-IBM Staff Software Engineer (R&D)
BSc in Computer Science, Western University, London

In this interview, Daniel Sarosi explains why the shift from cloud AI to on-prem AI is accelerating.

As more companies move AI workloads in-house to avoid unpredictable cloud costs and data-exposure risks, a new bottleneck emerges: storage I/O. Large language models must be loaded, swapped, and executed in parallel at high speed, and without high-performance storage, even the best GPUs sit idle waiting on reads.
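To put rough numbers on that idle time, here is a minimal back-of-the-envelope sketch in Python. The checkpoint size follows directly from parameter count times bytes per parameter; the 70B FP16 model and the per-device throughput figures are illustrative assumptions, not measurements from the interview.

```python
# Back-of-the-envelope: how long does loading an LLM checkpoint from
# storage take? All figures below are illustrative assumptions.

def load_time_seconds(params_billion: float, bytes_per_param: int,
                      throughput_gb_s: float) -> float:
    """Seconds to read a checkpoint of params_billion parameters, stored
    at bytes_per_param bytes each, from a device sustaining
    throughput_gb_s GB/s of sequential reads."""
    size_gb = params_billion * bytes_per_param  # e.g. 70B x 2 B = 140 GB
    return size_gb / throughput_gb_s

# Hypothetical 70B-parameter model in FP16 (~140 GB on disk):
for name, gb_s in [("SATA SSD (~0.5 GB/s)", 0.5),
                   ("PCIe 4.0 NVMe (~7 GB/s)", 7.0),
                   ("PCIe 5.0 NVMe (~14 GB/s)", 14.0)]:
    print(f"{name}: {load_time_seconds(70, 2, gb_s):.0f} s")
# SATA: ~280 s; Gen4 NVMe: ~20 s; Gen5 NVMe: ~10 s. The GPU is idle
# for all of it, and swapping between models multiplies the cost.
```

Under these assumptions, moving from SATA to PCIe 5.0 NVMe cuts a single model load from minutes to seconds, which is exactly the gap that matters once workflows swap models frequently.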

Daniel argues that the future of on-prem AI will depend not only on compute and memory, but also on storage systems fast enough to keep up with multi-agent, multi-model workflows.