Data Centres Aren’t Always Maxed Out — Here’s How to Fill the Gaps
Most people think data centres run flat-out, 24/7. They don't. As Tyler H. Norris points out on the Power & Policy blog, the common mix-up between "load factor" and "capacity utilization" makes it look like facilities run near 100% of their limit. In reality, redundancy, maintenance, and spiky workloads keep them well below that limit. That gap matters. It shapes how we build infrastructure and how we can use energy more wisely.
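To see why the two metrics diverge, here is a minimal worked example in Python. The figures (a 100 MW connection, a 60 MW peak, a 54 MW average) are hypothetical, chosen only to show how a roughly 90% load factor can coexist with capacity utilization near 50%; they are not drawn from Norris's post.

```python
# Illustrative numbers only: a facility with a 100 MW grid connection
# whose load peaks at 60 MW and averages 54 MW over the period.
contracted_capacity_mw = 100.0   # interconnection / design limit
peak_load_mw = 60.0              # highest load actually drawn
average_load_mw = 54.0           # average load over the period

# Load factor: how steady the load is relative to its own peak.
load_factor = average_load_mw / peak_load_mw                      # 0.90

# Capacity utilization: how much of the facility's limit is actually used.
capacity_utilization = average_load_mw / contracted_capacity_mw   # 0.54

print(f"Load factor:          {load_factor:.0%}")          # 90%
print(f"Capacity utilization: {capacity_utilization:.0%}")  # 54%
```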

1. Turning Idle Capacity into Value
Navon is rethinking the model. Instead of letting servers sit idle, we colocate flexible compute — like Bitcoin mining — alongside AI and HPC workloads. This pushes Infrastructure Usage Effectiveness (IUE) closer to 100%, keeping power and cooling systems productive instead of underused.
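As a rough sketch of the effect, the snippet below assumes IUE can be read as the share of provisioned power and cooling capacity doing productive work; the exact definition and all figures here are illustrative assumptions, not Navon numbers.

```python
# Sketch only: assumes "IUE" here means average utilized power divided by
# provisioned power capacity. All figures are hypothetical.
provisioned_mw = 100.0    # power and cooling built out for the site
ai_hpc_avg_mw = 55.0      # average draw from primary AI/HPC tenants

def iue(utilized_mw: float, provisioned_mw: float) -> float:
    """Share of provisioned infrastructure doing productive work."""
    return utilized_mw / provisioned_mw

baseline = iue(ai_hpc_avg_mw, provisioned_mw)                      # 0.55

# Flexible compute (e.g. Bitcoin mining) absorbs part of the idle headroom,
# leaving room for redundancy and maintenance.
flexible_avg_mw = 30.0
with_flex = iue(ai_hpc_avg_mw + flexible_avg_mw, provisioned_mw)   # 0.85

print(f"IUE without flexible load: {baseline:.0%}")
print(f"IUE with flexible load:    {with_flex:.0%}")
```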
2. Balancing Spiky AI/HPC Workloads
AI training and inference loads aren’t steady. They spike, pause, and swing with traffic. In one NVIDIA-backed trial, Emerald AI showed GPU clusters could flex power by 25% during grid stress events without harming performance. Navon builds on this idea: flexible compute fills the troughs and eases back when clusters surge, smoothing demand for both the grid and the facility.
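A minimal sketch of the trough-filling idea, assuming a simple rule where flexible compute targets whatever headroom the AI/HPC load leaves under the facility's power cap. The cap, safety margin, and fleet size are hypothetical, not Navon's actual control logic.

```python
# Flexible load fills the trough left by AI/HPC demand and backs off
# automatically when the clusters surge. All constants are illustrative.
FACILITY_CAP_MW = 100.0   # total power the site may draw
SAFETY_MARGIN_MW = 10.0   # reserve for redundancy and maintenance
FLEX_CAPACITY_MW = 40.0   # maximum the flexible fleet can absorb

def flexible_setpoint(ai_hpc_load_mw: float) -> float:
    """Return the power target for flexible compute given current AI/HPC load."""
    headroom = FACILITY_CAP_MW - SAFETY_MARGIN_MW - ai_hpc_load_mw
    return max(0.0, min(FLEX_CAPACITY_MW, headroom))

# AI/HPC load swinging from a training surge down to an idle trough.
for ai_load in [85.0, 60.0, 35.0]:
    print(f"AI/HPC {ai_load:5.1f} MW -> flexible {flexible_setpoint(ai_load):5.1f} MW")
```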
3. Monetizing Flexibility and Supporting the Grid
When power is cheap or renewables are abundant, flexible compute scales up. When prices rise or the grid needs relief, it scales down. This creates new revenue streams while helping stabilize the grid — something traditional colocation rarely does.
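The same fleet can follow price and grid signals. The rule below is a sketch under assumed thresholds; the price ceiling, curtailment depth, and response to grid events are placeholders, not Navon's dispatch policy.

```python
# Price- and grid-aware dispatch for the flexible fleet. Thresholds are
# illustrative assumptions only.
FLEX_CAPACITY_MW = 40.0
PRICE_CEILING = 60.0      # $/MWh above which flexible compute backs off

def flexible_dispatch_mw(price_per_mwh: float, grid_stress: bool) -> float:
    """Scale flexible load down as power gets expensive or the grid is stressed."""
    if grid_stress:
        return 0.0                       # shed entirely during a grid event
    if price_per_mwh >= PRICE_CEILING:
        return 0.25 * FLEX_CAPACITY_MW   # keep only a small baseline running
    return FLEX_CAPACITY_MW              # cheap or abundant power: run flat-out

for price, stress in [(25.0, False), (80.0, False), (45.0, True)]:
    print(f"${price:5.1f}/MWh, stress={stress} -> {flexible_dispatch_mw(price, stress):4.1f} MW")
```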
The Takeaway
The myth of the “flat 90% load factor” hides opportunity. True utilization is much lower. Navon’s hybrid model treats underuse as an asset, combining Tier III reliability for AI clients with flexible loads that monetize slack and support grid stability. The future isn’t about running data centres at max all the time. It’s about running them smart.
