A recent article in Data Center Frontier outlined the top 10 concerns of data center managers in 2019. No surprise, these same concerns were considered hot topics dating back to 2015 and perhaps even earlier.

The simple reality is this: Today’s data centers need to be architected differently to deliver the efficiency, agility and capabilities demanded by hyperscale and hypergrowth environments. Also true: The blazing fast pace of edge and cloud computing has rendered traditional data center models obsolete.

Customers continually tell me they want to take a more modern approach to data center construction and management, especially when it comes to power. Not only is it nearly impossible to provision power at the edge using traditional methods, but typical data center financial models also can’t compete with those of cloud service providers. It’s simply unrealistic to minimize OPEX and CAPEX without adding software virtualization to the infrastructure wherever possible, including the power plane.

By doing so, organizations could free up as much as 50% of their power capacity, which would otherwise sit stranded, trapped and unable to be used efficiently. Too often, the problem is compounded by over-provisioning power capacity, which upends ROI models and risks pushing the project into the red. At scale, things get even dicier, as each new workload and piece of hardware brings its own set of demands on data centers and networks.
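To make the stranded-power math concrete, here’s a minimal back-of-the-envelope sketch in Python. The rack names, provisioned capacities and peak readings are hypothetical placeholders, not figures from any real facility; the point is simply that the gap between what’s provisioned and what’s actually drawn is capacity you’re paying for but can’t use.

```python
# Illustrative only: the rack names and kilowatt figures below are made up.
# Stranded power is the gap between what a rack is provisioned for and the
# peak it actually draws.

racks = {
    # rack_id: (provisioned_kw, observed_peak_kw)
    "rack-a01": (12.0, 5.8),
    "rack-a02": (12.0, 7.1),
    "rack-b01": (17.3, 6.4),
}

provisioned = sum(p for p, _ in racks.values())
peak_draw = sum(d for _, d in racks.values())
stranded = provisioned - peak_draw

print(f"Provisioned capacity: {provisioned:.1f} kW")
print(f"Observed peak draw:   {peak_draw:.1f} kW")
print(f"Stranded capacity:    {stranded:.1f} kW "
      f"({stranded / provisioned:.0%} of what was provisioned)")
```

In practice the provisioned and peak figures would come from DCIM or PDU telemetry rather than a hard-coded dictionary, but the arithmetic is the same.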

New workloads and hardware have been designed from the ground up to withstand failures without taking the data center offline. Enterprise-grade resiliency and fault-tolerant features, along with best practices for data backup and recovery, ensure that applications keep running without downtime or data loss when a disruption occurs.
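As a rough illustration of that application-level resiliency, here is a minimal Python sketch of read failover across replicas. The replica list and the fetch callable are hypothetical stand-ins for whatever client a real stack would use; the point is that losing one node, or the power feed behind it, costs capacity rather than availability.

```python
# A minimal sketch of application-level failover: try each replica in
# turn and return the first successful result. Names are hypothetical.
from typing import Callable, Optional, Sequence, TypeVar

T = TypeVar("T")

def read_with_failover(replicas: Sequence[str],
                       fetch: Callable[[str], T]) -> T:
    """Return the first successful result, trying replicas in order."""
    last_error: Optional[Exception] = None
    for replica in replicas:
        try:
            return fetch(replica)
        except Exception as err:      # a real client would narrow this
            last_error = err          # note the failure and keep going
    raise RuntimeError("all replicas failed") from last_error
```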

Sadly, many power experts and power hardware vendors don’t embrace this mindset. Instead, they continue to heap on expansive layers of power infrastructure and over-provision capacity, rather than using virtualization and artificial intelligence to forecast demand more accurately and automatically reallocate power based on capacity and availability requirements.
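By way of contrast, here is a deliberately simplified Python sketch of the forecast-and-reallocate idea: predict each rack’s near-term draw from recent samples, hold back a safety margin, and flag headroom that could be shifted to racks approaching their caps. The rack names, caps, readings and thresholds are all assumptions for illustration, and a simple moving average stands in for whatever forecasting model a real system would use.

```python
# A toy illustration of forecast-and-reallocate: estimate each rack's
# near-term draw, keep a safety margin, and surface headroom that could
# be shifted to racks nearing their caps. All values are hypothetical.
from statistics import mean

SAFETY_MARGIN = 1.15  # assume 15% headroom above the forecast

# Recent power samples (kW) per rack; in practice these would come from
# PDU or BMS telemetry.
samples = {
    "rack-a01": [5.2, 5.6, 5.8, 5.5],
    "rack-a02": [7.0, 7.4, 7.1, 7.3],
    "rack-b01": [3.1, 3.0, 3.2, 3.1],
}
caps = {"rack-a01": 8.0, "rack-a02": 8.0, "rack-b01": 8.0}

forecast = {rack: mean(vals) * SAFETY_MARGIN for rack, vals in samples.items()}
headroom = {rack: caps[rack] - forecast[rack] for rack in caps}

# Racks forecast to exceed ~90% of their cap are candidates to receive
# budget freed from racks with at least 1 kW of spare headroom.
needs_more = [r for r in caps if forecast[r] > 0.9 * caps[r]]
can_give = {r: h for r, h in headroom.items() if h > 1.0}

print("Forecast (kW):", {r: round(f, 2) for r, f in forecast.items()})
print("Racks near their cap:", needs_more)
print("Reallocatable headroom (kW):", {r: round(h, 2) for r, h in can_give.items()})
```

The takeaway isn’t the specific numbers; it’s that a control loop along these lines can hand out headroom dynamically instead of nailing worst-case capacity to every rack up front.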

Data center builders, operators and managers need to think differently. It’s time to set new standards that take advantage of software virtualization to better address data center growth while alleviating power-capacity constraints. Let’s face it: power, cooling and lighting effectively set those standards, because they’re among the first areas considered when planning a data center buildout or expansion. The parameters chosen for these critical infrastructure areas dictate much of what follows, which is why it’s so important to look for solutions that support on-demand expansion. Every subsequent decision, from which equipment goes into which rack to how much airflow, cooling and electricity is needed at the rack, node, workload or circuit level, must align.

Every data center I’ve visited over the past few months is facing capacity challenges. Managers lament that they were forced to place eight-figure bets on power CAPEX and seven-figure bets on power OPEX many months before initial use. On top of that, they face the reality that years can go by before breakeven, let alone profitability, is achieved.

So why gamble on something as fundamental to data center operations as power? It doesn’t make sense. Frankly, playing data center roulette is something you want to avoid because sooner or later, you will lose. The winning bet is one in which software virtualization is used to increase power capacity and resiliency within existing IT footprints. I’ll take that wager every time because I know it can lead to a bump in power capacity of up to 40% without any compromise to data center or system availability.