The Definition of Insanity – Keep Doing the Same Thing Over & Over Expecting the Results to Change!

It’s time for power people to change how they think about power in the data center. I recently read an updated white paper on Density in the Data Center from Server Technology that highlights how density in the data center continues to climb alongside ever-increasing loads on compute infrastructure.

Well, that makes sense considering the rapid pace of technology advancements since 2014, when the paper was originally authored. But wouldn’t you expect major improvements in rack density, power allocation and power efficiency since then?

Our customer conversations reflect a mounting frustration that power optimization seems to be the final frontier. Too often, enterprises and hyperscale service providers over-design, over-provision and over-spend on power infrastructures to ensure there’s always sufficient capacity. But that doesn’t help drive data center automation and efficient energy utilization.

It’s time to change the antiquated thinking about power in the data center, which remains stuck in the 20th century. What’s needed is a modern mindset and a 21st-century model that leverages virtualization to increase operational efficiency, reduce capital expenditures and improve business agility. Look no further than the server virtualization strategy pioneered by VMware for a glimpse into how this approach stands to benefit power in the data center.

Then ask yourself: why is every other layer of the compute stack virtualized except power? Operating systems, desktops, apps, servers, storage and networks have all been virtualized for years, yet power has remained a costly and complex piece of the data center puzzle. In fact, “virtualize first” is the credo of practically every data center. To achieve the full value of virtualization, power needs to be a key component of any virtualization strategy.

As we’ve said before, power is the fourth and final pillar in an overarching Software Defined Data Center (SDDC) strategy.

Until power is added to the party, SDDC falls short of helping operators and providers take full advantage of the benefits that have been promised.

So what’s the holdup? Is there a data center anywhere that doesn’t need to reduce capital and operating expenses? Not one data center operator or provider we’ve spoken with wants to keep paying for, allocating and being hogtied to managing 15 kW per rack when workloads are running at less than 5 kW. None of them want to pay up to three times more than the power they are actually using warrants.

Luckily, more and more power and IT people are starting to think and act differently. When power and cooling consume approximately 40% of a data center’s operating budget, the status quo is hard to swallow. There are other options to consider. Take Software Defined Power (SDP), for example, which automatically identifies, aggregates and pools all sources of stranded power within a data center and routes energy on demand to racks, nodes, workloads or circuits in real time. Using machine learning and predictive analytics, cutting-edge SDP platforms reallocate power according to capacity and availability demands while reducing data centers’ capital and operational expenses by up to 50% annually.
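To make the pooling idea concrete, here is a deliberately simplified sketch of the core accounting: sum the stranded headroom (provisioned minus drawn power) across racks into a shared pool, then grant it out to new demands. The rack names, numbers and greedy largest-first policy are illustrative assumptions, not any vendor’s actual SDP algorithm.

```python
# Toy model of stranded-power pooling. All names and figures are
# hypothetical; real SDP platforms layer prediction and safety
# margins on top of accounting like this.

def stranded_power(racks):
    """Total unused headroom (kW) across all racks."""
    return sum(r["provisioned_kw"] - r["draw_kw"] for r in racks)

def allocate(pool_kw, requests):
    """Greedily grant requests from the pool, largest first.

    Returns (grants, leftover_kw)."""
    grants = {}
    for name, need_kw in sorted(requests.items(), key=lambda kv: -kv[1]):
        grant = min(need_kw, pool_kw)
        grants[name] = grant
        pool_kw -= grant
    return grants, pool_kw

# Three racks provisioned at 15 kW each but drawing roughly 5 kW --
# the over-provisioning scenario described above.
racks = [
    {"name": "rack-1", "provisioned_kw": 15.0, "draw_kw": 4.2},
    {"name": "rack-2", "provisioned_kw": 15.0, "draw_kw": 4.8},
    {"name": "rack-3", "provisioned_kw": 15.0, "draw_kw": 5.0},
]

pool = stranded_power(racks)  # ~31 kW sitting idle
grants, leftover = allocate(pool, {"burst-job": 10.0, "new-rack": 25.0})
```

Even this naive accounting shows the point: roughly two-thirds of the provisioned capacity is stranded and could back new workloads instead of new utility contracts.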

Thanks to SDP, technology is no longer the biggest impediment to optimizing power in the data center. The remaining hurdle is changing decades-old behaviors and thought processes so that power people and the IT teams they support step up and place a priority on power virtualization.

After all, the demands for data center efficiency are only going to intensify as power-hungry megatrends, such as IoT, smart cities and Artificial Intelligence (AI), create and fuel new innovations in the digital age. Why wait until it’s too late when SDP solutions are on the market now? As with any transformational change, power virtualization is a decision that needs to happen sooner rather than later.