Many data-center workloads staying on premises, Uptime Institute finds

Another study finds the data center is far from dying. That is not a surprising conclusion from the Uptime Institute's annual data center survey. One trend that did stand out in the research, however, is that power efficiency has “flatlined” in recent years.

Uptime says big improvements in energy efficiency were achieved between 2007 and 2013 using mostly inexpensive or easy methods, such as simple air containment. But moving beyond those gains involves more difficult or expensive changes. Since 2013, improvements in power usage effectiveness (PUE) have been marginal, according to the group.
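PUE is the ratio of total facility energy to the energy delivered to IT equipment, so a value of 1.0 would mean every watt reaches the servers. A minimal sketch of the calculation, using hypothetical numbers rather than figures from the survey:

# Power usage effectiveness: total facility energy divided by IT equipment energy.
# The numbers below are hypothetical, for illustration; they are not Uptime survey figures.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE of 1.0 is ideal: all power reaches IT gear, none is lost to overhead."""
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,500 kWh to deliver 1,000 kWh to IT equipment has a PUE of 1.5,
# i.e., half a kilowatt-hour of cooling and power-distribution overhead per kWh of compute.
print(pue(1_500, 1_000))  # 1.5

Cheap fixes such as air containment lower the numerator at little cost; once those are exhausted, each further step toward 1.0 gets harder and more expensive, which is the flatline the report describes.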

Uptime puts the blame on ever-increasing power density. “There have been industry warnings about a meteoric rise in IT equipment rack power density for the past decade (at least),” the report states. “One reason for this prediction is the proliferation of compute-intensive workloads (e.g., AI, IoT, cryptocurrencies, and augmented and virtual reality), all of which drive the need for high-density racks. Our 2018 and 2019 surveys found that racks with densities of 20 kW and higher are becoming a reality for many data centers.”

At 20 kW per rack, air cooling is no longer viable, and direct liquid cooling and precision air cooling become more economical and efficient. The report goes on to say such high densities are not yet pervasive enough to affect most data centers, but the trend should not be ignored, either: the mean rack power density is rising steadily, from 2.4 kW per rack in 2011 to 8.4 kW per rack in 2019.
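Those two endpoints imply a compound annual growth rate of roughly 17%. A minimal sketch of that arithmetic; the extrapolation past 2019 is a hypothetical projection, not an Uptime figure:

# Compound annual growth rate implied by the survey's mean rack density endpoints.
start_kw, end_kw, years = 2.4, 8.4, 2019 - 2011

cagr = (end_kw / start_kw) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~17.0% per year

# Hypothetical extrapolation (not an Uptime projection): at that pace, the mean
# crosses the 20 kW threshold where air cooling breaks down around 2025.
density, year = end_kw, 2019
while density < 20:
    density *= 1 + cagr
    year += 1
print(year, round(density, 1))  # 2025 21.5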

Another issue facing data center operators is that power disruptions have contributed to more and bigger data center outages. “It is clear that outages occur with disturbing frequency, that bigger outages are becoming more damaging and expensive, and that what has been gained in improved processes and engineering has been partially offset by the challenges of maintaining ever more complex systems,” the report states.

The survey also found that operators are keeping hardware longer. In 2015, the typical refresh cycle was three years; by 2020, it had stretched to five. That trend is likely to continue amid COVID-19 uncertainty.

Data center still king as cloud adoption climbs

The survey found that on-premises infrastructure is “neither dead nor dying” despite the trend toward the public cloud. It concludes that “the enterprise-owned datacenter sector, while not necessarily the most innovative, will continue to be the foundation of enterprise IT for the next decade.”

In probing how enterprises use their data centers, the survey found that 58% of respondents said most of their workloads remain in corporate data centers, a share projected to decline to 54% over the next two years. Sixteen percent said they are using colocation facilities, while 12% of workloads are expected to migrate to the public cloud by 2022.

So why are enterprises so skittish about the cloud? According to the survey, a lack of visibility, transparency, and accountability in public cloud services is a major issue for enterprises with mission-critical applications. They want more insight into how cloud operators run their services, and with that visibility they would be more willing to use the public cloud.

Just 17% say their organization has adequate visibility and places mission-critical workloads in a public cloud. Another 10% say they do not have enough visibility but place critical workloads in the cloud anyway. Most tellingly, 21% say they would be more likely to run mission-critical workloads in a public cloud if they had greater visibility into the operational resiliency of the service.
