Average Power Use Per Server

We have been tracking this topic since the inception of Vertatique, and it is consistently our most-Googled post. We have just updated it to better present the material and add newer information.

The ars technica folks published this informative breakdown of server power consumption in 2007, credited to "Intel and EXP Critical Facilities".

Jonathan Koomey's landmark 2007 analysis of global computing came up with three averages, based on server class: volume, 183W; mid-range, 423W; high-end, 4,874W. Volume servers weighed heavily in Koomey's 2005 census, which pushed his overall average down to 257W.
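To see how the class mix drives that blended number, here is a minimal sketch of the weighted-average arithmetic. The per-class wattages are the figures quoted above; the installed-base counts are purely hypothetical placeholders, not Koomey's census data.

```python
# Weighted average of per-class server power (Koomey-style blending).
# Class wattages are from the 2007 analysis cited above; the unit counts
# are hypothetical placeholders, NOT Koomey's actual 2005 census figures.
class_watts = {"volume": 183, "mid-range": 423, "high-end": 4874}
class_counts = {"volume": 950_000, "mid-range": 45_000, "high-end": 5_000}  # hypothetical

total_power = sum(class_watts[c] * class_counts[c] for c in class_watts)
total_units = sum(class_counts.values())
blended_avg = total_power / total_units

print(f"Blended average: {blended_avg:.0f} W per server")
# Because volume servers dominate the count, the blended average lands
# far closer to 183 W than to the high-end figure.
```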

A 2009 IBM analysis uses 425W for Power Usage at Average Load.

IBM Systems Magazine offered these guidelines in 2011:

Categorically, commodity x86 servers can be estimated reasonably. Average typical power consumption for servers ranges in the following categories:
1U rackmount x86: 300 W-350 W
2U rackmount, 2 socket x86: 350 W-400 W
4U rackmount, 4 socket x86: average 600 W, heavy configurations 1000 W
Blades: average chassis uses 4500 W; divide by number of blades per chassis (IBM BladeCenter* H is 14 per chassis, so ~320 W per blade server)
To estimate within these ranges, consider that electrical consumption increases with higher clock-speed CPUs, larger numbers of memory cards such as DIMMs and physical disks, and with greater processor utilization.
Servers that don’t fall into one of the listed categories can have typical power estimated by multiplying the nameplate rating by 70 percent. This estimation is reasonable only for a large population of servers, such as a whole data center. It’s not accurate with any granularity, and certainly not at the single-server level.
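As a rough sketch of how these guidelines might be applied in an inventory script, the function below looks up a per-category figure and falls back to the 70-percent-of-nameplate rule. The category midpoints and names are my own illustrative assumptions, not an official IBM formula.

```python
# Rough per-server power estimate based on the IBM Systems Magazine
# guidelines quoted above. Category midpoints and identifiers are
# illustrative assumptions, not an IBM-published table.
CATEGORY_WATTS = {
    "1u_x86": 325,          # midpoint of 300-350 W
    "2u_2socket_x86": 375,  # midpoint of 350-400 W
    "4u_4socket_x86": 600,  # average; heavy configurations run ~1000 W
    "blade": 4500 / 14,     # BladeCenter H chassis / 14 blades, ~321 W
}

def estimate_typical_watts(category=None, nameplate_watts=None):
    """Return an estimated typical draw in watts for one server."""
    if category in CATEGORY_WATTS:
        return CATEGORY_WATTS[category]
    if nameplate_watts is not None:
        # Fallback: 70% of nameplate. Reasonable only across a large
        # population of servers, not at single-server granularity.
        return 0.7 * nameplate_watts
    raise ValueError("Need either a known category or a nameplate rating")

print(estimate_typical_watts("2u_2socket_x86"))            # 375
print(estimate_typical_watts(nameplate_watts=800))         # 560.0
```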

All these numbers are for the servers themselves. Assuming a PUE of 2.0, the total datacenter energy consumption per server is double these numbers.
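Expressed as a quick sketch, using the PUE of 2.0 assumed above and two of the per-server figures already cited:

```python
# Facility-level draw = server draw * PUE. A PUE of 2.0 means the data
# center as a whole consumes twice what the IT equipment itself does.
PUE = 2.0

for label, server_watts in [("Koomey 2005 blended average", 257),
                            ("IBM 2009 average load", 425)]:
    print(f"{label}: {server_watts} W IT -> {server_watts * PUE:.0f} W facility")
```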

The equation is being shifted by new technologies, ranging from small-scale servers that typically run under 100W to new volume servers that offer 500+ CPUs consuming less than 2kW in total.

Server running costs

Thanks for the information. Our engineers have a rough rule of thumb that it costs double a server's running cost to extract the heat. I have used the information here and that rule of thumb to create a virtualisation calculator.

Check it out here:

Rated Vs. Actual Power Consumption

The actual power consumption of a computer during normal use can be well below its manufacturer's rating. Alex Bischoff of open4energy measured the actual power consumed by his laptop over a week. Its average consumption of ~30W was 46% of the 65W rating of the unit's power supply.

This prompted me to take another look at Koomey. He derives typical power use by factoring servers' maximum measured power by 25%-66%, depending on server size. This reinforces the importance of actually measuring our real-world consumption.
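Here is a minimal sketch of that derating step. The specific factors below are placeholders within the 25%-66% range cited above, not Koomey's published per-class values.

```python
# Typical power ~= maximum measured power * derating factor, where the
# factor falls between 0.25 and 0.66 depending on server class (per the
# Koomey discussion above). Sample factors here are placeholders.
def typical_watts(max_measured_watts, factor):
    if not 0.25 <= factor <= 0.66:
        raise ValueError("factor outside the 25%-66% range cited above")
    return max_measured_watts * factor

# The laptop example above follows the same logic: ~30 W measured against
# a 65 W supply rating is a factor of about 0.46.
print(typical_watts(65, 0.46))    # ~30 W
print(typical_watts(1000, 0.25))  # 250 W for a hypothetical larger box
```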

Energy cost to run a server

Here's a summary of an email discussion on the use of average server consumption numbers in estimating virtualization benefits.

Using Koomey, a 427W server running 24/7 would directly consume 3,741 kWh of electricity annually, or ~$400 at the USA commercial pricing average of $0.107/kWh. Factoring in the cooling load at 1X (2.0 PUE) puts the annual energy cost per server at ~$800.
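The arithmetic behind those figures, as a small sketch; the wattage, PUE, and tariff are simply the values quoted above:

```python
# Annual energy and cost for one server running 24/7.
HOURS_PER_YEAR = 24 * 365          # 8,760 hours
server_watts = 427                 # Koomey-based figure used above
pue = 2.0                          # cooling/overhead at 1x the IT load
price_per_kwh = 0.107              # US commercial average used above

it_kwh = server_watts * HOURS_PER_YEAR / 1000   # ~3,741 kWh
it_cost = it_kwh * price_per_kwh                # ~$400
facility_cost = it_cost * pue                   # ~$800

print(f"IT-only: {it_kwh:,.0f} kWh, ${it_cost:,.0f}/yr")
print(f"With PUE {pue}: ${facility_cost:,.0f}/yr")
```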

The $400 figure shows up in a Gartner press release, where an ambiguous use has caused some confusion.

The release starts off by saying, "For example, removing a single x86 server will result in savings of more than $400 a year in energy costs alone." This assumes no significant offsetting increase from whatever is installed to replace that server's function, an assumption that is unlikely to hold.

The release continues, "server rationalization will lower energy costs, typically more than $400 per server, per year." I imagine the author meant that total server energy cost is $400 (an unnecessary repeat of the earlier statement), but it can be read as implying that the energy-cost reduction from rationalization is $400 per server, per year.

So how might we calculate the energy impact of server rationalization? Let's assume we replace ten 427W servers with one 1030W virtualization server and use a PUE of 2.0. 8540W gets reduced to 2060W, saving 6480W or 76%. So Gartner's implication that removing a single server will save 100% of its energy costs would apply to very few rationalization scenarios.
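The same scenario as a sketch in code, using only the numbers already given above:

```python
# Consolidating ten 427 W servers onto one 1030 W virtualization host,
# with facility overhead at PUE = 2.0 (the scenario described above).
PUE = 2.0
before_w = 10 * 427 * PUE   # 8,540 W facility draw
after_w = 1 * 1030 * PUE    # 2,060 W facility draw
saved_w = before_w - after_w

print(f"Before: {before_w:.0f} W, after: {after_w:.0f} W")
print(f"Saved: {saved_w:.0f} W ({saved_w / before_w:.0%})")
# ~6,480 W saved, about 76% -- not the 100% a casual reading of the
# Gartner release might suggest.
```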

These illustrations are based on equipment specifications and industry averages. As always, actual power loads are a function of real-world equipment configurations and operating conditions. It's critical to establish a baseline before planning a server rationalization project.

Thanks to open4energy for alerting me to the potential confusion with the Gartner release.
