Average Power Use Per Server
We have been tracking this topic since the inception of Vertatique, and it is consistently our most-Googled post. We just updated it to better present the material and add newer information.
The Ars Technica folks published this informative breakdown of server power consumption in 2007, credited to "Intel and EXP Critical Facilities".
Jonathan Koomey's landmark 2007 analysis of global computing came up with three averages based on server class - volume: 183W, mid-range: 423W, high-end: 4,874W. Volume servers weighed heavily in Koomey's 2005 census, which pushed his overall average down to 257W.
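A fleet-wide figure like Koomey's is a count-weighted average of the per-class averages. A minimal sketch of that calculation, using his 2007 class averages and a purely illustrative class mix (the shares below are our assumption, not Koomey's actual census proportions):

```python
# Koomey's 2007 per-class averages (watts)
CLASS_WATTS = {"volume": 183, "mid-range": 423, "high-end": 4874}

# Hypothetical share of each class in the installed base -- illustrative
# values only, not the real 2005 census counts.
CLASS_SHARE = {"volume": 0.92, "mid-range": 0.07, "high-end": 0.01}

def weighted_average_watts(watts, shares):
    """Fleet-wide average power: sum of each class average times its share."""
    return sum(watts[c] * shares[c] for c in watts)

print(round(weighted_average_watts(CLASS_WATTS, CLASS_SHARE), 2))  # -> 246.71
```

Even with volume servers at 92% of the mix, the small high-end share pulls the average well above the 183W volume figure, which is why a census-weighted number lands between the class averages.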
A 2009 IBM analysis uses 425W for Power Usage at Average Load.
IBM Systems Magazine offered these guidelines in 2011:
Commodity x86 servers can be estimated reasonably by category. Average typical power consumption falls into the following ranges:
1U rackmount x86: 300 W-350 W
2U rackmount, 2 socket x86: 350 W-400 W
4U rackmount, 4 socket x86: average 600 W, heavy configurations 1000 W
Blades: average chassis uses 4500 W; divide by the number of blades per chassis (IBM BladeCenter* H holds 14 per chassis, so about 320 W per blade server)
To estimate within these ranges, consider that electrical consumption increases with higher clock-speed CPUs, larger numbers of memory cards such as DIMMs and physical disks, and with greater processor utilization.
Servers that don’t fall into one of the listed categories can have typical power estimated by multiplying the nameplate rating by 70 percent. This estimation is reasonable only for a large population of servers, such as a whole data center. It’s not accurate with any granularity, and certainly not at the single-server level.
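The 70-percent nameplate rule above only makes sense in aggregate, so a sketch of it naturally operates on a whole fleet. A minimal illustration; the function name and the sample fleet below are ours, not IBM's:

```python
def estimate_fleet_watts(nameplate_watts, derating=0.70):
    """Estimate typical draw for a large server population by
    multiplying each server's nameplate rating by 70 percent.
    Meaningful only across many servers, not for any single one."""
    return sum(rating * derating for rating in nameplate_watts)

# Hypothetical data center: 100 servers rated 500 W, 20 rated 1200 W
fleet = [500] * 100 + [1200] * 20
print(estimate_fleet_watts(fleet))  # 74,000 W nameplate x 0.70
```

The per-server error of the 70-percent heuristic can be large in either direction; it is the averaging over a big population that makes the total usable for capacity planning.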
All these numbers are for the servers themselves. Assuming a PUE of 2.0, the total datacenter energy consumption per server is double these numbers.
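That doubling is just the per-server draw multiplied by the facility's PUE (total facility power divided by IT power). A quick sketch using IBM's 425 W average-load figure; the function name is our own:

```python
def facility_watts_per_server(server_watts, pue=2.0):
    """Total facility power attributable to one server:
    the server's own draw times Power Usage Effectiveness (PUE)."""
    return server_watts * pue

print(facility_watts_per_server(425))       # -> 850.0
print(facility_watts_per_server(425, 1.2))  # ~510 W in an efficient facility
```

The second call shows why PUE matters: the same 425 W server costs the facility far less total power when cooling and distribution overhead are low.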
The equation is being shifted by new technologies, ranging from small-scale servers that typically run under 100W to new volume servers that offer 500+ CPUs consuming less than 2kW in total.