The internet is a vast network of interconnected computers. Hundreds of thousands of servers store, locate, and retrieve data, sometimes from halfway around the world and often thousands of times per second.
All of this requires massive amounts of power, more than 1% of the electricity generated worldwide. Google, for example, operates about 36 data centers, each housing thousands of servers. One of them, in Oregon, draws over 100 megawatts [MW] of electrical power.
All this computing generates a great deal of heat, so, to run efficiently and reliably, data centers use air-conditioning to keep the temperature around 21 °C [70 °F]. At the Google data center in Oregon, one of the most efficient in the world, 16.7 MW of the 100 MW total power consumption goes to air-conditioning.
A new study by researchers at the University of Toronto Scarborough [UTSC] suggests that those thermostats might stand to be turned up a little.
Using data from centers run by Google, Los Alamos National Laboratory, and others, as well as from tests in their own labs at UTSC, the researchers found little correlation between temperature and system performance. The data suggested that moderate increases in temperature had little or no impact on reliability.
In a paper titled “Temperature Management in Data Centers: Why Some (Might) Like It Hot,” Bianca Schroeder, an assistant professor of computer science at UTSC, and her colleagues describe how an increase of just 1 °C could save up to 5% of total energy consumption. They also argue that some data centers could raise the temperature even further without sacrificing anything but electricity costs. “Most organizations could run their data centers hotter than they currently are without making significant sacrifices in system reliability,” said Schroeder.
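The arithmetic behind these headline figures can be sketched in a few lines. The sketch below uses only the numbers quoted above (the 100 MW total draw and 16.7 MW cooling load of the Oregon center, and the paper's up-to-5%-per-degree estimate) and makes the simplifying assumption that the savings scale linearly with each additional degree:

```python
# Back-of-the-envelope estimate using the article's figures.
# Assumption: savings scale linearly per degree, which is a
# simplification of the paper's "up to 5% per 1 °C" claim.

TOTAL_POWER_MW = 100.0     # total draw of the Oregon data center
COOLING_POWER_MW = 16.7    # portion used for air-conditioning
SAVINGS_PER_DEGREE = 0.05  # up to 5% of total energy per +1 °C

cooling_fraction = COOLING_POWER_MW / TOTAL_POWER_MW
print(f"Cooling overhead: {cooling_fraction:.1%}")  # 16.7%

for delta_c in (1, 2, 3):
    saved_mw = TOTAL_POWER_MW * SAVINGS_PER_DEGREE * delta_c
    print(f"Raising the setpoint by {delta_c} °C could save up to "
          f"{saved_mw:.0f} MW")
```

For a 100 MW facility, a single degree at the quoted 5% rate would free up about 5 MW, roughly a third of what the Oregon center spends on cooling in total.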