How Data Centers Can Become Greener


Tuesday, August 22, 2023

Data centers use significant amounts of electricity to power their thousands of servers. From the location of a data center to the placement of server racks, there are several actions that data center managers can take to improve the power usage effectiveness (PUE) of the data center.

The PUE of a data center is defined as the total amount of power delivered to the data center, divided by the amount of power used by the IT components. The lower the value, the more energy efficient the data center is. Of course, sourcing renewable power is an obvious first step. Still, other methods, such as increasing air inlet temperatures, optimizing power delivery, and utilizing the right system at the right time, can contribute to a greener data center.
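
As a minimal illustration of this definition, the short Python sketch below computes PUE from a facility's total power draw and its IT load; the wattages are made-up example figures, not measurements from any particular facility.

    def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
        """Power Usage Effectiveness: total facility power / IT equipment power."""
        return total_facility_kw / it_equipment_kw

    # Example: a facility drawing 1,500 kW in total, of which 1,000 kW reaches the
    # IT equipment (the rest goes to cooling, power conversion, lighting, and so on).
    print(pue(1500.0, 1000.0))  # 1.5 -- the lower the value, the more efficient the facility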

Operate at higher temperatures

With traditional air cooling, the temperature of the air entering the server (the inlet temperature) is maintained by Computer Room Air Conditioning (CRAC) units. Air conditioning is the largest single contributor to the PUE calculation, so reducing the amount of air conditioning significantly lowers both PUE and operating expenses (OPEX). Around the world, many data centers keep inlet temperatures too low. Operators can reduce power usage by raising inlet temperatures toward the manufacturer's recommended maximum. A recent survey of over 400 IT professionals and data center managers found a wide range of inlet temperatures, indicating that most IT administrators keep the inlet temperature well below the manufacturer's upper limit.

Capture heat at the source

CRAC is the most significant variable to optimize when lowering overall PUE, and liquid cooling solutions in particular can reduce a data center's PUE significantly. While the data center infrastructure may need to be modified or extended, the longer-term OPEX savings will outweigh the initial costs.

Liquid Cooling

Liquid cooling of CPUs and GPUs can greatly reduce the need for CRAC units in data centers and for moving large volumes of air. There are several ways to use liquid cooling to reduce or replace forced-air cooling.

Direct To Chip (DTC or D2C) Cooling

This method passes a cold liquid through a cold plate mounted on the hot CPU or GPU. Because a liquid removes and transports heat far more efficiently than air, the CPU or GPU can be kept within its thermal design power (TDP) envelope. This can lead to significant savings when scaled across the thousands of systems in a medium-to-large data center.

Server with D2C liquid cooling installed.

Rear Door Heat Exchanger (RDHx)

The rear door of the rack contains a liquid-filled heat exchanger and fans, which cool the hot server exhaust air before it re-enters the data center. The heated liquid must then be cooled before it is recirculated. This method keeps the data center air at a lower temperature, reducing the cooling load on the CRAC units and, in turn, the amount of electricity the data center needs.

Immersion Cooling

With immersion cooling, an entire server – or a group of servers – is submerged in a dielectric liquid. The close contact of the liquid with the hot CPUs, GPUs, and other components cools the servers efficiently, and the fans can be removed from the servers entirely. Some minor modifications must be made to a server before immersion, and an entire rack of servers can be cooled in this manner.

Immersion cooling of complete servers.

Hot and Cold Aisles

A significant amount of CRAC electricity can be saved if the hot and cold aisles are separated in the data center. When the data center is laid out with hot and cold aisles, inlet and exhaust air do not mix, allowing the cooling system to operate more efficiently. For adequate cooling, rows of racks should be installed so that the rears of the racks face each other, creating a hot aisle. Separating hot and cold aisles is therefore an important best practice when designing an energy-efficient data center.

Hot and cold aisles in a data center.

Optimize power delivery

Converting power from AC to DC generates heat. Because AC power is delivered to the data center, it must be converted to DC for the systems, and energy is lost with each conversion, contributing to the inefficiency of the data center. More efficient conversion wastes less power as heat that must then be removed from the system. Titanium power supplies are the most efficient option, offering 96% power efficiency; Platinum power supplies are slightly less efficient at 94%, and Gold power supplies offer a lower efficiency of 92%.

A power supply's efficiency is not flat across its output range. Most power supplies reach their maximum efficiency when running in the upper part of their rated capacity. This means that an 800-watt power supply delivering 400 watts (50% capacity) will be less efficient than a 500-watt power supply delivering the same 400 watts (80% capacity).
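
To make the arithmetic concrete, the sketch below uses the 96%, 94%, and 92% efficiency figures quoted above to compare how much power is lost as heat when delivering 400 watts of output; the server count used for scaling is an arbitrary example, not a figure from the article.

    # Power lost as heat = input power - output power, where input = output / efficiency.
    def wasted_watts(output_watts: float, efficiency: float) -> float:
        return output_watts / efficiency - output_watts

    output = 400.0  # watts delivered to the server
    tiers = {"Titanium": 0.96, "Platinum": 0.94, "Gold": 0.92}

    for name, eff in tiers.items():
        loss = wasted_watts(output, eff)
        print(f"{name} ({eff:.0%}): {loss:.1f} W lost as heat per supply")

    # Scaled across, say, 1,000 servers, the gap between Gold and Titanium is
    # roughly (34.8 - 16.7) W * 1,000, or about 18 kW of extra heat to generate and remove.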

Source green energy

A data center's energy source has the most significant impact on its carbon footprint and offers the most substantial opportunity to benefit the environment. Renewable energy programs for commercial customers include utility-provided generation, third-party power purchase agreements (PPAs), and renewable energy credits (RECs). Distributed renewable generation owned or controlled by the data center is optimal, but on-site renewable sources do not always satisfy data center energy demands. Fortunately, clean grid energy can make up the difference. On-site energy storage is also becoming increasingly effective and is coming down in cost as battery technology improves and scales.

Rethink site selection criteria

Large-scale data centers cost a lot of money to operate. For example, a single hyper-scale data center can demand 100 MW of power to keep servers, storage, and networking infrastructure performing as expected (enough to power 80,000 US households). In addition, while electronics use most of the energy consumed in a data center, cooling those electronics to maintain operating temperatures can consume 40% of facility energy.
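
As a rough back-of-the-envelope check, the figures above imply a PUE of about 1.67; the sketch below assumes, purely for simplicity, that everything not spent on cooling reaches the IT equipment.

    total_mw = 100.0              # hyper-scale facility from the example above
    cooling_mw = 0.40 * total_mw  # cooling consumes 40% of facility energy
    it_mw = total_mw - cooling_mw # assumption: the remaining 60 MW is IT load

    print(f"Implied PUE: {total_mw / it_mw:.2f}")  # about 1.67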

Building costs consist of the land value as well as the cost of construction, and construction prices vary by geography and region. Unlike a home or an office building, a data center has unique location requirements if it is to be considered "green" and deliver on agreed-upon Service Level Agreements (SLAs). Factors such as climate, energy pricing, the risk of natural disasters, water costs, and the cost of network bandwidth all contribute to the choice of data center location.

Data centers are critical to the world's economy. Many aspects of modern life depend on them, and they consume more electricity than ever before to deliver the services everyone uses. While the work per watt of the CPU continues to increase, overall data center power consumption still needs to come down. Data center operators can take several actions, including running systems at warmer temperatures, configuring the data center with hot and cold aisles, and sourcing green energy. With just a few steps, data centers can reduce their PUE, lowering their operating expenses and shrinking their CO2 footprint for years to come.

By: DocMemory
Copyright © 2023 CST, Inc. All Rights Reserved
