Feature


How Flexible Is Your Data Center Cooling?

The steady upward trend of equipment power density in recent years may soon exceed the limitations of traditional cooling methods.

When designing data center cooling systems, one important factor is the power density within each IT cabinet enclosure. The steady upward trend of equipment power density in recent years, with average and peak values consistently increasing, may soon push densities to levels that exceed the limits of traditional cooling methods and require nontraditional cooling systems. Data center developers and designers should be exploring high-density cooling options to accommodate these trends, especially providers whose prospective tenants may require provisions for high-density cooling in new lease agreements.

Some high-density cooling solutions require significant infrastructure modifications; however, many can be implemented with minimal or no added cost. These solutions give owners the flexibility to accommodate high-density loads as needed without the capital costs of infrastructure upgrades. This article reviews available high-density cooling technologies, the limitations of each, and the modifications owners and designers can incorporate to accommodate them.

Determining When to Implement High-Density Solutions

Before determining which cooling equipment to use, designers must identify when IT equipment power density is considered “high” and when that density requires special accommodations. The Uptime Institute 2020 Data Center Industry Survey found the industry average rack density to be 8.4 kW, with peak densities ranging from 10-19 kW. While the definition of high density varies, the term is generally applied to rack densities of 20 kW or more.

A single high-density rack is unlikely to require significant modifications to the data center cooling system, and the same is true of multiple high-density racks located remotely from one another. Localized hot spots created by high-density racks can typically be resolved with proper floor tile management or containment strategies rather than a change in cooling technology. Larger HVAC cooling system modifications are generally required when multiple high-density racks are located in close proximity, resulting in insufficient air distribution or a local lack of HVAC capacity.

Evaluating Cooling Technology Options: Traditional Air-Cooled Equipment, Close-Coupled Air-Cooled Equipment, and Liquid-Cooled Equipment

Traditional data center cooling systems supply cool air to the front face of the IT equipment racks from remote-mounted air-handling equipment. The IT equipment pulls the air through the rack, where it absorbs the heat the equipment generates before returning to the HVAC system. The amount of air the IT equipment requires fluctuates based on the temperatures measured within the equipment. As the compute load rises, one of two things will occur: Either the IT fans will ramp up to provide more airflow to reject the added heat, or the temperature of the air leaving the equipment will increase. If equipment temperatures rise beyond their maximum set point, the equipment will deactivate to prevent damage. This deactivation is controlled by manufacturer-provided, proprietary cooling control algorithms that typically cannot be manipulated. Simply put, there are limitations on IT airflow and temperature differentials that are unknown to data center designers and vary by equipment.

To accommodate the unknown IT equipment cooling requirements, designers use generally accepted industry standards for sizing HVAC equipment. Traditional data center cooling designs assume an average leaving air temperature and temperature differential across the IT equipment. Traditional cooling systems often mitigate the impact of high-density loads by reducing the supply air temperature; however, this consumes more HVAC energy, which places a practical limit on what can be air-cooled without significant energy costs. The traditional perimeter-based cooling approach that supplies cool air into an underfloor plenum or directly into the space can support average rack densities of approximately 10-12 kW with proper containment and control strategies. An occasional high-density rack will not require HVAC changes, but traditional cooling technologies cannot support concentrations of high-density equipment.
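The practical limit can be illustrated with the sensible-heat relationship behind these sizing assumptions. The sketch below is a rough illustration only: the 20°F (roughly 11°C) temperature rise and the rack loads are assumed values rather than figures from the article, and the 1.08 constant is the familiar approximation for standard air.

```python
# Approximate airflow needed to air-cool a rack at a given temperature rise.
# Sensible heat (standard air): q [BTU/h] ~= 1.08 x CFM x delta-T [deg F]

BTU_PER_KW = 3412  # 1 kW = 3,412 BTU/h

def required_cfm(rack_kw: float, delta_t_f: float) -> float:
    """Airflow (CFM) required to remove rack_kw at a delta_t_f air temperature rise."""
    return rack_kw * BTU_PER_KW / (1.08 * delta_t_f)

# Assumed 20 deg F (about 11 deg C) rise across the IT equipment
for rack_kw in (10, 20, 30):
    print(f"{rack_kw} kW rack: ~{required_cfm(rack_kw, 20):,.0f} CFM")
# Roughly 1,580 CFM at 10 kW grows to roughly 4,740 CFM at 30 kW -- airflow
# (and fan energy) scales directly with rack power, which is what caps
# traditional air cooling.
```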


FIGURES 1A & 1B: Examples of rear door heat exchangers.
Images courtesy of HED


Close-coupled cooling options support higher rack densities while still using standard air-cooled IT equipment. These technologies are designed to support larger IT equipment temperature differentials and can increase local rack airflow without the energy penalties incurred by traditional systems. Moving the cooling equipment closer to the heat load reduces fan power and allows more precise rack-level control. Close-coupled equipment requires water or refrigerant lines to provide heat rejection from the units located on the data hall floor. Rear door heat exchangers, in-row coolers, and above-rack coolers are available from multiple manufacturers with a wide range of options to accommodate unique cooling challenges. When operating with proper air and water conditions, this type of equipment can support rack densities of 50 kW or more.

Row- and rack-based cooling solutions require a connection to a chilled or condenser water heat rejection system to remove the heat generated by the IT equipment. See Figures 1A and 1B for examples of rear door units with underfloor and overhead pipe installations. Depending on the operating water temperatures, water quality, and pumping control strategy, an additional piece of equipment called a cooling distribution unit (CDU) may be required. A CDU provides a physical separation between the facility heat rejection loop and the water distributed to the racks. A CDU may contain a heat exchanger, pumps, and controls for the connected equipment. Where a water-based heat rejection system is impractical or not available, manufacturers offer direct expansion CDU options.
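The water side of a rear door or CDU-served loop can be sized with the same kind of sensible-heat arithmetic. The sketch below assumes a 50 kW rack and an 18°F water temperature rise; both values are illustrative, and the 500 constant is the standard water-side approximation in U.S. units.

```python
# Approximate water flow needed to carry a rack's heat in a rear door or CDU loop.
# Sensible heat (water): q [BTU/h] ~= 500 x GPM x delta-T [deg F]

BTU_PER_KW = 3412

def required_gpm(rack_kw: float, delta_t_f: float) -> float:
    """Water flow (GPM) needed to remove rack_kw at a delta_t_f water temperature rise."""
    return rack_kw * BTU_PER_KW / (500 * delta_t_f)

# Assumed 50 kW rack with an 18 deg F (10 deg C) water-side temperature rise
print(f"~{required_gpm(50, 18):.1f} GPM per 50 kW rack")  # roughly 19 GPM
# A CDU's heat exchanger and pumps are selected around this secondary-loop flow
# while the facility (primary) loop stays hydraulically separate.
```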

Legacy facilities with chilled or condenser water systems can be expanded to accommodate rack- and row-based cooling systems by adding taps and valves in appropriate locations. Facilities served by traditional direct expansion cooling units may require the addition of a water-based cooling system or the use of CDUs with direct expansion heat rejection.

There are two main forms of liquid-based cooling in data centers: conductive and immersive. Both cooling solutions can support equipment densities of 100 kW per rack or higher.

Conductive solutions, also referred to as direct liquid cooling (DLC), transfer heat directly from the compute processing equipment to a cold plate mounted on the IT equipment. The cooling fluid is circulated through the cold plate, which prevents direct contact between the fluid and the IT equipment. Solutions range from pre-piped, fully populated racks to individual servers that are liquid-cooling ready. Standard IT equipment not originally intended for liquid cooling may also be retrofitted with cold plates.

Most DLC systems require a combination of direct liquid cooling and heat rejection via traditional air-cooled methods. Only a portion of the IT equipment components reject heat to the liquid system, so the remainder of the load must be cooled via air. The remaining air-cooled load typically represents 20%-50% of the overall rack cooling load and is handled by traditional cooling methods. The traditional cooling system and the direct liquid cooling system work in parallel to provide the required high-density cooling in a form that is familiar to data center operators. See Figure 2 for a schematic of a hybrid air- and water-cooled system. Similar to close-coupled air-cooled solutions, DLC equipment can be easily incorporated into an existing chilled or condenser water system. Legacy facilities with direct expansion, perimeter-mounted computer room air conditioning (CRAC) units will require the addition of a water-based cooling system.
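As a rough illustration of this hybrid split, the sketch below divides an assumed 80 kW DLC rack into its liquid- and air-cooled portions; the rack size and the 30% air-side share are hypothetical, chosen from within the 20%-50% range noted above.

```python
# Rough split of a DLC rack's load between the liquid loop and the room air system.

BTU_PER_KW = 3412

def dlc_split(rack_kw: float, air_fraction: float, air_delta_t_f: float = 20.0):
    """Return (liquid_kw, air_kw, residual_cfm) for a hybrid DLC rack.

    air_fraction: share of the rack load still rejected to air (0.2-0.5 is the
    typical range cited above); air_delta_t_f is an assumed air-side rise.
    """
    air_kw = rack_kw * air_fraction
    liquid_kw = rack_kw - air_kw
    residual_cfm = air_kw * BTU_PER_KW / (1.08 * air_delta_t_f)
    return liquid_kw, air_kw, residual_cfm

liquid_kw, air_kw, cfm = dlc_split(80, 0.30)
print(f"Liquid loop: {liquid_kw:.0f} kW, air system: {air_kw:.0f} kW (~{cfm:,.0f} CFM)")
# The cold plates handle roughly 56 kW while the existing room cooling still must
# supply airflow for the remaining roughly 24 kW -- the two systems work in parallel.
```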

Immersive cooling technologies circulate a mineral oil or dielectric fluid throughout the IT equipment to provide heat rejection. This technology places the IT equipment directly in contact with the cooling fluid. IT equipment must be specifically designed for this type of cooling approach — standard IT equipment does not provide this functionality without modification.

Immersive cooling represents the biggest departure from traditional methods as a potential solution to increased equipment power density. The equipment form factor, operation, and maintenance requirements all differ from traditional systems. Immersive cooling presents many advantages over traditional cooling equipment, but until it is more widely utilized, it will continue to be perceived as a specialty cooling system application.

Liquid-based cooling systems can reject far more heat for a given volume of cooling fluid than their air-cooled counterparts because water provides roughly 3,500 times the heat-carrying capacity of air. The water temperatures used by liquid-cooled equipment may be warm enough that compressorized mechanical equipment is not required, further reducing capital costs and operating expenses. Although this results in highly efficient cooling systems, many data center developers have been slow to adopt these technologies due to the nonstandard IT equipment required.
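The roughly 3,500-times figure follows from comparing the volumetric heat capacities of the two fluids, as the short calculation below shows. The property values are typical room-condition approximations, not data from the article.

```python
# Compare how much heat a given volume of water can carry versus the same volume of air.

# Approximate fluid properties near room conditions
RHO_WATER, CP_WATER = 998.0, 4186.0   # kg/m^3, J/(kg*K)
RHO_AIR, CP_AIR = 1.2, 1006.0         # kg/m^3, J/(kg*K)

water_j_per_m3_k = RHO_WATER * CP_WATER  # ~4.2e6 J/(m^3*K)
air_j_per_m3_k = RHO_AIR * CP_AIR        # ~1.2e3 J/(m^3*K)

print(f"Water carries ~{water_j_per_m3_k / air_j_per_m3_k:,.0f}x more heat per unit volume")
# Prints approximately 3,500 -- consistent with the figure cited above for water versus air.
```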


FIGURE 2: An example of a hybrid air- and water-cooled system.

Deciding Which Cooling Technology to Use and When

At this time, any of these technologies could be the right solution when the various pros and cons are considered. Air-cooled IT equipment remains the primary system of choice due to its widespread adoption, diverse manufacturer options, and operator familiarity. The IT and cooling systems required to support air-cooled equipment are easy to install and remove with minimal disruption. Currently, close-coupled air cooling can support multiple high-density racks in close quarters, but, as technology advances and new equipment is developed, liquid-based cooling as a solution for increased power density will warrant a closer look.

Data center owners and designers understand higher equipment densities are coming; however, the timeline for mass adoption of high-density equipment remains unclear. Current data center designs should provide enough flexibility to accommodate a portion of the IT load with high-density equipment. Recently, there has been a shift toward systems that provide inherent flexibility for high-density loads. Liquid-based systems, such as chilled and condenser water, can easily be adapted to serve future high-density cooling equipment, often with minimal or no added capital cost.

Advances in IT equipment utilization, artificial intelligence, data mining, augmented reality, and machine learning are all driving the continued upward trend in rack density. As conventional cooling systems approach the practical limits of operation, data centers must have the flexibility to incorporate high-density cooling solutions within current facility designs while maintaining reliability and redundancy. While the industry overall has not yet reached the need for specialized high-density equipment, compute-intensive workloads are continuing to increase, and data center designers, developers, and operators should be prepared to accommodate the cooling systems required in their next generation of facilities.


Martin Herbert, P.E., LEED AP BD+C, is an associate principal with HED and leads the firm’s mission critical mechanical engineering group. He has provided design solutions for some of the world’s largest data center providers as well as for enterprise data center customers in the financial, pharmaceutical, and retail industries. He has extensive knowledge of sustainable design practices and has worked on multiple Uptime-certified and LEED Gold- and Silver-certified projects. He is a graduate of the University of Texas, where he received his degree in architectural engineering.

