
IT Operations Green Strategy: Reducing Cooling Requirements

September 21, 2008

Executive Summary

Rising energy costs and new corporate strategies targeting “green” initiatives are causing enterprises to scrutinize data center power strategies. Optimizing data center cooling presents the single largest area of opportunity for IT to save energy.
This is the second post in a three-part series that focuses on improving energy efficiency in the data center and reducing the environmental impact of IT operations. Key topics include:
» The projected growth in cooling capacity as a result of high-density server racks.
» Discussion of the cooling burden factor.
» Best practices such as cool aisle/hot aisle layouts that lower cooling requirements and save energy.
“Green” is emerging as the corporate buzzword for 2008. The enterprise data center is a key area of focus, with vendors, governments, analysts, environmentalists, and even utility companies all on board when it comes to maximizing energy efficiency.

Optimization

Rising energy costs and corporate strategies targeting “green” initiatives are forcing enterprises to scrutinize their data center power strategies, and cooling is the single largest area of opportunity for IT to save energy. Because newer equipment packs more power and heat into less space, cooling and air-conditioning-related energy costs now surpass the cost of powering the servers themselves.

Key Considerations

Future data centers will require considerably more power and cooling capacity than is available in most existing environments. Next-generation data centers tend to be consolidated centers: servers and processing that were once distributed across multiple sites are now in one place. Computing requirements for enterprises small and large are also growing, which means more processing power, more servers, more energy consumption, and more heat production. While advancements in technology have allowed for greater processing power in smaller components, corresponding innovations in energy efficiency and cooling have been slow to follow.

Cooling Capacity Requirements Growing with Higher Power Density
Data centers three or more years old were designed around an assumed heat load in the range of 40 to 70 watts per square foot. These environments are not equipped to handle the heat loads generated by many newer high-density racks (such as blades), which can push room-level power density to 200 watts per square foot or more (and over 400 watts per square foot over the racks’ own footprint).
Projections from vendors such as Cisco suggest that the cooling capacity required for extremely high-density environments could climb even higher, and according to HP, data center power density is increasing at a rate of 15% to 20% per year.
What’s more, the increased power density of modern server racks causes server-room temperatures to rise 10 times faster than previously experienced, so IT decision makers must place increased importance on reliable, uninterruptible cooling systems.
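
As a rough illustration of what that growth rate implies, the short sketch below compounds today’s design heat loads forward a few years. It is a back-of-the-envelope projection only: the starting densities and the five-year horizon are illustrative assumptions, while the 15% to 20% growth range is the HP figure cited above.

    # Back-of-the-envelope projection of data center power density,
    # assuming the cited 15%-20% annual growth rate holds steady.

    def project_density(current_w_per_sqft: float, annual_growth: float, years: int) -> float:
        """Compound a watts-per-square-foot figure forward by a number of years."""
        return current_w_per_sqft * (1 + annual_growth) ** years

    for start in (70, 200):              # legacy design load vs. a high-density room today
        for growth in (0.15, 0.20):      # HP's cited growth range
            future = project_density(start, growth, years=5)
            print(f"{start} W/sq ft at {growth:.0%}/yr -> about {future:,.0f} W/sq ft in 5 years")

Even at the low end of that range, a legacy 70 watts per square foot design load roughly doubles within five years, which is why cooling capacity should be planned several years ahead of current demand.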

Calculating the Cooling Demand

According to estimates from American Power Conversion (APC), each watt of electricity consumed by IT infrastructure carries an associated “burden factor” of between 1.2 watts and 1.7 watts of power consumption for cooling. Energy cost figures from the Lawrence Berkeley National Laboratory (Berkeley Lab) suggest that the average high-density server rack drawing 20 kW of power consumes more than $17,000 per year in electricity (not including air conditioning). HP Labs, by comparison, publishes an estimate of $11,000 per year in electricity for a 13 kW server rack. Applying the APC burden factor, these estimates translate into an annual cooling cost of anywhere from $13,200 to $28,900 per high-density server rack.

Cool Hard Facts

Worst-case scenarios and poor air distribution can result in even higher costs. In a detailed study of 19 large data centers, The Uptime Institute found that on average only 40% of the cold air supplied to the data center was actually going to cool servers; the rest was wasted. While the Institute suggested a target ratio of 1.25 watts of cooling capacity per watt of heat dissipated from equipment, actual case study data suggested that many data centers were operating with a cooling ratio closer to 2.6 watts per watt.

The average high density server rack can cost anywhere between $13,200 and $28,900 per year for cooling alone.
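
The arithmetic behind these figures is simple enough to sanity-check. The sketch below multiplies a rack’s annual electricity bill by APC’s 1.2 to 1.7 burden factor range; the only inputs are the Berkeley Lab and HP Labs cost figures quoted above, and the code itself is purely illustrative.

    # Rough cooling-cost estimate per rack using the APC "burden factor":
    # each watt of IT load implies roughly 1.2 to 1.7 watts of cooling load,
    # so annual cooling cost scales the same way as annual electricity cost.

    BURDEN_LOW, BURDEN_HIGH = 1.2, 1.7

    def cooling_cost_range(annual_it_electricity_cost: float) -> tuple:
        """Return the (low, high) annual cooling cost for a rack."""
        return (annual_it_electricity_cost * BURDEN_LOW,
                annual_it_electricity_cost * BURDEN_HIGH)

    # Figures quoted above: HP Labs' 13 kW rack (~$11,000/yr) and
    # Berkeley Lab's 20 kW rack (~$17,000/yr), electricity only.
    for label, it_cost in (("13 kW rack", 11_000), ("20 kW rack", 17_000)):
        low, high = cooling_cost_range(it_cost)
        print(f"{label}: roughly ${low:,.0f} to ${high:,.0f} per year for cooling")

The low end of the HP Labs figure and the high end of the Berkeley Lab figure reproduce the $13,200 to $28,900 spread quoted above.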

Some Methods Lead to Waste

Although power and cooling have certainly been top of mind among IT professionals, the methods used to cope with these issues have generally been short-sighted. While 50% of IT shops have been consolidating servers or optimizing heat distribution, two-thirds of those polled addressed the problem by increasing data center size or simply throwing more power at it. Both of the latter strategies can quickly snowball out of control and lead to unnecessary spending.

Before purchasing more power or expanding into larger facilities, enterprises should assess all opportunities for improving airflow management and reducing the heat emissions of the existing environment.

Improvement & Optimization

Air distribution is key when it comes to efficient cooling. Proper data center air management minimizes the mixing of hot air emitted from equipment and the cooling air supplied to it. Efficient air distribution can help increase the density capacity of the data center (watts/sq. ft), reduce heat-related malfunctions, and lower operating costs.

Find the Right Solution

Tailor the data center cooling solution to the current and projected power per server rack and the type of infrastructure used. For example, a rack housing 21 2U dual-core servers might draw only six or seven kW of power, whereas a high-performance, high-density blade rack can draw 16 to 20 kW. Each scenario requires a different cooling strategy, and many newer racks and servers ship with built-in cooling innovations that make the decision much easier for IT leaders. The table below maps power per rack to a suggested strategy; a simple lookup sketch follows the table.

Power per Rack (kW)        Cooling Strategy

0 to 5        Vent rack exhausts. Install closed cabinets and blanks to prevent recirculation.

5 to 10        More than 2 perforated tiles needed per rack. Consider Spot Cooling (i.e. portable air-conditioning that can be placed where it’s needed most).

10 to 15        Cool aisle/hot aisle layout with fully ducted exhaust into hot aisle to prevent recirculation.

15 to 20        Consider in-row cooling or liquid cooling racks.
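
As a trivial planning aid, the table above can be encoded as a lookup so that capacity-planning scripts can flag racks whose projected draw has outgrown the current cooling approach. The thresholds below come straight from the table; the function itself, and the fallback message for racks above 20 kW, are illustrative assumptions.

    # Map projected power per rack (kW) to the cooling strategy suggested
    # in the table above. Thresholds are the table's; the code is a sketch.

    def cooling_strategy(rack_kw: float) -> str:
        if rack_kw <= 5:
            return "Vent rack exhausts; use closed cabinets and blanking panels to prevent recirculation"
        if rack_kw <= 10:
            return "More than two perforated tiles per rack; consider spot cooling"
        if rack_kw <= 15:
            return "Cool aisle/hot aisle layout with fully ducted exhaust into the hot aisle"
        if rack_kw <= 20:
            return "Consider in-row cooling or liquid-cooled racks"
        return "Above 20 kW: engage an HVAC specialist for a custom design"  # not from the table

    for kw in (4, 8, 13, 18):
        print(f"{kw} kW rack -> {cooling_strategy(kw)}")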

Cool Aisle/Hot Aisle

Pacific Gas and Electric (PG&E) suggests that poor airflow management can decrease the cooling capacity of a Computer Room Air Conditioning (CRAC) unit by 50% or more. A recognized approach to efficient air handling and airflow management is the cool aisle/hot aisle layout: rows of racks are arranged so that cold (air intake) and hot (heat exhaust) aisles alternate between them. The model is most effective when the aisles are enclosed, keeping the two airflows separate and allowing for more efficient cooling.
When implementing a cool aisle/hot aisle layout, consider the following:
» Obstructions to plenum. Since the under-floor and overhead plenums serve as both air ducts and wiring space, make sure airflow is not blocked by wiring trays, light fixtures, or conduits.
» Short-circuiting airflows. The term “short circuiting” refers to the mixing of hot exhaust air and cool intake air. Make sure cable cutouts and holes in the perimeter walls and the raised floors are properly sealed to prevent this. Also, use blanking panels to fill unused rack space and prevent air mixing.
» Ventilated racks. Specialized/custom rack products take cool aisle/hot aisle setups to the next level by supplying cool air directly to the rack air intake and drawing hot air out without letting it travel through the room at all.
» No raised floor. For smaller data centers that lack a raised floor, the same principles of enclosed hot and cool aisles can be used, but some cooling efficiency will be compromised.
» For up-flow units. CRAC units should be located near the end of a hot aisle and fitted with ducts that bring cool air to points above cool aisles as far away as possible from the CRAC.
» For down-flow units. CRAC units should be oriented to blow air down a cool aisle. Return vents should be located over the hot aisles using a dropped-ceiling return air plenum (or hanging ductwork return).
» Inappropriately oriented rack exhausts. All rack equipment must have a front-to-back airflow in order to fit into a cool aisle/hot aisle model.

Liquid Cooling

This is an old technology from the mainframe era that is coming back into vogue as servers test the limits of CRAC units. With liquid cooling, waste heat is transferred to a liquid (water) very close to the source, which makes it significantly more efficient than traditional air cooling, where waste heat diffuses into the surrounding air and the entire room must be cooled. What’s more, water can carry roughly 3,500 times as much heat as the same volume of air. Many vendors offer liquid cooling options that integrate cooling coils directly into the rack or server. One study from the Rocky Mountain Institute suggests that liquid cooling can reduce cooling energy costs by as much as 30% to 50% compared with standard air-based CRACs.
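
The 3,500-to-1 figure is a volumetric heat capacity comparison, and it can be reproduced from textbook property values. The quick check below uses approximate room-temperature properties for water and air; the exact constants are my own inputs rather than numbers from the original study.

    # Compare how much heat a cubic metre of water vs. air absorbs per degree,
    # using approximate room-temperature properties.

    WATER_DENSITY = 1000.0        # kg/m^3
    WATER_SPECIFIC_HEAT = 4186.0  # J/(kg*K)
    AIR_DENSITY = 1.2             # kg/m^3
    AIR_SPECIFIC_HEAT = 1005.0    # J/(kg*K)

    water_volumetric = WATER_DENSITY * WATER_SPECIFIC_HEAT  # J/(m^3*K)
    air_volumetric = AIR_DENSITY * AIR_SPECIFIC_HEAT        # J/(m^3*K)

    print(f"Water: {water_volumetric / 1e6:.2f} MJ/(m^3*K)")
    print(f"Air:   {air_volumetric / 1e3:.2f} kJ/(m^3*K)")
    print(f"Ratio: roughly {water_volumetric / air_volumetric:,.0f} to 1")

The result lands within a few percent of the 3,500 figure, which is why moving heat in water pipes rather than through room air pays off so dramatically.
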
IBM, Sun, HP, and Egenera all recently announced liquid cooling technologies that are expected to be more efficient than traditional forced air methods. U.S.-based Avista Corp., a Pacific Northwest utility, offers rebates (up to $3,600 per rack) to enterprises that install SprayCool-enabled servers from ISR Inc.

Airside/Waterside Economizers

Airside economizers bring in cool outside air when weather conditions permit, reducing the burden on air conditioning units. Data centers are particularly well suited to economizers because they run 24/7 and can therefore take advantage of seasonal and daily temperature fluctuations to cool the space efficiently. Depending on outside conditions, an economizer can reduce data center cooling costs by more than 60%, according to a recent report from PG&E. For outside air to be suitable for the data center, humidity levels must be controlled and filtration provided to remove particulates and contamination.
Waterside economizers use a similar principle, but provide cool water to data centers operating with chilled-water-cooled air conditioning. “Free” cooling is achieved by using the evaporative cooling capacity of a rooftop cooling tower during cold weather to generate chilled water.

Economizers are available as separate packaged units and as options on most CRAC units. In many cases they may be required by law: California’s Title 24 building code makes them mandatory for units with a capacity greater than 2,500 CFM and 75,000 Btu/hr. The current version of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Standard 90.1 recommends economizers, but also provides an exemption for certain cooling systems in exchange for more efficiently designed refrigeration.

Quick Wins

In retrofit situations, where comprehensive redesign is not an option, consider simple low-cost improvements to optimize existing air supply/return configurations.
» Look for air leaks. Leaking floor tiles, cable openings, and unfilled rack openings without blanking panels can cause mixing of hot and cool air and reduce airflow efficiency.
» Use flexible barriers where fixed ones are not available. Barriers made of clear plastic or other materials can cost-effectively separate intake and exhaust airflows and reduce mixing of hot and cool air.
» Reposition poorly placed overhead air supplies. Instead of looking to cool the entire server room (mixing hot and cool air), reposition overhead diffusers so that they expel cool air directly in front of racks where it can be drawn into the equipment.
» Reposition poorly placed thermostats. Place the thermostat in front of the equipment where air temperature is more indicative of equipment temperature. Alternatively, for servers that support SNMP heat sensors, actual equipment temperature can be pulled directly from the management software.
» Bake heat/cooling requirements into the buying decision. Include specific terms for heat output, cooling capacity, and energy efficiency when writing RFPs for new equipment. According to Ziff Davis, 29% of IT shops are already including power and cooling considerations in purchasing criteria. Also, look for systems that include system diagnostics tools and thermal management capabilities. While not critical for smaller enterprise installations, these capabilities allow the processor performance to be adjusted to match the processing needs, thus reducing energy consumption and cooling requirements.

» Control fan speeds based on need. Fans set to operate at maximum capacity regardless of data center temperature waste energy. Adjust the fan speed of CRAC air handlers to reduce the air volume supplied when equipment is running cooler (i.e. at part load). According to PG&E, small reductions in supplied air volume produce a non-linear reduction in fan power: a 20% drop in airflow volume results in a 45% to 50% reduction in fan energy consumption (see the worked example after this list).
» Optimize equipment placement within racks. According to the Rocky Mountain Institute, failure rates for server equipment are three times higher at the top of the rack than at the bottom because of rising heat. Place the most heat-producing units at the top of the rack, where their exhaust rises away from other equipment, and reduce equipment failures by placing the most heat-sensitive units toward the bottom.
» Have dedicated high-density areas. Data centers with a mixed server environment can improve cooling efficiency by limiting the location of high-density racks to the area of the room that has the highest cooling capacity. If no such area exists, it is possible to create one using spot coolers.
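
PG&E’s 45% to 50% figure is consistent with the fan affinity laws, under which fan power scales roughly with the cube of airflow. The short check below applies that cube-law approximation; the model is a standard HVAC rule of thumb I am assuming here, not a calculation taken from the original post.

    # Fan affinity-law check: power scales roughly with the cube of airflow,
    # so modest airflow reductions yield outsized energy savings.

    def fan_power_fraction(airflow_fraction: float) -> float:
        """Approximate fan power as a fraction of full-speed power."""
        return airflow_fraction ** 3

    for reduction in (0.10, 0.20, 0.30):
        remaining = fan_power_fraction(1 - reduction)
        print(f"{reduction:.0%} less airflow -> about {1 - remaining:.0%} less fan energy")

The 20% case lands at roughly 49%, squarely inside the 45% to 50% range PG&E reports.
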
Ask the Experts

In addition to the best practices outlined above, steer specific data center projects using benchmarking information and efficiency guidelines from the Berkeley Lab’s Data Center Energy Management Web site. Of particular interest is PG&E’s whitepaper, “High Performance Data Centers: A Design Guidelines Sourcebook.”
Large data centers and organizations building or expanding a facility should engage the services of an HVAC engineering firm or consultancy to determine the appropriate cooling strategy. In the days of mainframe-dominated data centers, HVAC engineers were involved in data center planning by default; however, the proliferation of standard x86 servers diminished the need for specialized heating and cooling knowledge. As heat and power issues loom large for data center managers, this knowledge is once again becoming critical. IT decision makers will need specific expertise that they cannot glean from freely available checklists and whitepapers. As an example, THINKresources.com lists some of the typical competencies required of HVAC experts.

Bottom Line

As energy costs begin to compete with the cost of maintaining the underlying hardware, IT decision makers no longer have the luxury of being complacent about power consumption and energy management. Any data center can be improved in terms of energy efficiency, cost efficiency, and environmental responsibility.

Comments
  halesy permalink
    July 7, 2009 2:49 pm

    I was recently at a data centre design seminar that covered a number of the points above, although this post is more detailed. I've summarised the seminar on my blog: http://halesy.wordpress.com/2009/07/06/iet-data-centre-conference/.

    Cooling, though, is a big issue, as is getting enough power to the racks. While blade servers help reduce physical hardware, their power and heat requirements mean that careful planning is required.
