Energy Efficiency
in Computer Data Centers

Steve Doty, PE, CEM
Colorado Springs Utilities

ABSTRACT

Computer Data Centers use a lot of electricity in a small space, roughly ten times that of a regular office building per SF. For a facility with both office and Data Center functions, it is not uncommon for 15% of the building area to use 70% of the electricity. (1)

Server Room

Most of the energy used in computer data centers is from the equipment itself, which is often beyond the control of energy improvement efforts. Still, the cooling systems serving these machines are substantial energy users and can usually be improved on.

Effective and practical energy conservation measures for Data Centers are provided, along with the basis for the savings and estimated savings values. This information will allow the Energy Engineer to formulate an effective approach for proposing improvements to these facilities.

INTRODUCTION

The primary design focus for Data Center cooling systems is reliability. Infrastructure items like power and cooling are servants to this central purpose. Although energy efficiency is a lower priority, some energy conservation strategies can serve both camps. Identifying ways to reduce energy consumption without affecting data center reliability is the goal of this article.

Computer equipment energy use dominates all other end uses in these facilities. Unfortunately, little is available to the Energy Engineer to impact computer equipment electrical use. Therefore, even substantial improvements in cooling efficiency will be overshadowed and diluted by the persistent high energy use by the computers themselves, and this fact poses a barrier to the economic attraction of proposed changes.

Many of the items in this article concern cooling systems. The primary terminal unit device used is a Computer Room Air Conditioner or "CRAC" unit. A key point to the Energy Engineer is that the cooling systems in Data Centers use the same principles as all cooling equipment, but economic opportunities are better since the load is steady and continuous. In many cases, the energy calculations are easier since energy use is largely unaffected by seasonal weather changes.

CHARACTERISTICS OF THE DATA CENTER

Priority #1 is Reliability. This bears repeating since it explains many of the decisions made at Data Center facilities, and needs to be thoroughly considered for any proposed system changes.

Fig 1. Load Profile. Data Courtesy of Agilent Inc.

Load Profile. Energy use is fairly consistent in most Data Centers, so the 24 x 7 x 365 assumption is fairly accurate. The facility is self-heating, so operations are largely unaffected by weather changes. With proper envelope design, Data Center energy use would also be indifferent to geographic location. Fig. 1 illustrates the load profile concept; the chart was prepared from actual sub-metered electric use data.

Inherent Resistance to Change. Some items presented are most appropriately built into a brand-new facility, while others apply equally well to new and existing facilities. As with any design, once a feature is built into the facility, its good or bad qualities are usually there to stay; the best opportunity for upgrades usually occurs near the end of a system's life cycle, when repair or replacement costs are imminent anyway. This is especially true for Data Centers, since their 24 x 7 x 365 nature adds a logistical barrier to system modifications. Like-kind replacement of equipment, with no infrastructure or control changes, is the quickest and lowest-risk approach for facilities to take during system upgrades. Given the often critical nature of the data processed, the quick, low-risk approach is completely understandable, as is the acceptance of high utility costs as simply a cost of doing business reliably. For these reasons, a second or third opinion during the design stage is a worthwhile investment and should be encouraged by the Owner.

Who is Driving the Bus? Energy Efficiency 101 tells us to always begin with the points of use, and only then move to system and equipment modifications. In the case of a Data Center, the computer equipment is the primary driver of energy use, and it dictates the cooling systems and their energy use. Therefore, the number one suggestion for more efficient Data Centers must be to use more efficient computers. So, this article calls on computer equipment manufacturers to consider efficiency gains as marketable improvements, along with reliability, speed, size, and capacity. To illustrate, a 10 percent improvement in computer equipment energy use will have a greater effect than a 50 percent improvement in cooling system efficiency. This example reinforces the dominance of the computer energy use, and the concept that any possible improvements in computer equipment energy use should be considered before anything else.

Example: Comparison of 10% computer equipment efficiency gain vs. 50% cooling efficiency gain.

Baseline
Computer equipment: 750,000 kWh
Cooling equipment: 250,000 kWh
Combined annual electric use: 1,000,000 kWh
With 10% efficiency gain in computer equipment
Computer equipment: 675,000 kWh
Cooling equipment: 168,750 kWh
Combined annual electric use: 843,750 kWh
Overall improvement: 15.6%
With 50% efficiency gain in cooling equipment
Computer equipment: 750,000 kWh
Cooling equipment: 125,000 kWh
Combined annual electric use: 875,000 kWh
Overall improvement: 12.5%
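Taking the scenario figures above as given, the comparison can be reproduced with a short script (a sketch only; the kWh values are the example numbers from the text):

```python
# Tabulates the example above, taking each scenario's component kWh
# figures as given, and computes the overall improvement that the
# Energy Engineer would report to the customer.
scenarios = {
    "Baseline":                 {"computer": 750_000, "cooling": 250_000},
    "10% computer equip. gain": {"computer": 675_000, "cooling": 168_750},
    "50% cooling equip. gain":  {"computer": 750_000, "cooling": 125_000},
}

baseline_total = sum(scenarios["Baseline"].values())  # 1,000,000 kWh
for name, s in scenarios.items():
    total = s["computer"] + s["cooling"]
    improvement = 1 - total / baseline_total
    print(f"{name:26s} {total:>9,} kWh  overall improvement {improvement:5.1%}")
```

The point survives the arithmetic: because cooling energy tracks the computer load, a modest gain at the computers outperforms a large gain at the cooling plant.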

Further, a 50 percent improvement in cooling unit efficiency is not realistic. If a 20% improvement is made to the cooling system energy use through efficiency gains, this would only reduce overall electric use by 3-5%. This fact is important to keep in mind during customer relations, since the customer will be noting improvements to the overall use, not just the cooling systems. Specifically, a 5% improvement in Data Center electric use, with no changes to the computer equipment itself and no impact to system reliability, should be considered a successful project. This is markedly different from many energy conservation projects, where 20-30% improvement is the banner of success.

The energy end-use breakdown between computer room equipment and cooling systems that serve them is predictable and easily quantified if the total energy use is known. Unlike many other energy using systems, the load profile for a computer data center is usually very consistent, and much can be learned from the computer equipment input power measurement which is usually displayed at the UPS serving it. Ignoring lighting and skin loads, the cooling system output is directly proportional to the computer equipment energy input, and the cooling system energy input is directly proportional to the cooling system efficiency.

Depending upon the type and efficiency (kW/ton) of the cooling equipment, the proportion of cooling energy to computer equipment energy varies from 15-26%, with the higher percentage belonging to air-cooled equipment - the least efficient. This is shown graphically below.

Fig. 2. End-use energy breakdown: Computer Room Equipment vs. Cooling Units.

The first thing that should be apparent from the end-use pie diagrams is the energy advantage of water-cooled equipment, which comes from reduced refrigeration pressures and thermodynamic lift. In fact, the first cooling system choice will usually be: Air-Cooled or Water-Cooled? This choice is not driven only by energy use or even first cost, but by the priority #1 design constraint, reliability. Strictly speaking, redundant air-cooled systems have the reliability advantage, because the arrangement is simplest and has fewer single points of failure, such as common hydronic piping or the fresh water source for the cooling towers; less is affected when any one element fails. But reliability considerations are beyond the scope of this article. What we will discuss are options to consider for both Air-Cooled and Water-Cooled systems that have energy implications.
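The predictability of the end-use split can be sketched numerically. Since one ton of cooling removes 12,000 Btuh (about 3.52 kW) of heat, the cooling-to-computer energy ratio is simply the unit efficiency in kW/ton divided by 3.52. The example efficiencies below are illustrative assumptions, not figures from the text:

```python
# One ton of cooling removes 12,000 Btuh = ~3.52 kW of heat. If the
# computer load is essentially the only load, the cooling input kW per
# kW of computer load is the unit efficiency (kW/ton) divided by 3.52,
# which is why the end-use split is predictable once total use is known.
HEAT_KW_PER_TON = 12_000 / 3_413   # ~3.52 kW of heat per ton of cooling

def cooling_fraction_of_computer_load(kw_per_ton):
    """Cooling input energy as a fraction of computer equipment energy."""
    return kw_per_ton / HEAT_KW_PER_TON

# Assumed example efficiencies spanning the 15-26% range in the text:
for label, eff in [("water-cooled", 0.55), ("air-cooled", 0.90)]:
    frac = cooling_fraction_of_computer_load(eff)
    print(f"{label:12s} {eff} kW/ton -> cooling = {frac:.1%} of computer kWh")
```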

ENERGY SAVING OPPORTUNITIES

Chilled Water:

Size Air Coils at 50°F Entering Chilled Water Temperature, and use Elevated Chilled Water Temperature. This is an integral part of a strategy to stop unintended dehumidification. See section "Humidification" for additional information on this topic.

Variable Air Flow systems further address the parasitic loss of the air circulation fan. While standard practice has always been constant-flow, variable-temperature supply air, single-zone VAV systems can also be effective and are available from the manufacturer for chilled water systems. VAV / DX remains a difficult application, and the complication and risk to refrigeration components may not be warranted in a Data Center cooling application. When VAV is employed, air flow is reduced in proportion to the load. Since Data Center loads do not swing much, the savings should be analyzed against the cost to upgrade. Still, cooling equipment over-sizing is common, in which case a lot of air may be moving around the room to no real purpose, with an energy penalty. As with the lighting example, each 3-4 watts of extra fan energy results in a watt of cooling energy to remove the fan heat. Even if the fan speed were a constant 80%, the 24 x 7 x 365 savings could be significant.

Savings: At 0.3 fan motor Hp per ton, each 100 tons of cooling equipment would have around 32 kW of fan heat including the cooling load to remove it (3). Operating at 80% fan speed half the time, instead of 100 percent, and a 95% efficient VFD, the electric load for the fans would follow the fan laws and reduce electric demand an average of 7 kW (4), and reduce annual cooling system electrical use by 8%, and overall Data Center electrical use by 1.5% for systems using chilled water (5).
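A sketch of the fan-law arithmetic behind this estimate. The motor efficiency here is an assumed value chosen to land near the ~32 kW figure above, and the 1.2 kW/ton cooling efficiency is likewise assumed:

```python
# Fan power varies with the cube of speed (fan affinity laws), so 80%
# speed needs ~51% of full power before VFD losses. Figures follow the
# example above: 0.3 fan Hp per ton, 100 tons, 80% speed half the time,
# 95% efficient VFD. Motor efficiency and kW/ton are assumptions.
HP_TO_KW = 0.746
MOTOR_EFF = 0.94            # assumed motor efficiency
COOLING_EFF = 1.2           # kW/ton, assumed unit efficiency

fan_input_kw = 0.3 * 100 * HP_TO_KW / MOTOR_EFF         # ~23.8 kW electrical
fan_heat_kw = fan_input_kw * (1 + COOLING_EFF / 3.517)  # ~32 kW incl. cooling penalty

speed, vfd_eff, time_fraction = 0.80, 0.95, 0.5
# Baseline runs across-the-line (no VFD loss); reduced speed pays the VFD loss.
reduced_kw = fan_heat_kw * speed**3 / vfd_eff
avg_saving_kw = (fan_heat_kw - reduced_kw) * time_fraction
print(f"fan + cooling penalty = {fan_heat_kw:.0f} kW; "
      f"average demand reduction = {avg_saving_kw:.0f} kW")
```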

Air-Cooled Direct Expansion (DX)

Generous Sizing of outdoor equipment, to provide a large heat transfer area, will also serve to lower head pressure. It is common practice to size these units for 125°F ambient temperature, so over-sizing is usually already present.

Keep Condenser Surfaces Clean. Locate units away from trees, grass, or other sources of debris that can partially block the fins. Partial blockage forces higher temperatures and pressures to achieve the same heat transfer, with an accompanying increase in input kW. Conventional dry coolers have the finned area horizontal to the ground, so it is common for debris to be drawn into the units and block the fins without anyone noticing - crouching down and looking up underneath is the only way to be sure.

Adjust Head Pressure Regulation Devices to avoid an unnecessary energy penalty. These include cold weather dampers, flooded receivers, etc. Each of these serves to false-load the refrigeration cycle in cold weather, to stabilize refrigeration system operation. If misadjusted, they can increase energy use in warmer weather. This equipment is definitely needed in sub-zero weather for proper operation, but should be adjusted to be transparent to system operation above 30°F or so.

Maintain Proper Equipment Spacing to avoid air recirculation. This basic concept is often overlooked, especially where a number of large condensers occupy a large area. With walls or adjacent equipment too close, the inlet air path is restricted, and the inlet air stream changes to include some of the warm discharge air. The effect is that the unit behaves as if it were hotter outdoors than it actually is, with higher input kW. Following or exceeding the manufacturer's clearance recommendations will assure this does not occur. A rule of thumb is the finned height of the vertical condenser, or the height of the side air opening, projected horizontally; for two adjacent units this distance would double. At one facility, an array of remote condensers was spaced too close together, and a recirculation temperature rise of 7°F was measured, resulting in a 7-10 percent increase in compressor input power during warm weather.

Figure 3. Air-Cooled equipment spacing too close, causing recirculation and efficiency loss.

Evaporative Pre-Cooling in favorable climates can lower head pressure and hot weather demand substantially. Like cooling towers, these function according to wet-bulb temperature, and can achieve a temperature drop of 80% of the wet-bulb depression. For example, if ambient air is 95°F dry bulb and 60°F wet bulb, the wet-bulb depression is 95-60=35°F, and the modified entering air temperature for the air-cooled unit would be 95-(0.8*35)=67°F. So, in this example, the unit behaves as though it were 67°F outside when it is actually 95°F, reducing head pressure and input kW at a rate of about 1 percent per °F, or a 28% energy reduction. The system has no effect when outdoor temperatures are below about 55°F, so seasonal savings are much less than the maximum. However, these systems are very effective at reducing overall electric demand and enhancing cooling output during summer months.

Savings: Wet-bulb dependent, so savings will vary by locale. These are not 24 x 7 x 365 savings. 25-30% demand reduction and energy reduction in summer months is possible in drier climates.
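The pre-cooling estimate above can be sketched as follows, using the 80% effectiveness and the 1% per °F rule of thumb from the text:

```python
# Evaporative pre-cooling: entering air is cooled by ~80% of the
# wet-bulb depression, and compressor kW drops roughly 1% per °F of
# entering-air reduction (rule of thumb from the text).
def precooled_temp(dry_bulb_f, wet_bulb_f, effectiveness=0.80):
    """Air temperature entering the condenser after the evaporative pad."""
    return dry_bulb_f - effectiveness * (dry_bulb_f - wet_bulb_f)

db, wb = 95.0, 60.0                      # the example's design day
t_enter = precooled_temp(db, wb)         # 67°F in the example
kw_reduction = (db - t_enter) * 0.01     # ~1% kW per °F
print(f"entering air {t_enter:.0f}°F, compressor kW down ~{kw_reduction:.0%}")
```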

Water-Cooled DX.

Select Equipment That Can Operate at Reduced Condenser Water Temperatures. Specifically, this means equipment capable of operating at reduced refrigerant pressures, without simply throttling the water flow. Of interest is the average condenser water temperature, measured by averaging the inlet and outlet temperatures. The ability to "accept" cold water and adapt by reducing the flow is an operational feature, but not an energy-saving feature. Specifying equipment "that can operate successfully at 3 gpm per ton flow rates and 60°F entering water" will steer the manufacturer toward improved refrigerant management practices and will yield significant reductions in compressor kW whenever colder condensing water is available.

Savings: Wet-bulb dependent, so savings will vary by locale. These are not 24 x 7 x 365 savings. Each degree F that the average of the inlet and outlet water temperatures is lowered will reduce compressor kW by 1-1.5%.

Use a Fluid Cooler instead of a Dry Cooler, to produce lower condenser water temperatures in summer. Compared to a dry cooler, condenser water could be provided at 10-30°F lower temperatures, with a corresponding kW reduction at the compressors. Specify the fluid cooler to achieve "7°F approach temperatures (no more than 7°F above the design wet-bulb temperature) with no more than 0.05 kW/ton fan power budget." These two constraints will result in larger heat exchanger bundles and a large, free-breathing 'box', reducing the parasitic loss of the fluid cooler (or cooling tower) fan motor.

Savings: These are not 24 x 7 x 365 savings. Each degree F that the average of the inlet and outlet water temperatures is lowered will reduce compressor kW by 1-1.5%. Annual savings depend upon comparing the dry-bulb temperature profile (driver of the baseline dry cooler) with the wet-bulb temperature profile (driver of the fluid cooler) for your location. Savings are most noticeable in summer and in drier climates. This feature can provide consistent demand reduction in summer, during utility peak periods.

Adjust Water Regulation Valves to keep system head pressure lower. These devices modulate water flow to control head pressure, but can increase compressor input kW if too 'stingy' with the water. Adjusting for 3 gpm per ton instead of 2 gpm per ton will lower the average condensing pressure and input power requirements. Note that this is a system change and must be coordinated with the circulating pump and pipe limitations in mind.

Savings: Noting the average of inlet and outlet temperatures, each degree F of average water temperature lowered will reduce compressor cooling kW by 1-1.5%. Assuming compressor power is 2/3 of the total unit power, reducing the condensing temperature 5°F will reduce cooling electric energy use by 4.2% and total Data Center energy use by approximately 0.8%.
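A minimal sketch of this arithmetic. The 19% cooling share of total Data Center use is an assumed figure, chosen to be consistent with the end-use split of Fig. 2:

```python
# Water regulation valve adjustment: each °F of average condensing
# water reduction cuts compressor kW ~1-1.5% (midpoint used here), and
# the compressor is ~2/3 of total unit power. The cooling share of
# total Data Center use is an assumption consistent with Fig. 2.
delta_f = 5.0                    # °F of condensing temperature reduction
pct_per_degree = 0.0125          # midpoint of the 1-1.5% per °F rule
compressor_share = 2 / 3         # compressor fraction of unit power
cooling_share_of_total = 0.19    # assumed end-use split

cooling_saving = delta_f * pct_per_degree * compressor_share
overall_saving = cooling_saving * cooling_share_of_total
print(f"cooling energy: -{cooling_saving:.1%}, overall: -{overall_saving:.1%}")
```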

Use Auxiliary Cooling Coils with a Fluid Cooler. These pre-cooling coils, located in the return air of each unit, can take advantage of colder compressor cooling water. In drier climates, when conditions permit, colder condenser water can be made with the cooling tower or fluid cooler for energy savings. With a properly sized auxiliary coil in the return air stream, condenser water colder than room temperature can pre-cool the return air, thereby reducing compressor kW. In this system, part of the water is routed to the pre-cooling coil and part to the water-cooled condenser while the compressor continues to run. If the condenser water is lowered sufficiently (to 45-50°F), the pre-cooling coil can carry the entire space cooling load with compressors off, for impressive savings. Sizing the auxiliary coil for full capacity at 50°F entering water temperature is suggested. This system is normally used with a plate-and-frame heat exchanger or fluid cooler, to operate the computer cooling units as a sealed system; this avoids maintenance for fouled heat exchangers.

Use Auxiliary Cooling Coils and Link to the Central Chilled Water System. An extension of the auxiliary cooling coil system is to tie it to the building central cooling system, usually through a heat exchanger, and provide computer room cooling via the central chilled water system year-round. This unique application has several distinct advantages. By linking the computer room unit cooling water to a more efficient central chilled water system, the benefits include:

  • Reduced run time on the DX compressors for extended life and reduced repair costs. In this mode, the DX compressors become the backup cooling source, with the central chillers as the primary. A typical design life of this equipment, due to continuous operation, is 10 years. This measure could easily extend it to 20 years with corresponding tangible life cycle cost benefit. Savings: For each 100 tons of capacity, the cost benefit could be $10,000 per year by doubling the equipment life. (2)
  • Energy savings according to the differential efficiency between the central chiller system and the computer cooling equipment. Typical water-cooled DX units run at 1.0 kW/ton in summer and 0.8 kW/ton in winter, while many central chillers run at 0.7 kW/ton in summer and 0.5 kW/ton in winter, including the auxiliary pumps and cooling tower fans. This kW/ton differential, applied over the array of computer cooling units, can provide substantial savings. In this example, the differential is a consistent 0.3 kW/ton. Savings: For each 100 tons of computer room cooling load linked to a central chiller plant with 0.3 kW/ton improved efficiency, savings of 30% of annual cooling kWh are possible.
  • Demand savings according to the differential efficiency between the central chiller system and the computer cooling equipment. Savings: For each 100 tons of computer room cooling load linked to a central chiller plant with 0.3 kW/ton improved efficiency, demand savings of 30kW for each 100 tons of capacity are possible.
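The differential-efficiency savings in the list above can be sketched for 100 tons of load, using the typical summer kW/ton values quoted:

```python
# Linking computer room cooling to a central chiller plant: savings
# scale with the kW/ton differential. The 1.0 and 0.7 kW/ton figures
# are the typical summer values quoted above.
tons = 100
dx_kw_per_ton = 1.0        # typical water-cooled DX unit, summer
chiller_kw_per_ton = 0.7   # central plant incl. pumps and tower fans
HOURS = 8760               # continuous Data Center load

demand_saving_kw = tons * (dx_kw_per_ton - chiller_kw_per_ton)
energy_saving_kwh = demand_saving_kw * HOURS
pct_of_cooling = energy_saving_kwh / (tons * dx_kw_per_ton * HOURS)
print(f"{demand_saving_kw:.0f} kW demand, {energy_saving_kwh:,.0f} kWh/yr "
      f"({pct_of_cooling:.0%} of cooling kWh)")
```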

Common Items for all Systems.

Raise the Room Temperature. Provide the necessary cooling, but no colder than that. If a bulk room temperature of 72°F is within the computer equipment limits, it will provide economy of operation over lower temperatures, such as the traditional 70°F. Note that the 70°F setting is partly based on the desire to cool equipment, but also partly on the "what-if" scenario of central cooling failure. The extra 2°F may be seen as a security blanket, buying the system operators time to gracefully shut down the equipment before overheating.

Savings: Raising the room temperature from 70°F to 72°F is a quick way to reduce cooling electric use by 2-3 percent, which equates to about 0.5% of overall Data Center electric use.

Reduce Overhead Lighting Power. Lighting directly adds to Data Center kW and cooling load. So, using as little light as you can, and using the highest-efficiency lighting to do it, is the energy-saving answer here. One suggestion is to pare down the overhead "general" lighting and rely more on task lighting for inspecting equipment. Occupancy sensors can be used to advantage in rarely occupied areas.

Savings: Each 3-4 watts of extra light becomes a watt of extra cooling energy, in addition to the lighting energy itself. For a cooling density of 50 SF per ton and 1.2 kW/ton cooling efficiency, the cooling load is 240 Btuh per SF, and the combined computer / cooling load (ref Appendix equation xx) is approximately 1/50 * 8760 * (3.41 + 1.2) = 808 kWh per year per SF. Lowering overhead lighting power density from 2 watts per SF to 1 watt per SF will reduce annual electric load by about 1.3 watts per SF including the cooling effect, or 11 kWh per year per SF. This is a 1.3% reduction in overall Data Center electric use.
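A sketch of the lighting arithmetic, using the exact conversion of 12,000 Btuh per ton (about 3.52 kW of heat per ton), which lands slightly above the rounded figures above:

```python
# Every watt of light must also be removed by the cooling plant, adding
# ~1.2/3.52 = 0.34 W of cooling input per lighting watt at the assumed
# 1.2 kW/ton cooling efficiency.
HEAT_KW_PER_TON = 12_000 / 3_413    # ~3.52 kW of heat per ton
cooling_eff = 1.2                   # kW/ton
HOURS = 8760

lpd_reduction = 2.0 - 1.0           # W/SF, from 2 W/SF down to 1 W/SF
total_w_per_sf = lpd_reduction * (1 + cooling_eff / HEAT_KW_PER_TON)
kwh_per_sf_yr = total_w_per_sf * HOURS / 1000
print(f"{total_w_per_sf:.2f} W/SF avoided -> {kwh_per_sf_yr:.1f} kWh/SF-yr")
```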

Don't Provide Heaters in the Cooling Units. This may sound silly, but most standard computer room cooling units come with heaters automatically unless you request otherwise. Heat should not be needed except in extreme circumstances, since Data Centers are a cooling-only load. As a general rule, specifying Computer Room Air Conditioners (CRAC units) without heaters is appropriate. This not only avoids the initial cost of the heaters, but reduces connecting wiring size and prevents inadvertent use of the electric heaters from uncoordinated controls. The heaters are also used in a de-humidifying mode, where the heating and cooling apparatus run concurrently. While computer room dehumidification may be required in some climates, the inherent dehumidification of the cooling coils should suffice. If there is an unusually high source of humidity, such as a poor envelope moisture barrier, the recommendation is to treat that separately. In areas with multiple cooling units, each with individual controls, the opportunity for unintended simultaneous heat-cool exists and has been observed in practice. Operational savings from this are difficult to predict; however, eliminating the heater eliminates even the potential.

Savings: First cost of system installation is reduced, since the ampacity of the equipment will be much less without the heavy burden of the electric heaters. With a large number of cooling units, each designed to run with heaters on in cooling mode for dehumidification, the reduced equipment ampacity will reflect back through the electrical distribution system in the form of noticeably reduced wire sizes, smaller branch circuit panels, smaller main distribution panels, transformers, electrical service, and generators. For example, 10 units, each with a 25 kW electric heater, would require 200 kW of additional generator capacity at 80% diversity, which would increase generator cost by approximately $400 per kW, or $80,000.
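The generator arithmetic is simple enough to verify directly; the $400 per kW is the assumed unit cost from the example:

```python
# Avoided standby generator capacity and cost when CRAC electric
# heaters are omitted, per the example above.
units = 10
heater_kw = 25
diversity = 0.80
cost_per_kw = 400            # $/kW of generator capacity (assumed)

avoided_kw = units * heater_kw * diversity
avoided_cost = avoided_kw * cost_per_kw
print(f"avoided generator capacity: {avoided_kw:.0f} kW, ${avoided_cost:,.0f}")
```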

Humidification: A necessary component of data center design is a minimum humidity level, since excess dryness encourages electrostatic discharge (ESD) and computer hardware problems. Therefore, some humidification will normally be needed in Data Centers to reduce static and sparks. Depending on the equipment manufacturer, 30-35% rH may be enough to prevent spark issues; humidifying to 50% rH should not be necessary. Since humidification consumes energy, raising humidity levels higher than necessary should be avoided.

The need for humidification, ironically, is due mostly to the natural dehumidification effect of the cooling coils. So the first thing to do is reduce the dehumidification. This is done by raising the cooling coil apparatus dew point, by:

  • Lowering the room dew point, by increasing temperature and lowering relative humidity, after verifying that the environmental conditions meet the requirements of the equipment it serves.
  • Increasing coil size so the average coil surface temperature is elevated. Although not currently a standard offering, it may be possible to request a DX computer room unit with a mismatched compressor / coil pair, e.g. a 20 ton compressor and a 25 ton coil. Increasing coil size is done at the design stage, and may increase unit size and cost.
  • Increasing chilled water temperature, for chilled water systems. This is also an integrated design choice; the needed heat transfer must be verified through coil selection to be available at the higher entering water temperature, e.g. 50°F entering Chilled Water (CHW). This choice will steer the design toward a segregated system, in which the computer room chilled water operates at a higher temperature than the building systems, via either an entirely separate system or a heat exchanger and blending valve arrangement.
  • Adjusting controls for wider tolerance of "cut-in" and "cut-out" settings, allowing indoor humidity to swing by 10% rH or more. The savings come from reduced control overlap between adjacent CRAC units, i.e. one machine's controls calling for humidification while the neighboring machine's controls call for de-humidification.
  • Coordinating the unit controls to act more like "one big CRAC unit" than multiple independent CRAC units. The savings are similar to those from widening the control settings: reduced control overlap between adjacent CRAC units. The standard arrangement of multiple CRAC units, each with its own "stand-alone" controls and tight-tolerance settings, is a built-in opportunity for simultaneous heating / cooling and humidification / de-humidification. Overlapping controls are readily observed in the field; the system functions, but with an energy penalty. If such overlap can be avoided, energy savings will result. Note: depending upon computer hardware heat density, conditions will naturally differ at different CRAC units, so independent temperature control at each CRAC unit is appropriate.
  • Control re-calibration every two years is suggested.
  • Increasing air flow to raise the average coil surface temperature and air temperature. Note that this measure increases fan energy sharply, and the added fan energy may exceed the avoided humidification savings.
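The strategies above all aim to keep the coil surface at or above the room dew point. A quick check can be sketched with the Magnus approximation (an assumption here, not from the text; the coil surface temperatures below are also assumed for illustration):

```python
import math

# Unintended dehumidification occurs when the coil surface runs below
# the room dew point. Dew point via the Magnus approximation (an
# assumption; good to a fraction of a °F in this range).
def dew_point_f(dry_bulb_f, rh_pct):
    t_c = (dry_bulb_f - 32) / 1.8
    gamma = math.log(rh_pct / 100) + 17.62 * t_c / (243.12 + t_c)
    dp_c = 243.12 * gamma / (17.62 - gamma)
    return dp_c * 1.8 + 32

room_dp = dew_point_f(72, 40)       # e.g. a 72°F, 40% rH room
for coil_f in (45, 52):             # assumed average coil surface temps
    wet = coil_f < room_dp
    print(f"coil at {coil_f}°F vs room dew point {room_dp:.1f}°F: "
          f"{'dehumidifies' if wet else 'sensible-only'}")
```

Raising the coil surface from 45°F to the low 50s moves it above the dew point of this room condition, which is the mechanism behind the elevated chilled water and oversized-coil measures.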

Additional humidification need comes from envelope losses, especially in drier climates. In moist climates, this may be nil, or even beneficial. Moisture losses through the envelope can be mitigated by reducing air pressure differences between adjacent spaces, and by vestibules, gasketed doors, and an effective air barrier / vapor barrier on the floor, walls, plenums, and ceiling.

With humidification load reduced as much as practical, turn to the humidification equipment.

Standard humidification equipment is usually electric resistance pan humidifiers or infrared pan humidifiers. These are inexpensive and effective, but increase electrical use and connected load ampacity (remember the generator at $400 per kW). It is unlikely that every cooling unit will need a humidifier, so a more practical solution may be a free-standing humidifier, one per "N" cooling units. Use judgment here, and distribute the humidifiers in the room to promote even distribution of the humidification. Also, make-up air units with humidifiers, used to temper the humidity level of the make-up (new) air, have been used successfully to pressurize Data Center rooms and minimize air infiltration.

Rather than electric resistance, infrared, or steam humidification, opt for evaporative (adiabatic) technology: ultrasonic, pads, mist, etc. Pads and misters are the simplest, but tend to cause moisture carryover and associated 'wet' problems. Ultrasonic humidifiers are much more resistant to moisture carryover and may be preferable from a process standpoint. All adiabatic humidifiers consume some electricity (circulating pump, compressed air, or the ultrasonic piezoelectric mechanism), but all use a tenth of the energy or less to make the vapor, compared to making steam. Additionally, all adiabatic evaporative systems have a friendly byproduct: evaporative cooling. In the case of a data center, cooling and humidifying is exactly what you need!

Savings: This is dependent on how humid the space is and the design apparatus dew point. At one customer site, which was kept very humid (72°F and 50% rH), equipment coil selections from the manufacturer confirmed that 15% of the equipment capacity was being spent on dehumidification, which equates to 2.5% of the total Data Center cooling electric load. A review of manufacturer's data revealed no pattern in the tendency to dehumidify, since one cabinet (and coil) size was used for several tonnage classes and air flows. Another facility, using more typical space conditions, was analyzed in depth, with the following estimated savings.

Table 1: Example Energy Savings Related to Data Center Humidification

System Type: 45°F chilled water
Space Condition: 72°F, 40% rH

Measures and Typical Savings:
  • Raise chilled water temperature to 50°F: 2% of cooling electric energy.
  • Use adiabatic evaporative humidifiers instead of infrared: 8% of cooling electric energy saved by not using infrared heat for humidification, plus 1% of cooling cost saved from the adiabatic cooling effect.
  • Combined: 11% of cooling electric energy, or 2% of Data Center electric use.

Figure 4. Typical free-standing evaporative computer room humidifier.

Oversize Filters or angled filters can be used to reduce air horsepower. Typically a third of the energy used in the computer cooling units is from the fans, which normally run continuously even as the cooling equipment starts and stops. For systems fitted with 2-inch filter racks, converting to 4-inch deep filters is usually easy. Whatever the means, reducing air flow resistance reduces the fan horsepower required for the same air flow. In existing systems, however, the fan will simply react to the reduced resistance by moving more air, yielding little or no energy savings unless the system is rebalanced to slow the fan and shed the kW.

Premium Efficiency Fan Motors will reduce the parasitic loss of the circulating fan, which is necessary but adds heat to the air. The higher efficiency motors simply add less heat to the air. Motor payback skeptics, like me, are advised to take another look at fan motors at a Data Center, since there is truly a 24 x 7 x 365 environment for the fans.

Raised Floor System Insulation and Air / Vapor Barrier. Raised floor systems are common, with the cold supply air delivered using the floor cavity as a pressurized duct. This means the floor will be a constant 55°F year-round, so it should be insulated like any other duct - by insulating the floor pan above the ceiling of the floor below. Insulation is not required for slab-on-grade construction. Also, the supply 'plenum' will naturally be at a higher pressure than the room, so any condensate drains located in this space should have trap primers and be suitably deep, to avoid blowing out the traps with the expensive cold air. Pipe penetrations should be sealed as they would be in a duct. A vapor barrier in the plenum should be provided as well.

Building Envelope Improvements will reduce loads from walls, roof, and glass. These loads are usually quite small compared to the heat load of the computer equipment, but a poor envelope will drive summer demand up, which can raise facility demand charges if a ratchet tariff is part of the utility rate structure. For this reason, insulation in the roof and walls should meet energy code minimums, and glazing and skylights should be minimized. The best thermally functioning Data Center would be an insulated box with a vapor barrier and no windows at all.

Air Economizers are attractive at first glance, but warrant a closer look. Since computer cooling loads are weather-independent, the cooler evening and winter hours can potentially be harvested for free cooling in most areas of the country. But the free sensible cooling brings with it corresponding changes in relative humidity levels indoors, with the extent of the swings depending upon how much outdoor air is used and how dry it is compared to the absolute humidity level of the indoor environment. In standard HVAC designs, these swings are usually ignored. In a data center, the changing humidity effect would be especially problematic in cold dry weather. Cold winter air, once warmed, would tend to over-dry the data center, aggravating static discharge issues. This would be less of an issue in mild, humid climates such as Florida. In all climates, the modulating action of the outside air / return air dampers with varying outdoor absolute humidity levels would cause an unpredictable (open loop) variable in the indoor air relative humidity. From a control standpoint, compensation for the humidity disturbance would best be accomplished by tempering the humidity level of the bulk outside air stream before the air handler mixing box, although freeze issues would make this problematic. Attempts to temper the mixed air or supply air streams would require an active humidification system capable of reacting more quickly than the changes caused by the damper movement. Compensation at the room level ‘after the fact’ would almost certainly introduce objectionable swings in room humidity. Because of the outdoor air’s effect on room humidity levels, such a system would create a dependence upon the compensating humidity equipment and controls, in turn creating new reliability issues and failure modes. Even with appropriate controls, the cost of humidification in dry climates would negate much of the savings of the free cooling benefit.
It is for all these reasons that air-economizers are seldom used in practice for data centers. Since simplification = reliability in most applications, standard design practice for efficiency enhancements will look to water-economizers instead of air-economizers.
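The winter drying effect described above can be sketched numerically. The conditions chosen here (-5°C / 80% RH outdoor air, 22°C / 45% RH return air, 50% outside air fraction) are illustrative assumptions, and the Tetens approximation stands in for full psychrometric property routines:

```python
import math

# Sketch of the economizer drying effect: mix cold winter outdoor air with
# return air, then see what room relative humidity results once the mixed
# air is warmed back to room temperature. Conditions are assumed, not
# taken from the article.

def p_ws(t_c):
    """Saturation vapor pressure (Pa), Tetens approximation over water."""
    return 610.78 * math.exp(17.27 * t_c / (t_c + 237.3))

def hum_ratio(t_c, rh, p_atm=101325.0):
    """Humidity ratio (kg water / kg dry air) from dry-bulb temp and RH."""
    pw = rh * p_ws(t_c)
    return 0.622 * pw / (p_atm - pw)

w_return  = hum_ratio(22.0, 0.45)   # room/return air: 22 C, 45% RH
w_outdoor = hum_ratio(-5.0, 0.80)   # cold winter air: -5 C, 80% RH
oa_frac   = 0.5                     # 50% outside air on economizer

# Mixing conserves humidity ratio; sensible reheat does not change it.
w_mix = oa_frac * w_outdoor + (1 - oa_frac) * w_return
pw_mix = w_mix * 101325.0 / (0.622 + w_mix)
rh_at_room_temp = pw_mix / p_ws(22.0)
print(f"Room RH falls from 45% toward about {rh_at_room_temp:.0%}")
```

Even saturated winter air carries little moisture in absolute terms, so once warmed to room temperature its relative humidity collapses, which is why economizer air drives the space toward static-discharge territory without compensating humidification.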

Quality, Skilled Maintenance contributes much to life cycle energy efficiency by slowing the natural decline of systems after they are first started. There can be a big difference between “running” and running well. Training and incentives for quality operators are important.

  • Insist on designs that provide ample clearance for maintenance, thereby encouraging maintenance.
  • Understand the fundamental concepts influencing energy use and utility costs, including demand costs, thermodynamic benefits of reducing head pressure and raising suction pressure, the energy penalties associated with simultaneous heat/cool or humidify/de-humidify operations. Understand all equipment operation sequences and features, and their basis of operation.
  • Encourage the maintenance staff, in turn, to provide education to the Data Center users who are in the space continuously and may be adjusting cooling system settings inappropriately.
  • Encourage the maintenance staff to look for and report opportunities for improvement.
  • Establish “new” condition baseline performance values and compare day-to-day readings to baseline values, for early detection of performance issues.
  • Look for and prevent condenser air recirculation.
  • Verify annually the operation of head pressure control devices and assure they work in winter and do not work in warm weather. They should be transparent to system operation above freezing temperatures.
  • Monitor heat exchanger approach temperatures to detect fouling.
  • Clean cooling coils annually for sustained as-new heat transfer performance.
  • Calibrate control instruments and review set points every two years to prevent wasteful control overlap – which can act like driving with the brakes on.
  • Use high quality cooling coil air filters. Minimum filter efficiency of MERV-7 is suggested. Verify that there are no air-bypass pathways, so all the air is routed through the filters.
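The baseline-versus-daily-readings idea in the list above can be sketched as a simple monitoring check. The function name, baseline value, and alert threshold are hypothetical placeholders, not values from the article:

```python
# Sketch: compare day-to-day heat exchanger approach temperature against
# the "new" condition baseline, and flag drift that suggests fouling.
# Baseline and tolerance values are illustrative assumptions.

BASELINE_APPROACH_F = 10.0   # approach at commissioning, deg F
ALERT_DRIFT_F = 3.0          # investigate if approach grows by this much

def check_approach(leaving_fluid_f, entering_sink_f):
    """Return (approach, status) for one logged reading."""
    approach = leaving_fluid_f - entering_sink_f
    drift = approach - BASELINE_APPROACH_F
    status = "OK" if drift <= ALERT_DRIFT_F else "CHECK FOR FOULING"
    return approach, status

for leaving, sink in [(105.0, 95.0), (109.5, 95.0), (110.5, 95.0)]:
    approach, status = check_approach(leaving, sink)
    print(f"approach {approach:4.1f} F -> {status}")
```

A rising approach at constant load means the surfaces are transferring heat less effectively, so head pressure and compressor kW creep up; trending against a recorded baseline catches this long before a failure.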

CONCLUSIONS

Recognizing that limited opportunities exist to reduce the computer equipment energy use itself, this paper addresses the supporting elements of the Data Center facility, namely the cooling and lighting systems. Efficient lighting, and reduced overhead lighting with supplemental task lighting, directly reduce heat load and are encouraged. Many of the cooling suggestions involve basic heat transfer. Among the less well understood, but common, fallacies in Data Center facilities are control overlap and unintended dehumidification as the primary driver for the need for humidification at all. Understanding the fundamental reasons for the savings will lead the Energy Engineer and Facility Operators alike to viable energy and demand-saving opportunities, without compromising reliability.

Table 2: Summary of Computer Data Center Energy Savings Opportunities.

Chilled Water

Measure | New or Retrofit | Basis of Savings
Size Coils for 50°F CHW, and Use Elevated CHW Temp. | NEW or RETROFIT | Prevent simultaneous dehumidification / humidification.
Variable Speed Fans | NEW | Reduce parasitic fan heat losses.


Air Cooled DX

Measure | New or Retrofit | Basis of Savings
Generous Sizing of Air-Cooled Condensers | NEW | Improve heat transfer and reduce approach temperature, for an improved refrigeration cycle.
Keep Condenser Surfaces Clean | NEW or RETROFIT | Reduce head pressure for improved refrigeration cycle.
Adjust Head Pressure Regulation Devices | NEW or RETROFIT | Prevent unintended false loading of the refrigeration equipment in warm weather.
Maintain Outdoor Equipment Spacing | NEW | Prevent air recirculation; keep head pressure low for improved refrigeration cycle.
Evaporative Pre-Cooling | NEW or RETROFIT | Reduce head pressure for improved refrigeration cycle.


Water Cooled DX

Measure | New or Retrofit | Basis of Savings
Select Equipment to Operate at Reduced Condenser Water Temp. | NEW | Reduce head pressure for improved refrigeration cycle.
Use a Fluid Cooler Instead of a Dry Cooler | NEW or RETROFIT | Reduce head pressure for improved refrigeration cycle.
Adjust Water Regulation Valves | NEW or RETROFIT | Reduce head pressure for improved refrigeration cycle.
Use Auxiliary Cooling Coils with a Fluid Cooler | NEW or RETROFIT | Evaporative cooling reduces load on the refrigeration system, by pre-cooling or allowing the compressors to stop.
Use Auxiliary Cooling Coils and Link to the Central Chilled Water System | NEW or RETROFIT | Higher efficiency (kW/ton) at the central cooling system compared to computer cooling equipment saves energy and demand. Reduced run time of computer cooling compressors extends equipment life.


Common Items

Measure | New or Retrofit | Basis of Savings
Raise the Room Temperature | NEW or RETROFIT | Reduce thermodynamic lift by raising the refrigeration cycle low-side pressure.
Reduce Overhead Lighting Power | NEW or RETROFIT | Reduce cooling heat load from lighting.
Don’t Provide Heaters in the Cooling Units | NEW | Savings in electrical infrastructure, including generator.
Lower Room Humidity Setting to 30% | NEW or RETROFIT | Prevent simultaneous dehumidification / humidification.
Use Evaporative or Ultrasonic Humidification Instead of Infrared or Resistance Heat | NEW or RETROFIT | More efficient technology for humidifying; part of cooling load displaced by adiabatic cooling.
Oversize Filters | NEW or RETROFIT | Reduce air flow resistance, in turn reducing fan motor kW and parasitic fan heat loss if fan speed is adjusted down.
Premium Efficiency Fan Motors | NEW or RETROFIT | Reduced fan motor kW and parasitic fan heat loss.
Raised Floor System Insulation and Air / Vapor Barrier | NEW | Reduced thermal loss to the floor below; reduced air and moisture losses by proper sealing of the plenum floor and walls, as well as plumbing and pipe penetrations.
Building Envelope Improvements | NEW | Reduced solar and thermal influence from the building envelope.
Quality, Skilled Maintenance Personnel | NEW or RETROFIT | Reduce the natural tendency of systems to decline over time; educate end users to participate in energy-saving practices.

REFERENCES

1. “Thermal Guidelines for Data Processing Environments”, 2004, ASHRAE.

2. “Utilizing Economizers Effectively In The Data Center” White Paper, 2003, Liebert Corp.

FOOTNOTES




1. Based on utility history and square footage figures of two facilities in the Colorado Springs area, with both standard office and raised-floor computer room data center occupancies.



2. Based on assumed equipment replacement cost of $1,000 per ton, and extending replacement life from 10 years to 20 years.



3. Assume 85% motor efficiency and 0.3 hp/ton * 100 tons = 30 hp. 30 hp * 0.746 * 1/0.85 = 26 kW. The 26 kW creates a cooling load of 26*(3413/12000) = 26*0.28 = 7.3 tons. At 0.8 kW/ton, this is an additional 5.8 kW input load. Total fan baseline kW input is 26 + 5.8 = 31.8 kW.
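Footnote 3's arithmetic can be reproduced step by step. The rounding below deliberately mirrors the footnote's intermediate rounding (including its 3413/12000 ≈ 0.28 shortcut):

```python
# Replication of footnote 3's baseline fan arithmetic: 100-ton example,
# 0.3 hp/ton of fan power, 85% motor efficiency, 0.8 kW/ton cooling.
# Intermediate values are rounded the same way the footnote rounds them.

fan_hp = 0.3 * 100                               # 30 hp of fan power
fan_kw = round(fan_hp * 0.746 / 0.85)            # 26 kW electrical input
fan_heat_tons = round(fan_kw * 0.28, 1)          # 7.3 tons (3413/12000 ~= 0.28)
extra_cooling_kw = round(fan_heat_tons * 0.8, 1) # 5.8 kW to remove that heat
total_kw = fan_kw + extra_cooling_kw             # 31.8 kW total baseline
print(f"{fan_kw} + {extra_cooling_kw} = {total_kw} kW")
```

Carrying full precision instead of rounding at each step gives closer to 32.3 kW; the footnote's 31.8 kW reflects its intermediate rounding, and either figure supports the same conclusions.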



4. Base case is 31.8 kW for 8760 hours, or 278,568 kWh. With a Variable Frequency Drive (VFD) efficiency of 95% added to the fan, the full load input would increase from 31.8 to 33.5 kW. If operating half the year at full speed, the energy use for that period would increase by 1.7 kW (33.5 - 31.8) and 7,446 kWh. The other half of the year, operating at 80% speed, the energy would reduce as the cube of the speed change, 0.8³ = 0.512, resulting in 33.5 * 0.512 = 17.2 kW input and 75,336 kWh for that period, an improvement over the base case of (278,568 / 2) - 75,336 = 63,948 kWh. Net savings is 63,948 - 7,446 = 56,502 kWh. Demand reduction is 31.8 - 17.2 = 14.6 kW best case, 7.3 kW average (half).
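The VFD savings chain in footnote 4 is easy to verify by machine, following the footnote's own rounding of the intermediate kW figures:

```python
# Replication of footnote 4's VFD arithmetic: 31.8 kW base fan input,
# 95% drive efficiency, half the year at full speed, half at 80% speed.

base_kw = 31.8
vfd_kw = round(base_kw / 0.95, 1)              # 33.5 kW with drive losses
base_kwh = base_kw * 8760                      # 278,568 kWh/yr without VFD

full_speed_penalty = (vfd_kw - base_kw) * 4380       # +7,446 kWh half-year
low_speed_kw = round(vfd_kw * 0.8 ** 3, 1)           # 17.2 kW (cube law)
low_speed_kwh = low_speed_kw * 4380                  # 75,336 kWh half-year
low_speed_savings = base_kwh / 2 - low_speed_kwh     # 63,948 kWh
net_savings = low_speed_savings - full_speed_penalty # 56,502 kWh
print(f"net savings: {net_savings:,.0f} kWh/yr")
```

The key structural point survives any rounding choice: the drive's efficiency penalty applies all year, while the cube-law savings accrue only during reduced-speed hours, so the duty cycle assumption drives the result.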



5. 100 ton load at 0.8 kW/ton requires 80 kW of cooling input. For the year, this is 80*8760 = 700,800 kWh. The VFD fan savings represents 56,502 / 700,800 = 8% of cooling electric energy. With a chilled water system at 0.8 kW/ton with 81/19% split from Figure 1, the overall Data Center electric savings is 8 * 0.19 = 1.5%.



APPENDIX

Derivation: Proportions of Cooling Energy and Computer Equipment Electric Energy Use if Total kWh is Known, and Predicting Annual Cooling Energy Use from Computer Equipment Electrical Use and Cooling System Efficiency.

The complete derivation is available from the author on request. Basic conclusions are shown below. The derivation equates “Q” of the computer heat to “Q” of the cooling system load, and ultimately provides proportions of equipment / cooling loads for various cooling system efficiencies. (Note: this calculation ignores lights and building envelope loads)
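While the full derivation is not reproduced here, its headline relationship can be sketched. Under the stated assumption that computer equipment heat is the only cooling load, each equipment kW creates 3413/12000 tons of cooling load, served at "e" kW/ton:

```python
# Sketch of the appendix relationship: the cooling share of total
# (equipment + cooling) electric energy as a function of cooling
# system efficiency, ignoring lights and envelope loads as the
# derivation note states.

def cooling_fraction(kw_per_ton):
    """Fraction of total kWh consumed by cooling, for a given kW/ton."""
    cooling_per_equip_kw = kw_per_ton * 3413 / 12000
    return cooling_per_equip_kw / (1 + cooling_per_equip_kw)

for e in (0.6, 0.8, 1.0, 1.2):
    f = cooling_fraction(e)
    print(f"{e:.1f} kW/ton -> equipment/cooling split {1 - f:.0%}/{f:.0%}")
```

At 0.8 kW/ton this gives roughly an 81/19 equipment/cooling split, consistent with the split cited from Figure 1 in footnote 5.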


ABOUT THE AUTHOR

Steve Doty is an Energy Engineer for Colorado Springs Utilities, providing technical support and facility audits for the larger commercial and industrial utility customers. Steve is a registered Professional Engineer in several states and has a 20+ year background in mechanical design, controls, and commissioning.
