A Balance of Power
Server virtualization and densely packed server racks have the potential to make IT operations more efficient than ever. But along the way, IT managers must learn an important lesson: It takes more than virtual machines and blade hardware to achieve success when consolidating data centers.
Equally important is re-engineering fundamental strategies for supplying power and cooling to these new resources. The reason: while densely packed server racks deliver ever-increasing computing power, they also generate more heat.
Taking a fresh look at power and cooling isn’t important only for virtualization projects; it matters for power management in general. Power management has become a continuity-of-operations imperative as electricity costs rise and supplies, especially during peak hours, grow unpredictable in some areas.
Fortunately, along with new challenges come new options for controlling energy consumption and costs. Server and power-management hardware now ships with onboard monitors and other management intelligence that supports smarter energy use.
These tools help businesses identify areas where they can reduce power consumption. This complements a range of proven design practices that can save money and reduce the risks of electricity brownouts and blackouts.
Take a Closer Look
The first step is to understand the new realities of power and cooling. One measure of change is the amount of power that data center managers must allocate to their servers.
In the past, when virtual servers were rare and hardware consolidation wasn’t an imperative, data center designers planned to deliver 3 to 6 kilowatts of electricity to individual server racks.
In today’s virtualized and densely packed world, a rack stuffed with blade servers can require between 15 and 20kW of power. Meeting those demands requires more than simply running additional power lines to each rack.
“You have to cool what used to be the equivalent of three or four racks,” says Nik Simpson, research director for data center strategies at Gartner. “And you need to be able to do that without paying excessively for the privilege.”
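A quick back-of-the-envelope conversion shows why. Nearly all of the electrical power a rack draws ends up as heat, and the standard conversions (1 watt ≈ 3.412 BTU per hour; 12,000 BTU per hour per ton of cooling) translate rack density directly into cooling load. The sketch below uses an illustrative 20kW rack:

```python
# Rough cooling-load estimate for a densely packed rack. Nearly all of
# the electrical power servers draw is dissipated as heat.

RACK_POWER_W = 20_000        # illustrative 20kW blade rack
BTU_PER_WATT_HR = 3.412      # standard conversion: 1 watt = 3.412 BTU/hr
BTU_PER_TON = 12_000         # 1 ton of cooling = 12,000 BTU/hr

heat_btu_hr = RACK_POWER_W * BTU_PER_WATT_HR
cooling_tons = heat_btu_hr / BTU_PER_TON

print(f"Heat output: {heat_btu_hr:,.0f} BTU/hr (~{cooling_tons:.1f} tons of cooling)")
# A 20kW rack produces ~68,240 BTU/hr, roughly 5.7 tons of cooling --
# several times the load of a legacy 3 to 6kW rack.
```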
To balance power needs and protect against budget overruns, it makes sense to look beyond what the monthly electricity bill conveys about consumption habits. Managers can now take a closer look at power at multiple points within the data center.
Many centralized uninterruptible power supplies (UPSs) include metering systems that track month-to-month consumption. And gauges mounted on power distribution units (PDUs) can measure the power draws of the branch circuits feeding individual server racks.
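As a rough illustration of what branch-circuit monitoring looks like in practice, the sketch below polls a PDU's power-draw reading over SNMP, the protocol most rack PDUs expose for exactly this purpose. It assumes the pysnmp library, and the hostname, community string and OID are placeholders; each PDU vendor publishes its own MIB, so the real object identifier must come from the unit's documentation.

```python
# Poll one branch-circuit power reading from a rack PDU over SNMP.
# Hostname, community string and OID are placeholders; consult the
# PDU vendor's MIB for the real object identifiers.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

PDU_HOST = "pdu-rack12.example.com"        # placeholder hostname
POWER_OID = "1.3.6.1.4.1.99999.1.2.1.0"    # placeholder vendor OID

error_indication, error_status, _, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),    # SNMP v2c read community
    UdpTransportTarget((PDU_HOST, 161)),
    ContextData(),
    ObjectType(ObjectIdentity(POWER_OID))))

if error_indication or error_status:
    print(f"SNMP query failed: {error_indication or error_status}")
else:
    for var_bind in var_binds:
        print(" = ".join(x.prettyPrint() for x in var_bind))
```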
Close scrutiny of power has three advantages:
- It gives managers more complete information about their organization’s energy needs, which can help to formulate more accurate budgets. This holds true whether the data center pays for power directly or through charge-backs to the facilities department.
- It provides detailed data that can help managers determine whether individual server racks have adequate power or are close to reaching supply thresholds that risk costly downtime.
- It can identify pockets of “captive power”: excess power allocated to an area that doesn’t need it, even as other parts of the data center are starved for power. Armed with such usage information, administrators can redirect power to give an underserved rack additional capacity without increasing the overall utility bill (a minimal headroom calculation appears after this list).
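Here is a minimal sketch of how that rack-level data might be turned into action. The allocations, readings and thresholds are illustrative placeholders; in practice the measurements would come from the PDU branch-circuit meters described above:

```python
# Flag racks near their supply threshold and racks holding "captive
# power." Allocations and readings (in kW) are illustrative placeholders.

racks = {                        # rack name: (allocated_kw, measured_kw)
    "rack-01": (20.0, 18.9),     # near its threshold
    "rack-02": (20.0, 6.2),      # holding captive power
    "rack-03": (15.0, 12.4),     # healthy
}

WARN_UTILIZATION = 0.90          # flag racks drawing >90% of allocation
CAPTIVE_UTILIZATION = 0.40       # flag racks drawing <40% of allocation

for name, (allocated, measured) in racks.items():
    utilization = measured / allocated
    if utilization > WARN_UTILIZATION:
        print(f"{name}: {utilization:.0%} of allocation -- at risk of downtime")
    elif utilization < CAPTIVE_UTILIZATION:
        print(f"{name}: {allocated - measured:.1f}kW of captive power to reallocate")
```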
Hardware Innovations in Power Management
In addition to traditional price-performance considerations, IT managers should look for servers that support lights-out management (LOM). LOM facilities rely on hardware with embedded tools for monitoring and controlling units while they’re running, or even after they’ve been powered down during off-peak hours.
One widely used standard for LOM is the Intelligent Platform Management Interface (IPMI). IT managers can use it for a variety of assessments, such as how hot a server is running and how much power it’s drawing.
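For a sense of how that works, the sketch below shells out to the open-source ipmitool utility, a common way to query IPMI sensors from a management station. The BMC hostname and credentials are placeholders, and exact sensor names vary by server vendor:

```python
import subprocess

# Query a server's IPMI sensors (temperatures, voltages, fan speeds) by
# shelling out to the open-source ipmitool utility. The BMC hostname and
# credentials are placeholders for illustration.

def read_sensors(host: str, user: str, password: str) -> str:
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host,
         "-U", user, "-P", password, "sensor", "list"],
        capture_output=True, text=True, check=True)
    return result.stdout

sensors = read_sensors("bmc-rack12.example.com", "admin", "secret")
for line in sensors.splitlines():
    # Keep only rows that report temperatures or voltages.
    if "degrees C" in line or "Volts" in line:
        print(line)
```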
Some server manufacturers offer centralized power management utilities. HP’s Integrated Lights-Out Power Management Pack enables organizations to not only quantify server power usage, but to also throttle power resources based on real-time readings of server workloads.
Similarly, IBM Tivoli Endpoint Manager for Power Management software provides a central console for managing the power settings of all connected systems running Microsoft Windows and Mac OS. The utility’s dashboard shows key metrics, such as power consumption and carbon dioxide output, and can help administrators analyze the ongoing power costs of a unit.
Other hardware has also received a power-management makeover in the data center. For example, sophisticated online UPSs now offer double-conversion technology to continuously convert incoming AC power into filtered DC power, and then turn it back into AC power, notes David Slotten, vice president for product management at Tripp Lite.
The constant filtering process offers the highest protection against power problems for expensive IT equipment. The trade-off: this level of diligence itself draws power constantly. One answer is to run UPS devices in a more efficient mode that skips the constant conversion of incoming AC, while keeping them ready to switch into double-conversion conditioning when power quality degrades.
“When you sense degradation in power quality, you go into the less-efficient quality-management regime,” Slotten says. “You can save 5 to 10 percent just by operating the UPS in its most economic mode.”
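The savings Slotten describes are straightforward to estimate. The sketch below compares annual input-power costs in the two modes; the load, electricity rate and efficiency figures are illustrative assumptions, not vendor specifications:

```python
# Estimate annual savings from running a UPS in its economic mode.
# Load, electricity rate and efficiency figures are illustrative
# assumptions, not vendor specifications.

IT_LOAD_KW = 40.0
RATE_PER_KWH = 0.10            # dollars per kilowatt-hour
HOURS_PER_YEAR = 8760

DOUBLE_CONVERSION_EFF = 0.90   # continuous AC -> DC -> AC conversion
ECONOMY_EFF = 0.98             # conversion engaged only on demand

def annual_cost(efficiency: float) -> float:
    input_kw = IT_LOAD_KW / efficiency   # UPS losses raise input power
    return input_kw * HOURS_PER_YEAR * RATE_PER_KWH

savings = annual_cost(DOUBLE_CONVERSION_EFF) - annual_cost(ECONOMY_EFF)
print(f"Estimated annual savings: ${savings:,.0f}")
# With these figures the UPS's input power drops about 8 percent,
# consistent with the 5 to 10 percent range Slotten cites.
```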
Data center managers have another option for getting the right amount of power to each server rack. Modular busway power distribution systems include input connectors for taking in power from UPS or PDU equipment. A series of bus plugs then distributes power throughout the rack.
These systems reduce the cabling required to bring power to rack units, which makes it easier for IT managers to add new equipment without the help of electricians. Busway power distribution can cut installation time and costs by up to 30 percent compared with traditional cable and conduit solutions, according to Emerson Network Power, a manufacturer of power and cooling equipment.
A Cure for Hot Spots
Cooling is the other side of the power management coin. As hardware in data centers becomes more dense, IT managers are taking a targeted approach to keeping these environments cool to avoid downtime risks from overheating.
“The ability to direct cooling to specific spots is becoming more important,” says Gary Anderson, AC power business development manager at Emerson Network Power.
Spot cooling, with the help of a new generation of precision temperature control systems, augments traditional strategies for keeping ambient temperatures within a specified range. The new options combine fans and coils of piping that contain refrigerant and bring cool air as close as possible to heat sources.
IT managers can mount them wherever they’re needed — on the top, sides or back of a server rack, or above a row of servers. Modularity is another selling point. Organizations can quickly reconfigure the refrigerant piping to accommodate new equipment or redirect cooling to racks that experience a spike in heat output from heavy usage.
Targeted cooling also relieves some of the energy demands associated with large under-floor fans that distribute cool air throughout data centers.
“Those fans use a lot of power,” Anderson explains. “With the new systems, data centers can put a small fan directly above the racks so they don’t have to push air as far, thereby requiring less energy. They also can conserve energy and reduce wear and tear on the fans by using variable speed drives that adjust fan speed to meet changing cooling needs.”
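The energy case for variable speed drives comes from the fan affinity laws, under which fan power scales with roughly the cube of fan speed. A minimal sketch with an illustrative baseline fan:

```python
# Fan affinity law: fan power varies with roughly the cube of fan speed,
# so modest speed reductions produce outsized energy savings.

BASELINE_POWER_KW = 10.0       # illustrative fan power at full speed

for speed in (1.0, 0.9, 0.8, 0.7):
    power = BASELINE_POWER_KW * speed ** 3
    print(f"{speed:.0%} speed -> {power:.1f}kW "
          f"({power / BASELINE_POWER_KW:.0%} of full-speed power)")
# At 70% speed the fan draws only ~34% of its full-speed power, which
# is why variable speed drives pay off as cooling demand fluctuates.
```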
Modular cooling systems are especially effective when they’re paired with cold-aisle containment strategies. These systems enclose a row of server racks, sealing in cold air and keeping it where the potential for hot spots is highest rather than letting the conditioned air float off into the room at large.
“Whenever an organization installs a cooling system and cold-air containment, we see the total power draw associated with cooling go down significantly,” Anderson says.
Understanding the close relationship between power monitoring, power distribution and cooling technologies, some vendors offer packaged solutions that combine the necessary components for all three areas.
Emerson Network Power’s SmartRow solution, for example, integrates up to six data center racks with precision cooling; UPS devices; and power management, monitoring and control technologies, all within one enclosure. SmartAisle is a larger integrated system for up to 40 server racks.
“They’re pre-engineered to include power, cooling and monitoring all in one place and work together,” Anderson says.
IT managers are also relying more on free cooling, pumping in cool outside air. In some seasons, this can significantly reduce or eliminate the need for mechanical air conditioning.
Continuity Considerations
The new emphasis on detailed monitoring practices has ripple effects for continuity of operations planning. After all, even minor power problems can damage sensitive servers and systems, leading to costly downtime and equipment repairs.
Even riskier are brownouts and blackouts, which can bring an entire data center down for hours or days.
A first line of defense should be the power-filtering and battery-backup resources available from UPS units. Tools for detailed power monitoring can also map consumption and availability trends over time to help IT managers gauge risks and negotiate contingency plans with power companies.
But it’s also wise to plan for worst-case scenarios. “You’ve got to think about what happens if you’re suddenly not connected to the grid anymore,” Simpson advises. “If it’s a 20-minute outage, you’re fine with your UPS gear. But if you’re going through a multiple-day outage, you’ve got to have backup generators.”
That means upfront capacity planning for generators with enough output to keep the data center running. “It’s not just the servers, it’s the cooling systems and the offices for the staff that manage the data center,” he says.
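Simpson's point can be made concrete with a rough sizing pass. The sketch below is illustrative only; the IT load, cooling overhead factor, support-space load and safety margin are all assumptions that a real design would replace with measured figures:

```python
# Rough generator capacity check: the generator must carry more than
# just the servers. Every figure here is an illustrative assumption,
# not sizing guidance.

IT_LOAD_KW = 200.0
COOLING_FACTOR = 0.5        # cooling often adds roughly 50% on top of IT load
SUPPORT_SPACES_KW = 25.0    # offices and monitoring space for staff
SAFETY_MARGIN = 1.2         # 20% headroom for startup surges and growth

required_kw = (IT_LOAD_KW * (1 + COOLING_FACTOR) + SUPPORT_SPACES_KW) * SAFETY_MARGIN
print(f"Minimum generator capacity: {required_kw:,.0f}kW")
# A 200kW server load becomes a 390kW generator requirement once
# cooling, support spaces and headroom are counted.
```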
Rather than incurring the expense of this wide-scale approach, some organizations target continuity strategies to specific areas of their data centers. This involves installing redundant power resources for the servers that run core enterprise apps and databases whose disruption would cause significant harm.
IT chiefs could then allocate smaller and more economical generators for less critical systems. They could also identify equipment that can remain idle without causing long-term harm.