In this age of ever-shrinking IT budgets, it has become increasingly difficult for IT managers to obtain approval for new server purchases. Recently, however, chipmakers and server manufacturers have designed high-density servers that consume far less energy than older models. Now the case can be made for modernizing data center hardware by proving that new equipment will significantly reduce energy consumption, thereby offsetting the cost of the purchase.
It isn’t enough to simply explain that a new server is going to use less power. Unless an organization places greater importance on green initiatives than on cost containment, the IT manager will need to quantify power usage and translate the perceived savings into actual dollars.
To calculate potential savings, determine how much power is being used by the server that needs to be replaced. There are three steps to accomplish this.
Counting the Cost
First, figure out how much the organization is paying for electricity. Rather than asking the accounting department for a copy of the electric bill, contact the power company directly and inquire about rates. Keep in mind that electric costs are measured in kilowatt-hours and, in many areas, the price per kilowatt-hour fluctuates based on the time of day. For instance, the electric company may charge more for electricity during business hours.
Second, establish a utility cost baseline for an aging server by figuring out how much electricity the server is actually consuming. Because the power company bills the organization per kilowatt-hour of energy used, these measurements must be taken in kilowatt-hours. Several vendors sell electric meters that can be plugged in between a server and the electrical outlet to monitor power consumption.
Third, factor in the server’s workload. Server workloads tend to fluctuate throughout the day, and periods of peak usage will result in higher power consumption. Because there are likely to be fluctuations both in power consumption and in electric rates, you’ll most likely need to log power consumption on an hourly basis over an extended period of time and then factor in the rates for each time period.
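The three steps above can be sketched as a simple calculation. The sketch below is a minimal, hypothetical example: the rates, the peak-hour window, and the hourly readings are illustrative assumptions, not figures from any real utility or server.

```python
# Hypothetical example: estimate daily and annual energy cost from an
# hourly power-consumption log and time-of-day electric rates.
# All rates and readings below are illustrative assumptions.

PEAK_RATE = 0.14      # dollars per kWh during business hours (assumed)
OFF_PEAK_RATE = 0.09  # dollars per kWh at all other times (assumed)

# One kWh reading per hour of the day (hour 0 through hour 23), as a
# metered outlet on the old server might log them: lighter draw
# overnight, heavier draw during the working day.
hourly_kwh = [0.35] * 8 + [0.55] * 12 + [0.35] * 4

def daily_cost(readings):
    """Sum the cost of each hourly reading at the applicable rate."""
    total = 0.0
    for hour, kwh in enumerate(readings):
        # Assume the utility's peak window runs from 8:00 to 20:00.
        rate = PEAK_RATE if 8 <= hour < 20 else OFF_PEAK_RATE
        total += kwh * rate
    return total

cost = daily_cost(hourly_kwh)
print(f"Estimated daily cost:  ${cost:.2f}")
print(f"Estimated annual cost: ${cost * 365:.2f}")
```

In practice you would feed in the readings logged by the metering hardware over an extended period rather than a single representative day, but the arithmetic is the same: each hour's consumption multiplied by the rate in effect for that hour.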
Once you’ve established a baseline for your current servers, determine how the new server will compare and then extrapolate the cost savings. The hardware manufacturer should be able to provide power consumption estimates. Make sure that the estimates are based on a server that’s configured in a similar manner to the server that you intend to order, as opposed to a stripped-down model. You’ll also need to know if the vendor’s measurements were taken while the server was running a heavy workload or while it was idle.
Finally, determine how much of a load you expect to place on the new server. By estimating your anticipated workload, you can gain a better sense of how much power the new server will consume. For example, say that your current server runs at a CPU load of 80 percent during periods of peak usage. Now suppose that the new server is not only more energy-efficient, but also more powerful, and that a peak workload consumes only 30 percent of the available CPU resources. That means the energy savings would actually be greater than if the new server’s utilization levels were identical to those of the old server.
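One way to put these pieces together is to weight the vendor's idle and peak figures by the utilization you expect, then compare annual costs. The sketch below uses a deliberately simplified linear power model and invented numbers; every figure in it is an assumption for illustration, not vendor data.

```python
# Hypothetical example: extrapolate annual savings from a measured
# baseline and a vendor's power estimates for a new server.
# All figures below are illustrative assumptions.

RATE = 0.12               # average dollars per kWh (assumed)
HOURS_PER_YEAR = 24 * 365

old_avg_kw = 0.45         # measured average draw of the old server (assumed)
new_peak_kw = 0.30        # vendor estimate at heavy load (assumed)
new_idle_kw = 0.15        # vendor estimate at idle (assumed)

# If the old server peaks at 80% CPU but the same workload uses only
# 30% of the new server's CPU, the new server spends more time near
# idle. A simple linear model interpolates between the idle and peak
# figures by expected utilization.
expected_utilization = 0.30
new_avg_kw = new_idle_kw + (new_peak_kw - new_idle_kw) * expected_utilization

annual_old = old_avg_kw * HOURS_PER_YEAR * RATE
annual_new = new_avg_kw * HOURS_PER_YEAR * RATE
print(f"Old server:        ${annual_old:.2f}/year")
print(f"New server:        ${annual_new:.2f}/year")
print(f"Estimated savings: ${annual_old - annual_new:.2f}/year")
```

Real servers do not draw power in a perfectly linear relationship to CPU load, so treat a model like this as a first approximation to be refined against measurements once the new hardware is in place.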
There are a number of ways to estimate the power usage for new hardware. For instance, Microsoft Research’s Joulemeter project seeks to perform power modeling in a way that can accurately estimate power consumption per virtual machine.
Whatever method you use to estimate power consumption, it will require a reliable baseline measurement of your current consumption, and you’ll need to know the cost per kilowatt-hour of electricity. Only then can you estimate the cost savings with any accuracy.