Apr 15 2010

High-Availability Benefits from Virtualization

State and local governments combine virtualization with fault-tolerant hardware for ultimate resiliency -- and to ease server management.

It's been said that some organizations are too big to fail. The same is true of your critical applications and data. A server crash, a power failure, even user error can cause your systems to become unavailable precisely when you need them most. That's why more state and local organizations are turning to virtualization to ensure high availability of their most critical IT assets.

"The benefit to using virtualization for high availability is that it's much simpler for IT managers," says Dan Kusnetzky, vice president of research operations for the 451 Group. "You don't have to change applications manually if they're running inside encapsulated servers or clients using motion technology. Virtualization offers simplicity, in that you have multiple machines running on a single server and the workload can move back and forth as needed."

Of course, high availability means different things to different people. For some, it's having a virtualized system where, if a critical app or even an entire server fails, a new virtual machine automatically takes over within minutes or possibly seconds. For others, it's using fault-tolerant servers that provide full hardware redundancy, allowing for real-time replication of processing and data storage and assuring uptime that approaches 99.999 percent. 
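
Those percentages translate directly into downtime budgets. A quick back-of-the-envelope calculation (a Python sketch, for illustration only) shows how little slack each additional "nine" leaves:

```python
# Downtime allowed per year at each availability level ("nines").
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for label, availability in [("99.9%   (three nines)", 0.999),
                            ("99.99%  (four nines) ", 0.9999),
                            ("99.999% (five nines) ", 0.99999)]:
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label}: {downtime:7.1f} minutes of downtime per year")
```

Five nines leaves roughly five minutes of downtime for an entire year -- about 26 seconds a month -- which is why the speed of failover matters as much as whether it happens at all.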

Keep It Moving

For employees of Steamboat Springs, Colo., the most critical application is e-mail, says Vince O'Connor, the city's information systems engineer. He experienced the value of a highly available virtual system when the physical machine hosting his city's Microsoft Exchange Server failed a few months ago. The data center's Citrix Systems XenServer environment and its storage area network automatically failed over to a new virtual host without losing a single e-mail.

O'Connor says he realized the physical host had gone offline only when he went to take the server down to apply an update and discovered it was no longer running.

"We didn't miss a heartbeat," he says. "Our system and performance monitors didn't even send me an alert when it failed. This is where using a virtualized environment can really save your bacon."

For the Cunningham Fire Protection District outside of Denver, every app is considered critical, says David Markham, division chief of support services.

"We are a 24-by-7-by-365 operation," he says. "Most of the services provided are critical to the daily operation and delivery of emergency services. Everything we do has a link to service delivery, so it's all important."

Like Steamboat Springs, Cunningham uses Citrix XenServer and network-attached storage to continuously replicate applications and data between two data centers, which are linked by fiber-optic cable. Two servers at Cunningham's primary site provide the initial layer of high availability, says Markham.

"The virtual servers will fail from one server to the other automatically," he says. "Should the main site fail, there is duplication of storage at a remote site and two servers there to failover to. The process is not automatic, but takes about 5 minutes to make active."

Virtualization alone, however, won't guarantee continuous operation. The most reliable approach is to create a virtualized environment using fault-tolerant hardware to synchronize data processing across multiple virtual machines.

 "The lowest level of high-availability requirements can be met by virtual machine software combined with motion technology, but the highest levels of availability cannot be achieved by virtualization because the transition time is too long," says Kusnetzky.

For environments that cannot tolerate even a few seconds of downtime a month -- such as electronic funds transfers, where small delays in transaction time can cost millions -- you still need fault-tolerant, continuously available computers.

"Put in boxes designed for continuous availability, have virtualization software running on them," Kusnetzky says, "and you'll never see a failure."

Three Questions to Answer

Which apps require high availability? Enabling an app for high availability typically costs more because of the need for redundant hardware and software. For that reason, an organization must decide which systems really are critical and need to have 24x7 availability.

What's the required uptime? An organization must also decide how much downtime is acceptable. Will going offline a few minutes a month affect your operations? How about a few seconds? Apps that need to run continuously require more planning and an investment in fault-tolerant hardware.

Is there a continuity strategy? Even the best failover strategy will falter if a natural disaster wipes out an organization's regional infrastructure. If you need five-nines uptime, be prepared to replicate critical systems and data at a second location -- ideally, in a different time zone.

Availability by the Numbers

27%  Data centers that have experienced an outage in the past year

56%  Data center managers who identify availability as their No. 1 priority

1st  Rank of human error as the cause of most data center outages

50 minutes  Average enterprise IT downtime per week (see the calculation below)

3.6%  Annual enterprise revenue lost to downtime, on average

Sources: Aperture Research Institute, Emerson Network Power, EMA Research, Infonetics Research
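
For context, that 50-minutes-a-week average works out to roughly 99.5 percent availability -- a one-line check:

```python
# Availability implied by 50 minutes of downtime per week.
minutes_per_week = 7 * 24 * 60                  # 10,080 minutes
downtime_per_week = 50
availability = 1 - downtime_per_week / minutes_per_week
print(f"{availability:.4%}")                    # -> 99.5040%
```

That is about 500 times the downtime a five-nines budget allows -- in other words, even average enterprise downtime is far from what continuously available services require.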
