Storage virtualization helps cities cope with spiraling demands.
Andy Lefgren relies on virtualization to better manage the city of Ogden’s storage systems.
Diamonds are a lot like storage compliance requirements — both last forever. But while diamonds are valuable because of their scarcity, storage needs proliferate.
“We started out with 4 terabytes on our storage area network seven years ago and are now running 48 terabytes,” says Andy Lefgren, system administrator for the city of Ogden, Utah.
To manage this quantity of data, Ogden, like a growing number of public organizations, has adopted storage virtualization. The technology unites multiple storage devices into what appears to be a single storage pool. By abstracting the many physical devices into one logical layer, virtualization makes it easier to centrally manage and back up data.
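The pooling idea can be sketched in a few lines. This is a conceptual model only, not any vendor’s code; the device names and capacities are illustrative.

```python
# Conceptual sketch: a virtualization layer presents several physical
# devices as one logical pool that administrators manage centrally.

class StoragePool:
    def __init__(self):
        self.devices = {}              # physical devices backing the pool

    def add_device(self, name, capacity_gb):
        self.devices[name] = capacity_gb

    @property
    def capacity_gb(self):
        # Administrators see one aggregate pool, not individual arrays.
        return sum(self.devices.values())

pool = StoragePool()
pool.add_device("array-1", 4000)       # e.g., an original 4 TB SAN
pool.add_device("array-2", 8000)       # capacity added later
print(pool.capacity_gb)                # one logical pool: 12000 GB
```

The key point is the single logical layer: tools that back up or provision storage talk to the pool, never to individual devices.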
Ogden uses LeftHand Networks’ NSM 4150 iSCSI SAN hardware, which fits 11TB of storage into a 4U rackmount box. “We were impressed with LeftHand’s iSCSI technology,” says Lefgren. “Since iSCSI uses the TCP/IP protocol, we only have to put in another network interface card to have another 1 gigabit per second or more throughput for our servers.”
The NSM 4150 also comes with LeftHand’s SAN/iQ software to manage storage and provide snapshots and remote copying. “By virtualizing storage, we can carry out a snapshot on the block level to recover files faster and conduct backups that use less throughput on the network,” Lefgren says. “You can also take virtualized storage, copy it over and have yourself a test environment within minutes.”
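Block-level snapshots of the kind Lefgren describes are typically copy-on-write: the snapshot initially shares every block with the live volume, so creating it is nearly instant, and only blocks changed afterward diverge. A hedged sketch of the idea, with made-up block contents:

```python
# Conceptual copy-on-write snapshot at the block level (illustrative,
# not SAN/iQ's actual implementation).

class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)        # block number -> data

    def snapshot(self):
        # Share block references instead of copying the data itself.
        snap = Volume(self.blocks)        # shallow copy of the block map
        return snap

    def write(self, block_no, data):
        # The live volume diverges; the snapshot keeps the old block.
        self.blocks[block_no] = data

live = Volume({0: b"boot", 1: b"data-v1"})
snap = live.snapshot()                    # instant: no data moved
live.write(1, b"data-v2")
print(snap.blocks[1], live.blocks[1])     # b'data-v1' b'data-v2'
```

Because the snapshot is a consistent, read-only view, it can be mounted on a backup server so backup traffic never competes with production I/O.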
Nearly half of storage production environments at Fortune 1000 companies were targeted to be virtualized by 2009, according to research firm TheInfoPro.
The concept of storage virtualization is far from new. “That is what a logical volume manager has done for all OSes for 20 years or more: virtualized a physical disk into a logical disk,” says Richard Jones, vice president and service director for data center strategies at the Burton Group.
What’s new and driving interest, he says, are all of the storage management features that are being virtualized in arrays and appliances.
Deduplication is of particular interest to the Fulton County (Ga.) IT department. The county uses a three-tier virtual storage system based on EMC Symmetrix (Tier 1), EMC CLARiiON (Tier 2) and EMC Centera (Tier 3) for long-term archiving, including 911 call recordings.
“We are proud of our storage, but the fact is we can’t buy enough storage to keep up with the explosive growth,” says Jay Terrell, chief technology officer of Fulton County. “We have to acquire tools that will help us manage and control that growth.”
Another tool that harnesses the county’s virtualized storage tiers is Symantec Enterprise Vault, an intelligent archiving platform. It uses classification engines to help organizations manage and discover information in e-mail systems, file servers, instant messaging, content management and collaboration systems.
“We already do some deduplication with our e-mail archiving,” says Terrell. “I may send an e-mail with a big, fat attachment to 15 of my colleagues, but they all have pointers to that attachment in their archived e-mail; only one copy is actually archived.”
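The single-instance storage Terrell describes can be modeled by keying the archive on a content hash: the attachment is stored once, and each recipient’s message holds only a pointer. This is an illustrative sketch, not Enterprise Vault’s actual implementation.

```python
# Hedged sketch of attachment deduplication via content hashing.
import hashlib

store = {}       # content hash -> attachment bytes (one copy each)
mailboxes = {}   # recipient -> list of pointers (hashes)

def archive(recipient, attachment):
    digest = hashlib.sha256(attachment).hexdigest()
    store.setdefault(digest, attachment)       # stored at most once
    mailboxes.setdefault(recipient, []).append(digest)

big_report = b"quarterly budget spreadsheet" * 1000
for colleague in ("ann", "bob", "carol"):
    archive(colleague, big_report)

print(len(store), len(mailboxes))              # 1 copy, 3 mailboxes
```

Fifteen recipients or three, the archive grows by one copy; only the pointers multiply.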
Beyond this added functionality, scalability is another popular feature driving the adoption of virtualization. Chris Beck, network administrator for the city of Fontana, Calif., says that the technology’s lower management overhead, increased efficiency and improved performance over traditional RAID arrays led his city to replace its direct attached storage with an HP StorageWorks Enterprise Virtual Array (EVA) that uses block-based virtualization.
“Our biggest issue was that we didn’t want to over- or under-allocate storage,” says Beck. “Virtualized storage made it easy for us to start minimal and increase easily as needs increased.” EVA has met his expectations. When additional storage is needed, it can be added or reallocated on the fly, without downtime or interruption.
“With the EVA’s virtualization, we simply add disks to the array and we have more available storage,” he says. “We don’t have to create new arrays or rely on software applications to group storage into logical units.”
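The “start minimal and grow” behavior Beck values is what thin provisioning delivers: logical volumes can promise more space than physically exists, and physical blocks are consumed only as data is written. A hedged sketch, with illustrative numbers:

```python
# Conceptual thin-provisioning model (illustrative, not EVA's code).

class ThinPool:
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.written_gb = 0
        self.volumes = {}                  # name -> promised logical size

    def create_volume(self, name, logical_gb):
        # Over-commit is allowed: promises may exceed physical capacity.
        self.volumes[name] = logical_gb

    def write(self, name, gb):
        if self.written_gb + gb > self.physical_gb:
            raise RuntimeError("pool exhausted: add physical disks")
        self.written_gb += gb

tp = ThinPool(physical_gb=1000)
tp.create_volume("finance", 800)
tp.create_volume("gis", 800)               # 1600 GB promised on 1000 GB
tp.write("finance", 200)
print(tp.written_gb)                       # only 200 GB actually consumed
```

Applications see the full promised size up front, while the city buys physical disks only as real usage approaches capacity.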
Features such as thin provisioning, data deduplication, continuous data protection, snapshot and replication technologies and data tiering all fall under storage virtualization.
Perhaps the biggest decision that buyers need to make in selecting a storage virtualization device is whether to go with file-level or block-level virtualization. Simply stated, file virtualization deals with network-attached storage (NAS) and file servers, and is built around virtualization of files and file systems; block-level virtualization is focused on SANs and virtualizes blocks of data as opposed to individual files.
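The distinction can be made concrete with a toy model: a block device is just a flat array of numbered, fixed-size blocks, while a file system is a layer that maps paths to sequences of blocks. Everything here is illustrative.

```python
# Illustrative contrast between block-level and file-level access.
BLOCK_SIZE = 4

# Block view: numbered blocks, no names, no hierarchy, no semantics.
device = bytearray(b"hello world!....")

def read_block(n):
    return bytes(device[n * BLOCK_SIZE:(n + 1) * BLOCK_SIZE])

# File view: a file system maps a path to its blocks.
file_table = {"/greeting.txt": [0, 1, 2]}   # path -> block numbers

def read_file(path):
    return b"".join(read_block(n) for n in file_table[path])

print(read_block(0))                 # b'hell' -- raw block, no meaning
print(read_file("/greeting.txt"))    # b'hello world!'
```

Block-level virtualization operates below the file-table layer, which is why any file system (or a raw database) can sit on top of it.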
Jones reports that the trend is toward block-level virtualization, which lets administrators build any type of file system on top of it. “The biggest advantage, though, is that you have a reduction in the actual overhead,” he says. “File-based protocols are semantically rich, which means you have a lot more overhead.”
Notably, both Fontana and Ogden use block-level virtualization. Lefgren explains that before virtualization, he was forced to back up each system separately. Now, with one virtual pool, backups are much easier.
“Now we can do a snapshot at the block level, and backups demand less network throughput than before,” Lefgren says. “We take a snapshot and then mount that snapshot to our backup server so there is no bottleneck.”
Jones says that block-level virtualization is particularly useful for database applications, which work best when they have unencumbered access to the raw disk. File-level virtualization and NAS, though, may work better for other types of applications.
“Web work favored NAS deployments initially because it enabled you to coordinate access to files as you scaled out a web service by adding more servers,” says Jones. “With block-based storage, you still need a file system in there that will allow that coordination.”