
Apr 08 2016

When Is Hyperconvergence Right for Your IT?

Learn how to determine what type of environments are best suited to hyperconvergence.

IT managers who are considering refreshing their data center hardware have certainly heard the pitch for hyperconvergence, which tightly couples compute, storage, networking and virtualization into a single appliance built from standard servers. The technology promises to be cheaper to buy, faster to deploy and easier to manage.

With hyperconverged systems, state and local governments no longer need to buy servers, storage and networking as separate pieces of equipment. The reality, of course, is a little different. But organizations can still learn about and benefit from the world of hyperconvergence.

The Underlying Business Need

To understand the technology, start with the problem that needs to be solved: properly engineering and managing platforms to support virtual systems. Most data centers house a hodgepodge of solutions from different manufacturers. It’s common to find a storage-area network (SAN), networking hardware, servers and a hypervisor, all from different vendors. Each product has its own characteristics and management, and if they are integrated, that’s just because someone worked hard to make it all go smoothly. That’s the way IT departments have always done things, and that’s the natural evolution for most data centers.

Hyperconvergence dictates that organizations start with a pre-engineered system, usually from a single vendor, that combines compute power, storage and networking. This provides a single management system for everything and a single manufacturer to rely upon.

Hyperconvergence then offers the promise of scaling virtualization simply by adding inexpensive modular nodes built on commodity x86 hardware. Instead of worrying about how to expand a SAN or add more ports to the network, IT managers simply order another standardized node when capacity runs short, and spend their time providing a solid application platform.

According to Gartner, the midmarket sweet spot for hyperconvergence is 80 to 120 servers, 30 to 50 terabytes of storage and 80 percent to 90 percent virtualization. IT managers who are considering implementing hyperconvergence in the data center should consider the following questions to determine if the technology is appropriate for their environments.
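
Those figures can double as a quick self-check. The short Python sketch below simply encodes the Gartner ranges quoted above; the sample environment it tests is hypothetical, and a real sizing decision would weigh far more than three numbers.

# Rough self-check against the midmarket sweet spot cited above.
# Thresholds are the Gartner figures quoted in this article; the
# sample environment passed in at the bottom is hypothetical.

def fits_sweet_spot(servers, storage_tb, virtualization_pct):
    in_server_range = 80 <= servers <= 120
    in_storage_range = 30 <= storage_tb <= 50
    in_virtualization_range = 80 <= virtualization_pct <= 90
    return in_server_range and in_storage_range and in_virtualization_range

# Hypothetical agency: 95 servers, 42 TB of storage, 85 percent virtualized.
print(fits_sweet_spot(servers=95, storage_tb=42, virtualization_pct=85))  # True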

Understanding the Level of Risk

The players that pioneered hyperconvergence are mostly small startups, such as Nimble Storage, Nutanix and Scale Computing. As with any new market and new approach, some vendors may not make it. IT managers who invest in a hyperconverged infrastructure should be comfortable with the idea that they may need to switch manufacturers the next time they want to grow.

To be sure, not all hyperconvergence comes from small vendors: VMware’s EVO:RAIL software runs on hardware from partners such as Dell, Fujitsu and Supermicro, which have modified their standard servers to meet the EVO:RAIL specifications.

Those who have a low appetite for risk in server vendors may want to avoid hyperconvergence. However, an interim step, called simply “convergence,” offers many of the advantages of hyperconvergence (such as tightly controlled engineering for virtual workloads) through partnerships in which multiple vendors jointly deliver and support a single solution. It’s not as clean a package as buyers get with hyperconvergence, but a converged architecture avoids the silos of hardware and software that most data centers are built on. If convergence appeals to your organization, consider Hewlett Packard Enterprise or VCE (a Cisco/EMC/VMware partnership) for potential solutions.

The Right Workload Types for Hyperconvergence

Hyperconverged systems are aimed at the middle of the market, where virtual machine counts and storage requirements are fairly mainstream. That’s the sweet spot, where most organizations fit. But those with unusual workloads, such as petabytes of data or extremely large CPU or memory requirements, may find that hyperconverged products aren’t a good fit.

Because hyperconverged systems are based on commodity hardware, they’re going to look a lot like high-end x86 servers with local storage. If a workload doesn’t fit into these standardized building blocks, the equipment investment will be unbalanced, and IT managers will have spent money on resources they can’t use.
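
A back-of-the-envelope calculation shows why. In the Python sketch below, both the node capacities and the workload figures are hypothetical; the point is only that when one resource (here, storage) drives the node count far past the others, the CPU and memory in those extra nodes are paid for but never used.

import math

# Hypothetical capacity of a single commodity hyperconverged node.
NODE = {"cpu_cores": 32, "memory_gb": 512, "storage_tb": 20}

# Hypothetical storage-heavy workload.
WORKLOAD = {"cpu_cores": 96, "memory_gb": 1536, "storage_tb": 400}

# Nodes required if each resource were sized on its own.
nodes_needed = {
    resource: math.ceil(WORKLOAD[resource] / NODE[resource])
    for resource in NODE
}
print(nodes_needed)  # {'cpu_cores': 3, 'memory_gb': 3, 'storage_tb': 20}

# The cluster must meet the largest requirement, so storage alone forces
# 20 nodes, leaving roughly 17 nodes' worth of CPU and memory idle.
print(max(nodes_needed.values()))  # 20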

Hyperconvergence Helps Lighten Management Burdens

The argument for hyperconvergence is similar to the argument for outsourcing: deploying virtualized infrastructure in the data center is no longer an art, and there’s little added value in organizations doing this by themselves.

The promise of hyperconvergence is that IT managers don’t need four teams (storage, server, hypervisor and network) just to keep the platform running. Instead, they have one smaller team that manages the hyperconverged infrastructure. Taking this approach enables state and local governments to save money by reducing headcount or shifting the focus to making better and more reliable applications.

While IT managers will pay for the extra software layer provided by hyperconvergence vendors, they’ll still save money because the products are based on inexpensive equipment. The hyperconvergence claim is that users will actually spend less on hardware, software and people — and have a better solution in the long run.
