The term “software-defined storage,” like most new buzzwords, carries many meanings, generates loads of media attention and creates even more market confusion. Part of the same family as server virtualization, orchestration and software-defined networking, SDS is ultimately about simpler storage management in highly virtualized data centers.
As organizations of all sizes accumulate storage arrays, it’s no longer possible to talk about “the SAN” because no one has just one storage area network anymore. Even in smaller data centers, it’s common to find two or three generations of SAN technology, with multiple devices in each generation. This creates a management nightmare and impedes application deployment and migration.
The aim of SDS is to solve these challenges by focusing on transforming diverse storage elements into a cohesive whole, all with the help of software. In some cases, this means simply pulling together traditional stand-alone storage components into a single SAN. At the other end of the spectrum, it means placing a whole virtualization layer on top of existing SANs. No matter how you define SDS, it generally results in one unified management interface versus many different ones.
Fact or Fallacy: SDS is the same as storage virtualization.
Fallacy: SDS is related, but takes virtualization one step further.
The original idea behind the storage area network was storage virtualization, providing management of a pool of disks, redundancy and access to chunks of it for an organization’s virtual (and physical) servers. The problem is that SANs have traditionally been tightly bound to their own hardware: redundant controllers, shelves of drives and whatever features (such as replication or deduplication) the vendor could pack in. Upgrades can be expensive — or impossible — and when new capabilities are needed, the options to add them might not be available at all.
With SDS, storage vendors offer an additional management layer on top of the storage architecture, which provides a set of upgradeable services and makes use of whatever SANs are available, not necessarily ones that run on the same platform or are made by the same manufacturer. By pooling storage and virtualizing services, SDS promises to reduce some of the chaos of the ever-expanding SAN footprint in data centers.
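The pooling idea can be pictured with a small sketch. All class names here (`StorageArray`, `SdsController`, the vendor wrappers) are hypothetical and stand in for whatever adapter layer a real SDS product uses; the point is only that dissimilar arrays sit behind one common interface and one management point.

```python
from abc import ABC, abstractmethod

class StorageArray(ABC):
    """Common interface wrapped around any backend array (hypothetical)."""
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    @abstractmethod
    def provision(self, size_gb):
        """Carve out a volume; each vendor does this its own way."""

class VendorASan(StorageArray):
    def provision(self, size_gb):
        self.used_gb += size_gb
        return f"{self.name}:lun-{self.used_gb}"   # vendor A names LUNs

class VendorBSan(StorageArray):
    def provision(self, size_gb):
        self.used_gb += size_gb
        return f"{self.name}:vol-{self.used_gb}"   # vendor B names volumes

class SdsController:
    """Single management point over many heterogeneous arrays."""
    def __init__(self, arrays):
        self.arrays = arrays

    def total_free_gb(self):
        # The admin sees one pool, not per-array silos.
        return sum(a.capacity_gb - a.used_gb for a in self.arrays)

    def provision(self, size_gb):
        # Simple placement policy: pick the array with the most free space.
        target = max(self.arrays, key=lambda a: a.capacity_gb - a.used_gb)
        if target.capacity_gb - target.used_gb < size_gb:
            raise RuntimeError("pool exhausted")
        return target.provision(size_gb)
```

An administrator provisioning through `SdsController` never needs to know, or care, which vendor's array ends up holding the volume.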
Fact or Fallacy: SDS is only useful in large data centers.
Fact: The bigger the data center, the more interesting SDS is.
Small organizations with two or three SANs won’t get much out of SDS. The technology’s benefits begin with cross-SAN capabilities that might not be present in a single device or a small number of devices. For example, vendors discourage replication between dissimilar storage platforms, citing concerns about performance, reliability and scalability. SDS can sidestep those limitations and allow replication between previously incompatible storage devices, a cost-effective option when, say, a data center manager doesn’t have the budget for two matched arrays or wants to replicate to a cloud-based data storage service.
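Why this works is easy to see in a toy sketch: because the SDS layer sits above both backends, the arrays never need to speak each other’s replication protocol. The `Backend` and `ReplicatedVolume` classes below are illustrative assumptions, not any vendor’s API.

```python
class Backend:
    """Any block target: an on-premises array or a cloud store (hypothetical)."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data

class ReplicatedVolume:
    """Mirror every write to a primary and a replica backend,
    even if the two are from different vendors."""
    def __init__(self, primary, replica):
        self.primary = primary
        self.replica = replica

    def write(self, lba, data):
        self.primary.write(lba, data)
        self.replica.write(lba, data)   # synchronous mirror, for simplicity

    def read(self, lba):
        return self.primary.blocks.get(lba)
```

A real product would replicate asynchronously with journaling and consistency groups, but the placement of the logic, in software above the arrays rather than inside them, is the same.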
Storage tiering is another capability that becomes easier with SDS, because a centralized management layer can both see the application and access a wide variety of storage subsystems. SDS can also bring capabilities that might not be present in an older or low-end SAN, such as snapshots, thin provisioning or protocol translation between SAN and network-attached storage (NAS).
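Policy-based tiering can be sketched in a few lines. The tier names, capacities and IOPS figures below are made-up assumptions for illustration; the idea is that the SDS layer, seeing every tier at once, places each volume on the cheapest tier that still meets the workload’s performance requirement.

```python
# Hypothetical tier catalog, ordered fastest (most expensive) first.
TIERS = [
    {"name": "flash", "max_iops": 100_000, "free_gb": 500},
    {"name": "sas",   "max_iops": 20_000,  "free_gb": 5_000},
    {"name": "cloud", "max_iops": 1_000,   "free_gb": 50_000},
]

def place_volume(size_gb, required_iops):
    """Pick the cheapest tier that meets the performance
    requirement and still has room for the volume."""
    for tier in reversed(TIERS):  # walk from cheapest to fastest
        if tier["max_iops"] >= required_iops and tier["free_gb"] >= size_gb:
            tier["free_gb"] -= size_gb
            return tier["name"]
    raise RuntimeError("no tier satisfies the request")
```

A low-IOPS archive volume lands on the cloud tier, while a demanding database volume is forced up to flash, with no per-array configuration by the administrator.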
Fact or Fallacy: SDS is just a startup thing.
Fallacy: Everyone, even the old guard, is getting in on the act.
SDS isn’t just a passing fad. Top-tier SAN hardware vendors are all investing heavily in the technology. For example, EMC is entering the SDS world by leveraging its considerable storage expertise: Its ViPR product builds on both EMC and non-EMC SANs (as well as other storage devices) to present a sophisticated virtual service and unified management interface.