What differentiates converged and hyper-converged infrastructure? We take a look at several systems in both spaces.
Building a converged infrastructure in a corporate environment means more than just replacing a few network devices. It requires an entirely different way of looking at a company’s network infrastructure and the kinds of IT staff needed to support it.
Traditional IT infrastructures were made up of the proverbial technology silos. They included experts in networking, storage, systems administration, and software. But much of that has changed over the past decade or so as virtualization has become a prominent technology tying networks and servers together.
Today’s virtual environments can be likened to the ubiquitous smartphone. Smartphone users generally don’t concern themselves with issues such as storage or systems management; everything they need is just an app. Similarly, storage management on converged infrastructure systems, such as EMC’s VMAX family, is provisioned essentially as an app as well, says Colin Gallagher, director of product marketing for the VMAX family.
Generally speaking, there are two approaches companies can take to building a converged infrastructure:
- The hardware-focused, building-block approach of VCE (a joint venture of EMC, Cisco, and VMware), simply known as converged infrastructure;
- The software defined approach of Nutanix, VMware, and others called hyper-converged infrastructure.
The most important difference between the two technologies is that in a converged infrastructure, each component in the building block is discrete and can be used for its intended purpose: the server can be separated and used as a server, just as the storage can be separated and used as functional storage. In a hyper-converged infrastructure, the technology is software defined, so the components are, in essence, fully integrated and cannot be broken out separately.
Hyper-converged infrastructure: main differentiators
Let’s say a company is implementing server or desktop virtualization. In a non-converged architecture, physical servers run a virtualization hypervisor, which then manages each of the virtual machines (VMs) created on that server. The data storage for those physical and virtual machines is provided by direct attached storage (DAS), network attached storage (NAS) or a storage area network (SAN).
In a hyper-converged architecture, the storage is attached directly to the physical servers. Flash storage generally is used for high-performance applications and for caching data from the attached disk-based storage.
The hyper-converged infrastructure has the storage controller function running as a service on each node in the cluster to improve scalability and resilience. Even VMware is getting into the act. The company’s new reference architecture, called EVO (previously known as Project Mystic or Marvin), is a hyper-converged offering designed to compete with companies such as Nutanix, SimpliVity, and NIMBOXX. The two systems, EVO:RAIL and EVO:RACK, were announced at VMworld 2014 in August. Previously, VMware was active only in the converged infrastructure market with the VCE partnership.
Using Nutanix as an example, the storage logic controller, which normally is part of SAN hardware, becomes a software service attached to each VM at the hypervisor level. The software defined storage takes all of the local storage across the cluster and configures it as a single storage pool. Data that needs to be kept local for the fastest response could be stored locally, while data that is used less frequently can be stored on one of the servers that might have spare capacity.
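The placement idea described above can be sketched in a few lines of code. This is a toy model, not Nutanix’s actual implementation; the class and method names are invented for illustration. It shows the core concept: every node’s local disks join one cluster-wide pool, hot data stays on the node that uses it, and cold data spills to whichever node has the most spare capacity.

```python
class Node:
    """One server in the cluster contributing its local disks to the pool."""
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    def free_gb(self):
        return self.capacity_gb - self.used_gb


class StoragePool:
    """All local storage across the cluster presented as a single pool."""
    def __init__(self, nodes):
        self.nodes = {n.name: n for n in nodes}
        self.placement = {}  # block id -> node name

    def total_free_gb(self):
        return sum(n.free_gb() for n in self.nodes.values())

    def write(self, block_id, size_gb, local_node, hot=True):
        # Hot data stays on the VM's own node when it fits;
        # cold data goes to the node with the most spare capacity.
        node = self.nodes[local_node]
        if not hot or node.free_gb() < size_gb:
            node = max(self.nodes.values(), key=Node.free_gb)
        node.used_gb += size_gb
        self.placement[block_id] = node.name
        return node.name


pool = StoragePool([Node("node-a", 100), Node("node-b", 500)])
print(pool.write("vm1-disk", 10, local_node="node-a"))        # hot data stays local
print(pool.write("cold-archive", 50, "node-a", hot=False))    # cold data spills to node-b
```

Real distributed storage fabrics layer replication, tiering between flash and disk, and failure handling on top of this basic placement decision.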
Hyper-converged infrastructure costs
As with traditional infrastructures, the cost of a hyper-converged infrastructure can vary dramatically depending on the underlying hypervisor. An infrastructure built on VMware’s vSphere or Microsoft’s Hyper-V can have fairly costly licensing built in. Nutanix, which supports Hyper-V, also supports the free, open source KVM, the default hypervisor in OpenStack cloud software. However, as with any open source application, “free” can be a relative term, since there are other costs involved in configuring the software for use in a given environment.
Because the storage controller is a software service, there is no need for expensive SAN or NAS hardware in the hyper-converged infrastructure, the company says. The hypervisor communicates with the Nutanix software in the same manner as it did with the SAN or NAS, so no reconfiguration of the storage is required, the company says. The Nutanix software also eliminates the need for the IT team to configure logical unit numbers (LUNs), volumes, or RAID groups, simplifying the storage management function.
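The vendors’ claim that no storage reconfiguration is needed rests on a simple design principle: the hypervisor talks to storage through the same interface regardless of what sits behind it. The sketch below is purely illustrative (the class names are invented, and real hypervisors speak protocols such as NFS or iSCSI rather than Python calls), but it captures why the hypervisor-side code does not change when a SAN is swapped for a software-defined back end.

```python
from abc import ABC, abstractmethod


class Datastore(ABC):
    """What the hypervisor sees: a uniform read/write interface."""
    @abstractmethod
    def read(self, path): ...

    @abstractmethod
    def write(self, path, data): ...


class SanBackedDatastore(Datastore):
    """Traditional back end: admin-provisioned LUNs on a SAN array."""
    def __init__(self):
        self.luns = {}

    def read(self, path):
        return self.luns.get(path, b"")

    def write(self, path, data):
        self.luns[path] = data


class SoftwareDefinedDatastore(Datastore):
    """Hyper-converged back end: a pooled store, no LUNs or RAID groups to configure."""
    def __init__(self):
        self.pool = {}

    def read(self, path):
        return self.pool.get(path, b"")

    def write(self, path, data):
        self.pool[path] = data


def hypervisor_io(ds: Datastore):
    # The hypervisor-side logic is identical for both back ends.
    ds.write("/vm1/disk0", b"boot-image")
    return ds.read("/vm1/disk0")


assert hypervisor_io(SanBackedDatastore()) == b"boot-image"
assert hypervisor_io(SoftwareDefinedDatastore()) == b"boot-image"
```

Because the interface is the contract, the storage administration work moves from provisioning LUNs on an array to installing and managing the software layer.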
Speaking at the HP Discover 2014 conference in Las Vegas earlier this year, Niel Miles, software defined data center solutions manager at Hewlett-Packard, described “software defined” as programmatic control of the corporate infrastructure as a company moves forward. He said this approach adds “business agility,” noting that it increases the company’s ability to address automation, orchestration, and control more quickly and effectively. Existing technology cannot keep up with these changes, requiring the additional software layer to respond more quickly than was possible in the past.
For those looking to reuse their existing hardware while taking advantage of a hyper-converged infrastructure, several companies offer approaches more similar to the converged infrastructure model of discrete server, storage, and network devices, but with software defined technology added to improve performance and capabilities.
One such company is Atlantis of Mountain View, CA, which offers a software defined storage system that can convert direct-attached storage (DAS) into a pooled array, increasing the number of VMs that can share the storage and effectively creating a hyper-converged infrastructure. The technological secret sauce is Atlantis USX, a software platform that sits between the VM and storage infrastructures, the company says.
In August, Sunnyvale, CA-based Maxta introduced the MaxDeploy Hyperconverged Reference Architecture built on Intel server boards and systems. MaxDeploy pre-validations include testing with server-side flash technology and magnetic disk drives to support a spectrum of cost/performance options, the company says. Maxta’s VM-centric offering simplifies IT management and reduces storage administration by enabling customers to manage VMs rather than storage.
Generally speaking, the investment in a converged infrastructure system will be made in conjunction with a greenfield project rather than a forklift upgrade, says Todd Pavone, executive VP of product development and strategy for VCE. Companies that consider a converged infrastructure will know already that they need to expand their computing environment so a pilot project with a converged infrastructure system will be cost-effective. Rolling out x86-based servers in a building-block chassis permits the company to test the new environment with new hardware that would have already been budgeted for expansion.
From a CapEx perspective, Pavone says, hardware costs are essentially neutral. The downstream savings come from lower support and maintenance costs.
New investment in a hyper-converged infrastructure differs because the hardware cannot be decoupled should the pilot program prove unsuccessful. Because the software is a key component of a hyper-converged infrastructure, initial entry costs could be higher, Pavone says.
But Duncan Epping of VMware’s EVO:RAIL (aka MARVIN) team suggests that upfront costs for converged systems need to be taken into account when considering upgrading a company’s infrastructure. For hyper-converged systems, he says, integration with existing infrastructure and the work of managing a different platform need to be included in the financial considerations as well.
While many of the vendors that provide components and systems in the converged market are established vendors (EMC, Cisco, NetApp, and Hewlett-Packard, for example), Epping says, “Some vendors in the hyper-converged space are relatively new; can you trust them with your mission-critical workloads?”
Not all of the companies with hyper-converged offerings are new, however. Among the established IT vendors with hyper-converged products are the aforementioned EMC, Dell, Nutanix, and Epping’s own VMware. VMware is something of a special case, though, as Epping explains: “EVO:RAIL is not a pure VMware offering, it is a partner program that enables customers to select a hyper-converged offering from their preferred vendor.”
Converged infrastructure: main differentiators
There are two approaches to building a converged infrastructure, explains Bharat Badrinath, senior vice president of solutions marketing at EMC. The first is using the building-block approach, such as that used in the VCE Vblock environment, where fully configured systems — including servers, storage, networking and virtualization software — are installed in a large chassis as a single building block. The infrastructure is expanded by adding additional building blocks.
While one of the main arguments in favor of a converged infrastructure is that it comes pre-configured and simply snaps into place, that is also one of the key arguments against the building-block approach. As Chris Ward noted in his Tom’s IT Pro article To Converge Infrastructure or not, That is the Question, because all the parts are pre-configured, users are locked into a predefined configuration. If the IT manager wants a configuration different from what the provider offers, they are essentially out of luck.
The same holds true for the components themselves. Because each component is selected and configured by the vendor, the user does not have the option of choosing a router or storage array customized for them. The building-block approach also ties the user to the vendor’s patching timetable rather than their own: patches must be applied to the pre-configured systems in order to maintain support.
It is possible to build a converged infrastructure without using the building block approach. The second approach is using a reference architecture, such as the one dubbed VSPEX by EMC, which allows the company to use existing hardware, such as a conforming router, storage array or server, to build the equivalent of a pre-configured Vblock system.
Converged infrastructure costs
As noted, each building block consists of separate hardware that is prepackaged and tested to work almost as a plug-and-play module. Unlike in a hyper-converged infrastructure, the separate components of the converged infrastructure can be decoupled from the rest and used in a standalone environment, Badrinath says. The simplicity of adding a fully configured and tested infrastructure block makes it easier to expand and maintain the network without spending a lot of time reconfiguring the various components, he says. The blocks effectively snap together like the colorful Lego-brand building blocks found in a child’s toy box.
Pricing for converged infrastructure building blocks will vary by vendor, of course, but Unisys provides the following comparison: the base price for a Unisys Forward! system starts at $89,000. By contrast, a customer buying the same equipment and software a la carte would pay in excess of $100,000.
Companies that plan to migrate to the VSPEX reference design and use their existing server, storage, and network hardware can work with a reseller or take a do-it-yourself approach to configure their existing hardware to meet the VSPEX design, he says. Such an approach would permit a company with a more modern network to migrate to the converged infrastructure at a lower cost. However, he says, most companies tend to deploy converged infrastructures in pilot projects, such as a migration from Microsoft Exchange 2010 to Exchange 2013, or in new data centers to reduce the hardware expense.
One advantage of the converged infrastructure is lower support and maintenance costs, he says. Data centers with hardware from a variety of vendors can run into finger-pointing problems when hardware issues arise. A Vblock is supported by a single vendor that takes responsibility for all of the internal components, regardless of the manufacturer.
“Technology is the easy part,” Badrinath says. “The people part is much more tricky.” Finding qualified engineers that can work on the wide variety of hardware from various vendors found in many data centers can be a real challenge, he notes.