5 myths about hyperconverged infrastructure debunked

Tue 15 Oct 2019 | Alan Conboy

Alan Conboy busts the common misconceptions surrounding hyperconverged infrastructure

Hyperconverged infrastructure (HCI) has gone mainstream, yet myths persist that cause misconception and confusion, even among those who already have HCI solutions deployed. Here are five of the most prevalent myths, debunked.

Myth #1 – HCI is more expensive than building your own virtualisation infrastructure

First of all, the acquisition price of an HCI solution varies by vendor, and often by the brand of hypervisor used in the solution. Secondly, while purchasing the individual components needed to build a virtualisation infrastructure may often be less expensive than purchasing an HCI solution, acquisition is only part of the cost. The true and total cost of infrastructure goes far beyond the initial purchase.

The most compelling virtue of HCI solutions is that they make virtualisation easier to deploy, manage, and grow in the future. That simplicity and ease of use translate into a dramatically lower total cost of ownership over time. From deploying in hours rather than days, to scaling out seamlessly without downtime, HCI eliminates many of the major headaches that come with traditional DIY virtualisation solutions.

HCI uses automation and machine intelligence to handle many of the daily tasks typically associated with managing and maintaining virtualisation infrastructure. This ease of use and reduction in management time frees up staff to work on other tasks and projects. The savings can also include eliminating hypervisor software licensing, depending on the hypervisor deployed or supported by the HCI vendor. The savings vary by organisation, but the numbers nearly always bear out that good HCI solutions are less costly over a three to five year period, often sooner. Total cost of ownership could be discussed in far more detail, but the next myth awaits.
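As a rough illustration of the total-cost argument, TCO can be sketched as acquisition cost plus ongoing operational cost over the life of the solution. Every figure below is a hypothetical assumption for the sake of the arithmetic, not vendor pricing:

```python
# Illustrative only: comparing total cost of ownership (TCO) of a DIY
# virtualisation stack vs. an HCI appliance. All figures are
# hypothetical assumptions, not real quotes.

def tco(acquisition: float, annual_ops: float, years: int) -> float:
    """TCO = up-front acquisition cost + annual operational cost * years."""
    return acquisition + annual_ops * years

# DIY: cheaper to buy, but more admin time and separate hypervisor licences.
diy = tco(acquisition=80_000, annual_ops=45_000, years=5)   # 305,000
# HCI: higher sticker price, lower ongoing management and licensing cost.
hci = tco(acquisition=110_000, annual_ops=25_000, years=5)  # 235,000
```

Under these assumed numbers, the DIY stack wins on day one but loses over five years, which is the shape of the argument the myth misses.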

Myth #2 – HCI leaves 30 percent fewer resources available right out of the box

Whether this one is a myth depends on the vendor's solution. A number of factors can affect the resources available in an HCI solution, and the biggest is probably a VSA-based storage architecture. Virtual storage appliances (VSAs) emulate SAN or NAS storage in order to support traditional third-party hypervisors that were designed to consume SAN and NAS storage. These VSAs can be very resource-intensive, and one is required on each node of an HCI cluster.

VSAs primarily consume RAM, with many consuming 24-32GB or more per node, as well as multiple CPU cores per node. That can be a significant percentage of the server or appliance RAM, on top of the RAM consumed by the hypervisor itself. Many VSA-based HCI solutions also require SSD storage for caching because of the same inefficient data pathing that drives their RAM consumption.

Another resource guzzler is three-factor replication, used in very large clusters to reduce the chance that two simultaneous drive failures corrupt data. Three-factor replication means each block is written to three separate HCI nodes, so three copies of the data, plus any parity overhead, consume a large amount of disk space. These solutions try to recover some of this lost usable storage through deduplication, freeing up space to fit more blocks.
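The capacity cost of replication is simple arithmetic: usable space is raw space divided by the number of copies. A minimal sketch, with hypothetical node counts and disk sizes:

```python
# Illustrative arithmetic: raw vs. usable capacity under N-factor
# replication. Node count and per-node capacity are hypothetical.

def usable_capacity_tb(nodes: int, raw_per_node_tb: float, copies: int = 3) -> float:
    """Each block is stored on `copies` separate nodes, so usable
    capacity is total raw capacity divided by the replication factor."""
    return nodes * raw_per_node_tb / copies

# A hypothetical 8-node cluster with 10 TB of raw disk per node:
raw = 8 * 10                          # 80 TB raw
usable = usable_capacity_tb(8, 10)    # ~26.7 TB usable, before deduplication
```

Two-factor replication on the same cluster would leave 40 TB usable, which is why the jump to three copies is such a noticeable cost.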

Not all HCI solutions use VSAs or three-factor replication, however. Some use truly integrated hypervisors that allow more direct storage paths without VSAs. The resources available on these HCI solutions are much higher, as much as you would expect from a highly available virtualisation infrastructure. As for replication factors of three or more, they should only be required for very large clusters, and they trade a significant loss of usable capacity and efficiency for the extra reliability.

Myth #3 – HCI is just software-defined storage (SDS) with a hypervisor installed

Since hyperconvergence hit the mainstream, it's true that SDS vendors have been coming out of the woodwork with questionable HCI solutions that fit this description. Real HCI solutions may include SDS, but they are much, much more.

The hallmark of HCI solutions is that they simplify virtualisation. SDS solutions may help to simplify storage to some extent, but often they aren't much more than an emulated SAN/NAS solution. As discussed in the previous myth, many SDS solutions use VSAs to emulate SAN for the hypervisors they support. This ultimately makes SDS solutions very similar to a SAN in overall complexity, defeating the goal of making virtualisation simpler.

Real HCI solutions automate many of the configuration and management tasks that have made traditional DIY virtualisation so complex and difficult to manage. That is why many HCI solutions are delivered as purpose-built appliances, where knowledge of the hardware supports even greater automation. Automation such as storage pooling, rolling updates, and self-healing goes far beyond what the simpler SDS solutions offer.

The best HCI solutions also directly integrate hypervisors rather than relying on third-party ones. This level of integration allows for more efficient data pathing and resource utilisation. SDS solutions were supporting third-party hypervisors long before the term HCI was even coined, and that alone simply doesn't make the grade as HCI.

Myth #4 – HCI can’t cover the spectrum from enterprise to edge computing

Many HCI vendors went straight at enterprise computing out of the gate. The enterprise market is definitely the one in which to make a lot of noise and be noticed, for better or for worse. However, the rise of edge computing has put a greater spotlight on HCI as a vehicle for edge infrastructure. Some, but not all, HCI vendors have the right architecture to answer the call of edge computing.

As mentioned earlier, VSA-based HCI solutions can consume large amounts of resources, making them nearly impossible to use on the smaller form-factor appliances needed for edge computing use cases. With edge computing, cost is key, and requiring resource-rich appliances just to run the storage layer and hypervisor increases the cost of the solution at every edge site.

Imagine trying to install HCI on appliances with a small resource footprint, say 64GB of RAM or less, using a VSA-based solution that consumes half of that RAM on each node. This is simply not cost-effective. HCI solutions with hypervisor-embedded storage, rather than VSAs, use fewer resources and can install and run efficiently on smaller appliances, making edge computing a cost-effective reality.
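The edge argument is again simple arithmetic: subtract the infrastructure's own footprint from the appliance's RAM and see what is left for workloads. A minimal sketch, where the 32GB VSA figure comes from the scenario above and the hypervisor figure is an illustrative assumption:

```python
# Rough sketch: RAM left for actual workloads on a small edge appliance
# after the storage layer and hypervisor take their share. The VSA and
# hypervisor figures are illustrative assumptions, not vendor specs.

def workload_ram_gb(total_gb: float, vsa_gb: float, hypervisor_gb: float) -> float:
    """RAM remaining for VMs once infrastructure overhead is subtracted."""
    return max(total_gb - vsa_gb - hypervisor_gb, 0.0)

# 64GB appliance, a VSA eating half the RAM, ~8GB assumed for the hypervisor:
left_vsa = workload_ram_gb(64, 32, 8)   # 24GB left for workloads
# Same appliance with hypervisor-embedded storage (no VSA):
left_embedded = workload_ram_gb(64, 0, 8)   # 56GB left for workloads
```

On these assumed numbers the VSA-based design leaves barely a third of the appliance for the workloads it exists to run, which is the cost-effectiveness gap the paragraph describes.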

Myth #5 – A single-vendor solution like HCI is a bad idea

As the old adage goes, 'don't put all your eggs in one basket': some don't like the idea of having their entire infrastructure stack come from the same vendor. They might want to diversify their infrastructure portfolio among many vendors, presumably to hedge against a single vendor failing to live up to its promises. While managing risk is important in running any organisation, perhaps these business leaders have not fully thought through the risk versus the reward.

One of the reasons HCI came to be was to overcome a number of challenges facing traditional virtualisation infrastructure, challenges largely caused by combining multiple vendors' solutions into a single stack. The most egregious of these, or at least the one felt most personally by IT pros, is the finger-pointing between vendors when a customer calls for support. Vendors may spend days or longer debating who owns the problem while you, the customer, are left without a resolution.

Another huge benefit of a single vendor owning the whole stack is the increased level of integration and automation that can be achieved. This is especially clear in HCI solutions that use third-party hypervisors: when system updates need to be performed, the hypervisor must be updated separately from the rest of the system. Handling updates separately is never ideal, because any one vendor's update can cause issues with another vendor's components. That is why system updates across a multi-vendor stack have historically been arduous tasks, usually performed over long nights and weekends.

Despite all of these myths, the truth is that a properly integrated HCI solution frees IT administrators to focus on apps and workloads, rather than leaving them bogged down in infrastructure management all day.

Experts featured:

Alan Conboy

Office of the CTO
Scale Computing
