What is Hyper-convergence?

Hyperconvergence – Buzz or bust? It seems like a new buzzword or acronym is developed in the IT industry daily. As a small business, it can be difficult to cut through the noise and identify what technology is the right fit for you.

Most recently, the term “hyper-convergence,” an approach borrowed from hyper-scale computing, has made its debut. This technology offers some very real and very exciting benefits.

This post aims to give smaller organizations a data-driven approach to deciding whether this technology is the right fit for them, and to explain how it differs from existing virtualized infrastructure.

We’ve been developing virtualized and hyperconverged data center solutions for many years! One of the first use cases we presented for hyperconverged infrastructure was in 2014, in one of our VDI seminars.

Hyper-Convergence Explained

Largely gone are the days of pairing each application with its own physical server.

Today, most workloads are delivered by virtual servers that reside within a virtual data center. These virtual data centers are, by and large, provisioned as a private cloud, meaning that each organization owns (or directly controls) the hardware that provides the compute, network, and storage resources it needs.

This architecture is commonly referred to as a 3-tier architecture: compute, storage, and network.

The explosion of virtualized server workloads — such as databases, network services, collaboration applications, and unified communications — has given rise to the need for a more intelligent approach to infrastructure management. With hundreds of virtualized applications running in a typical datacenter, IT infrastructure requires alignment with the virtualization stack.

Traditional 3-tier architectures result in provisioning inefficiencies, silos between IT and business units, and an inability to scale globally as the business continues to grow.

Hyperconvergence exists to solve many of these inefficiencies by leveraging technology originally developed for web-scale companies such as Amazon and Google. Hyperconverged systems combine compute and storage resources into an integrated, 100% software-defined solution. This is typically deployed in an appliance model: as an organization needs additional compute and/or storage resources, the virtual data center can be scaled linearly by simply adding another appliance.
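To make the linear-scaling point concrete, here is a minimal sketch in plain Python. The per-appliance figures are hypothetical assumptions for illustration only, not any vendor’s actual specifications:

```python
# Minimal sketch of appliance-based, linear scale-out.
# Per-node numbers below are hypothetical assumptions, not real product specs.
from dataclasses import dataclass

@dataclass
class Node:
    cpu_cores: int = 32        # assumed cores per appliance
    ram_gb: int = 512          # assumed RAM per appliance
    usable_tb: float = 20.0    # assumed usable storage per appliance

def cluster_capacity(node_count: int, node: Node = Node()) -> dict:
    """Capacity grows linearly: each added appliance contributes the same resources."""
    return {
        "cpu_cores": node_count * node.cpu_cores,
        "ram_gb": node_count * node.ram_gb,
        "usable_tb": node_count * node.usable_tb,
    }

# Example: growing a 3-node starter cluster to 4 nodes.
print(cluster_capacity(3))  # {'cpu_cores': 96, 'ram_gb': 1536, 'usable_tb': 60.0}
print(cluster_capacity(4))  # {'cpu_cores': 128, 'ram_gb': 2048, 'usable_tb': 80.0}
```

The point is simply that each appliance adds a predictable slice of compute, memory, and storage, which is what makes capacity planning and growth so straightforward.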

These appliances combine enterprise-class, software-defined features such as intelligent data tiering with high-performance flash storage and dense hard disk drives, providing high performance and low latency across a large storage capacity. Such features are typically universal across every appliance and every data center, resulting in consistent feature parity.

In smaller organizations, this shows up as a uniform set of features available at every level and function of the virtual data center. One example is having the same feature set across production, development, and disaster recovery virtual data centers. In addition to providing feature parity across the entire enterprise, these appliances typically allow all functions in all data centers to be managed from a single pane of glass. For distributed organizations with multiple data centers, or even smaller organizations with IT resources spread across offices, this can be a huge time saver!

As the “software-defined” name implies, most hyperconverged platforms are self-healing and can be upgraded through a process that typically involves no hardware changes. These capabilities eliminate much of the operational complexity typical of server virtualization on 3-tier infrastructure, along with the siloed feature management and upgrade lifecycles of those systems.

With such ease of management, parity of features, and ultra-simple upgradeability, why would an organization not move to a hyperconverged platform? The answer lies in the workload.

Structure Your Business for the Future — Contact Us: https://www.corp-infotech.com/contact-us/

Workload analysis and establishing the “right fit”

Hyperconvergence offers scale-out simplicity and, for many workloads, can reduce the Total Cost of Ownership (TCO) of a virtual data center. Several workloads lend themselves particularly well to a hyperconverged infrastructure.

These include:

- Virtual desktop infrastructure (VDI) – a large number of relatively small virtual machines that require scale-out capabilities (possibly even automated scale-out) but a relatively small storage footprint.
- Web-server or application-server farms – compute-dense servers that require advanced storage features and high uptime.
- Highly dispersed applications – applications that utilize multiple tiers of data processing, analytics, and/or database storage.

This list is by no means exhaustive, but it is representative of highly successful hyperconverged infrastructure deployments.

The inverse of this argument applies to workloads that are specifically not well suited to a hyperconverged environment. These include:

- Large-volume file storage – DFS, AFS, or cell-based storage arrays
- Highly transactional, large relational database systems (RDBMS)
- Highly compute-intensive, fault-tolerant, thread-context-aware applications

These workloads often demonstrate poor performance or are disproportionately expensive to deploy on a hyperconverged virtual data center.
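As a rough illustration of this fit assessment, the sketch below scores a workload against the traits described in the two lists above. The attribute names, weights, and example workloads are assumptions made purely for illustration; this is not a formal sizing tool:

```python
# Illustrative heuristic only: score a workload's fit for hyperconverged infrastructure
# based on the traits discussed above. Attributes and weights are assumptions.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    scale_out_friendly: bool              # many small VMs or farms that grow by adding nodes
    large_sequential_storage: bool        # bulk file shares, archives
    highly_transactional_db: bool         # large, latency-sensitive RDBMS
    needs_advanced_storage_features: bool # snapshots, tiering, replication

def hci_fit_score(w: Workload) -> int:
    score = 0
    if w.scale_out_friendly:
        score += 2
    if w.needs_advanced_storage_features:
        score += 1
    if w.large_sequential_storage:
        score -= 2   # bulk file storage is usually cheaper elsewhere
    if w.highly_transactional_db:
        score -= 2   # large transactional RDBMS often fit poorly
    return score

vdi = Workload("VDI", True, False, False, True)
file_archive = Workload("File archive", False, True, False, False)
print(vdi.name, hci_fit_score(vdi))                    # VDI 3  -> good candidate
print(file_archive.name, hci_fit_score(file_archive))  # File archive -2 -> poor candidate
```

A real assessment would of course look at actual IOPS, capacity, and licensing numbers, but the exercise is the same: weigh the workload’s characteristics before committing to the platform.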

The takeaway

Properly architecting a hyperconverged virtual data center requires a comprehensive understanding of the workloads that will ultimately run in the environment. Integrating existing storage, compute, and network resources into the final solution can drive down costs and improve adoption, given the right mix of available (existing) technologies. Hyperconvergence is not a one-size-fits-all technology.

It’s easy to “rip and replace” existing infrastructure, but doing so is often not the most cost-effective or optimal way to deploy a new hyperconverged infrastructure. Rather, integrating and distributing workloads across the existing underlying virtual infrastructure often produces the best hyperconverged deployment. This platform can then deliver the maximum return, in both investment and native management efficiencies, through optimal placement of workloads. The benefits and cost advantages of a hyperconverged data center can be realized quickly when this method is employed. Long-term realization of these benefits depends largely on planning and workload deployment modeling during the proof-of-concept phase of the project.

Planning really is that important! Don’t go it alone! Involve a partner that has navigated such a project and can take the time to understand the intricacies of your specific business use case and environment.