Hyperconvergence (HC) is a booming trend in data storage. According to research by ComputerWeekly, 22% of IT professionals in the UK planned to deploy HC in 2018 – a sharp increase from 2017, when only 9% of respondents expressed interest in hyper-converged data storage. As a simplified infrastructure, HC works well for small and large firms alike, providing greater agility, lower costs and better scalability. How and why are hyper-converged systems disrupting the IT sector? Read on.
How does hyperconvergence differ from traditional data centre design?
As shown in the diagram above, there are a few differences between traditional and hyperconverged data storage architecture.
The most obvious is the complete elimination of the SAN: the hypervisor, server and storage are merged into a single entity (a node). Scrapping the SAN allows for seamless virtualisation. As David Friedlander of Panzura points out, ‘Traditional SANs were bad for virtualization. The storage industry is moving away from monolithic storage arrays.’ Danny Bradbury notes: ‘Because the hypervisor manages all of the hardware, it makes it easier to scale a virtualized infrastructure quickly without worrying about the complexity of the SAN or the latency that it introduces.’
Moreover, hyper-converged infrastructure relies on locally attached disk drives. Alex Barrett notes that ‘it eliminates the cost and latency overhead associated with accessing storage over a network.’ This makes connections faster and more efficient than a SAN.
Furthermore, as noted by Mindsight, ‘hyperconvergence is sometimes described as the software defined data center (SDDC), because a hyperconverged infrastructure is managed centrally by a single piece of software.’ All the operations and admin tasks are managed with the same software.
Reasons for the increased demand for HCI
These are just some of the tangible benefits that hyperconvergence provides:
- Greater agility: Mindsight stresses that in HCIs ‘all workloads fall under the same administrative umbrella.’ This, in turn, means that workloads can be migrated more quickly than in a traditional system. Liviu Arsede emphasised that ‘Having everything in a single environment that can support VMs and policy management across the entire infrastructure makes the SDDC a highly agile environment.’
- Better scalability: as the diagrams show, because HCIs are built from nodes, it is much easier to scale up the system's capacity – all you need to do is add more nodes. It is also worth emphasising that HCI providers prevent asymmetrical growth by scaling storage and compute together.
- Improved data protection: as indicated by Hyperconverged, ‘Hyperconvergence software is designed to anticipate and handle the fact that hardware will eventually fail.’ Snapshotting and data deduplication are just some of the features built into HC systems by default to protect data. Traditional data storage, by contrast, makes data protection logistically difficult and expensive.
- Lower costs: HCI saves IT departments money by utilising less equipment and, therefore, cutting the maintenance and support costs.
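The scale-out model described above can be illustrated with a minimal sketch. This is not any vendor's API – the `Node` and `Cluster` names and the per-node figures are hypothetical – but it shows the core idea: each node bundles compute and locally attached storage, so adding a node grows both symmetrically, with no SAN to reconfigure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    """A hypothetical HCI node: hypervisor, compute and local storage in one box."""
    cpu_cores: int = 32    # assumed per-node compute
    storage_tb: int = 8    # assumed locally attached storage

class Cluster:
    """A cluster scales out simply by appending identical nodes."""
    def __init__(self) -> None:
        self.nodes: list[Node] = []

    def add_node(self, node: Node = Node()) -> None:
        # Scaling up capacity = adding a node; compute and storage
        # grow in lockstep, preventing asymmetrical growth.
        self.nodes.append(node)

    @property
    def cpu_cores(self) -> int:
        return sum(n.cpu_cores for n in self.nodes)

    @property
    def storage_tb(self) -> int:
        return sum(n.storage_tb for n in self.nodes)

cluster = Cluster()
for _ in range(3):
    cluster.add_node()
print(cluster.cpu_cores, cluster.storage_tb)  # three nodes: 96 cores, 24 TB
```

Contrast this with a traditional design, where growing storage means expanding the SAN independently of the servers – exactly the administrative overhead HCI removes.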
As a result, HCI is one of the main disruptors in the IT space. Organisations are moving away from viewing IT in terms of separate servers and realising the benefits of shared and merged resources.
The demand for simpler data storage systems has motivated IT giants to step into the world of hyperconvergence. Cisco, for instance, acquired Springpath in 2017, following the massive success of HyperFlex – the industry's first fully integrated hyperconverged infrastructure system, developed jointly by Cisco and Springpath in 2016.
Top Hyperconvergence Providers:
- HPE SimpliVity
- Atlantis Computing