Hyperconverged Infrastructures with Nutanix: Components and How It Works

Nutanix is the leader in hyperconverged infrastructure, according to Gartner. HCI combines compute, storage, and the storage network on the nodes of a cluster: computing power and capacity scale together as one unit, complemented by centralized management. Nutanix can be evaluated free of charge with the Community Edition (CE).

Hyperconvergence offers a number of advantages, including predictable capacity planning, straightforward scalability, low latency, and simplified management (a dedicated storage administrator is no longer strictly necessary). x86 virtualization serves as the foundation of such infrastructures.

Pooling of resources

Nutanix server nodes accordingly consist of established x86 hardware, a hypervisor of your choice, local all-flash or hybrid storage, and the storage network. They can be combined into a highly available cluster with a software-defined storage pool.

The Nutanix software runs in the so-called Controller VM (CVM) on each node and coordinates the distributed operations across the entire cluster.

My on-premises proof of concept, a one-node cluster configuration, is based on the Community Edition (CE) and a nested virtual environment. Production data center clusters generally consist of at least three nodes (the CE is intended for testing only).

Multiple hypervisors to choose from

The hypervisors from VMware (ESXi), Microsoft (Hyper-V), or Citrix (XenServer) can run on the nodes of a production Nutanix cluster. In addition, there is AHV (Acropolis Hypervisor), a KVM hardened and adapted by Nutanix that is included with the platform.

The hypervisor not only runs the VMs for the workloads, but also the Controller VM. This is the core component of Nutanix and is required on every host: it provides the central management environment Prism and the Acropolis data plane.

With the help of PCI passthrough, this VM has direct access to the SAS HBA of its node, and thus to the local SSDs or HDDs. The CVMs communicate with one another over 1, 10, or faster GbE NICs and together form the distributed storage fabric.

Distributed Storage Fabric (DSF)

DSF is part of Acropolis and provides the distributed data plane mentioned above for the hypervisor. The existing physical storage, i.e. the local SSDs and/or HDDs of the nodes, is combined into a pool. This happens automatically when the cluster is created, and all devices across the cluster are included.

This pool is made available to the virtualization layer via iSCSI, NFS, or SMB shares. As a result, proprietary storage appliances and dedicated SANs are no longer required. As a reminder, Microsoft's Storage Spaces Direct works on a similar principle.

The storage containers based on the pool are then created with a replication factor (RF) of 2 or 3, meaning that 2 or 3 copies of each data block are written. RF2 tolerates the failure of one node or drive; RF3 tolerates two simultaneous failures.
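The capacity arithmetic behind this is simple: every block is written RF times, so the usable share of the raw pool is 1/RF. A minimal sketch in Python, with a hypothetical pool layout (the node and drive counts are example values, not Nutanix defaults):

    # Usable capacity under a given replication factor (RF):
    # every data block is written RF times, so usable = raw / RF.
    def usable_capacity_tb(raw_tb: float, rf: int) -> float:
        return raw_tb / rf

    # Hypothetical pool: 3 nodes x 4 SSDs x 1.92 TB each (example values)
    raw_pool_tb = 3 * 4 * 1.92
    print(f"Raw pool: {raw_pool_tb:.2f} TB")
    print(f"RF2 usable: {usable_capacity_tb(raw_pool_tb, 2):.2f} TB")  # tolerates 1 failure
    print(f"RF3 usable: {usable_capacity_tb(raw_pool_tb, 3):.2f} TB")  # tolerates 2 failures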

In addition to these elementary features, the DSF layer also offers several advanced features. These include:

  • Proactive integrity checks for data consistency
  • Early detection and repair of corrupt data
  • Availability domains for better protection against hardware failures
  • Increased performance through tiering on SSD for hot data or HDD for cold data
  • Data locality, which keeps a VM's data on the node where the VM runs
  • Automatic disk balancing
  • VM flash mode to increase IOPS through data storage in the SSD tier
  • Deduplication and inline or post-process compression
  • Erasure coding (similar to parity) with Nutanix EC-X (see the capacity sketch after this list)
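To make the erasure coding benefit concrete, here is a small comparison of the storage overhead of plain replication versus a parity stripe. The stripe sizes are illustrative only; the actual EC-X stripe layout depends on the cluster size:

    # Storage overhead: plain replication vs. erasure coding (illustrative).
    def overhead_rf(rf: int) -> float:
        # RF n keeps n full copies of every block.
        return float(rf)

    def overhead_ec(data_blocks: int, parity_blocks: int) -> float:
        # An EC stripe stores the data blocks plus the parity blocks.
        return (data_blocks + parity_blocks) / data_blocks

    print(f"RF2:      {overhead_rf(2):.2f}x raw per usable TB")     # 2.00x
    print(f"EC-X 4+1: {overhead_ec(4, 1):.2f}x raw per usable TB")  # 1.25x, survives 1 failure
    print(f"EC-X 4+2: {overhead_ec(4, 2):.2f}x raw per usable TB")  # 1.50x, survives 2 failures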

x86 hardware for Nutanix HCI

When it comes to hardware selection, Nutanix is flexible: not only dedicated Nutanix appliances are permitted, but OEM hardware as well. A distinction is made between pre-installed appliances and servers for self-installation.

Pre-installed appliances come from Nutanix itself with the NX series (1000, 3000, 6000, or 8000), as well as from DELL EMC (XC), Lenovo (Converged HX), or IBM (Hyperconverged Systems). For the self-installation variant, Hewlett-Packard ProLiant servers, Cisco UCS, or DELL PowerEdge are options.

As an example of a pre-installed appliance and its storage design, I selected a Nutanix NX-3060-G6. It can be configured with hybrid storage consisting of SSDs and HDDs, all-flash SSDs, or all-flash SSDs with NVMe. RAM ranges from 192 GB to 768 GB per node. Network connectivity can be provided, for example, by 2 dual-port 10GBase-T NICs or 1 dual-port 25 GbE SFP+ NIC.

Cluster management and monitoring with Nutanix Prism

Last but not least, the management service Prism is for me an outstanding part of the platform. It allows GUI-based administration and monitoring (here: "Prism Element") of the VMs, the storage, the hardware, and the health status, among other things. You log in to the CVM via the browser, which takes you straight to the home dashboard.

The drop-down in the main navigation bar gives access to the aforementioned views for Health, VMs, Storage, Network, Hardware, and so on. The home dashboard also shows latency metrics, and alarms can be drilled down from here in granular form.

If you connect to the CVM with PuTTY and log in with the default user nutanix and the password nutanix/4u, the command cluster status reports the running services, including Prism. This service runs in every CVM of the cluster, and a Prism leader handles all HTTP requests. If the CVM acting as leader fails, a new leader is elected: high availability out of the box!
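The same check can be scripted instead of typed interactively. A minimal sketch using the paramiko SSH library, assuming the lab credentials mentioned above and a placeholder CVM address (host key checking and error handling are omitted for brevity):

    import paramiko  # third-party SSH library: pip install paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab use only
    client.connect("192.0.2.10", username="nutanix", password="nutanix/4u")  # placeholder IP

    # "cluster status" reports the state of the services on every CVM, including Prism.
    stdin, stdout, stderr = client.exec_command("cluster status")
    print(stdout.read().decode())
    client.close()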

Upgrades of the Nutanix software (AOS) can be installed via Prism, as can the hypervisor and system firmware (disk firmware); the orchestration takes place automatically across all nodes. In addition to the HTML5 interface, the infrastructure can be administered via the nCLI, REST APIs, or PowerShell.
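As a closing example, a hedged sketch of such a REST call against Prism in Python. The endpoint path follows the Prism v2.0 API, but the credentials and the returned fields are assumptions that should be verified against your AOS version; the address is a placeholder:

    import requests  # third-party HTTP library: pip install requests

    PRISM = "https://192.0.2.10:9440"  # cluster/CVM address (placeholder)

    # Query basic cluster details via the Prism v2.0 REST API.
    resp = requests.get(
        f"{PRISM}/PrismGateway/services/rest/v2.0/cluster",
        auth=("admin", "your-password"),  # Prism credentials (placeholder)
        verify=False,   # CE labs typically use self-signed certificates
        timeout=10,
    )
    resp.raise_for_status()
    cluster = resp.json()
    print(cluster.get("name"), cluster.get("version"))  # cluster name and AOS version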