Deploying a Cisco HyperFlex NVMe Cluster into UCS Manager with an Existing HyperFlex Cluster
Summary
As NVMe solutions continue to gain adoption by our customers, one scenario is inevitable for our Cisco HCI customers: scaling up a production HCI environment by adding a new Cisco HyperFlex NVMe cluster.
This ATC Insight covers our testing of the deployment of a second, all-NVMe Cisco HyperFlex cluster into a single UCS domain that already contained an active Cisco HyperFlex All-Flash cluster. Our goal was to determine the complexities encountered during installation.
Note: Each HyperFlex cluster was installed to separate vCenter instances for segmentation of virtual resources.
ATC Insight
We successfully deployed the HyperFlex NVMe cluster to the same UCS domain, but Cisco makes several recommendations that should be followed. They are listed below:
- Each HX cluster will be created in a unique sub-org as part of installation. Do not modify this hierarchy as it ensures unique policies are created per cluster.
- Each cluster should use a unique storage data VLAN to keep all storage traffic isolated. Reuse of this VLAN across multiple clusters is highly discouraged.
- When reusing VLANs (e.g. management and guest traffic VLANs), create a new, unique VLAN name for each VLAN even if it already exists in UCS Manager. This ensures that no disruption occurs to other clusters and servers in that domain.
Additionally, we found it very important to use descriptive and consistent VLAN names when deploying a cluster into a UCS domain that already contains one or more clusters.
Why? Below are the default VLAN names:
- hx-inband-mgmt
- hx-storage-data
- hx-vmotion
- vm-network
Using the default VLAN names works when there is a single HX deployment in the UCS domain. However, when multiple HyperFlex clusters are deployed, it can be confusing to know which cluster is using the default names. Ideally, it would be best to customize the VLAN names for each cluster installation.
A tactical example of why you would customize VLAN names for each cluster installation is the need to redeploy HyperFlex. In that scenario, the VLANs have to be deleted manually, and descriptive names make it much easier to identify which VLANs belong to which HyperFlex cluster; without them, it is very difficult to be sure you are deleting the right VLANs. (A sketch of this kind of cleanup follows the list of names below.)
In this instance, we chose to use the following names for the NVMe HyperFlex installation to make identification easy.
- hx-nvme-inband-mgmt
- hx-nvme-storage-data
- hx-nvme-vmotion
- vm-nvme-network
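As a rough illustration of how descriptive names pay off, the sketch below uses the Cisco UCS Manager Python SDK (ucsmsdk) to list the VLANs defined in the LAN cloud and flag the ones carrying the cluster-specific prefixes chosen above. The UCSM address, credentials, and prefixes are placeholders for our lab, and the actual removal call is left commented out; always confirm a VLAN is unused before deleting it.

```python
# Sketch: audit (and optionally clean up) cluster-specific VLANs in UCS Manager.
# Assumes the ucsmsdk package; the UCSM address, credentials, and name prefixes
# below are placeholders for this lab.
from ucsmsdk.ucshandle import UcsHandle

UCSM_HOST = "ucsm.example.local"            # placeholder
PREFIXES = ("hx-nvme-", "vm-nvme-")         # cluster-specific VLAN name prefixes

handle = UcsHandle(UCSM_HOST, "admin", "password")
handle.login()
try:
    # fabricVlan objects are the global VLANs defined under the LAN cloud.
    vlans = handle.query_classid("fabricVlan")
    for vlan in vlans:
        print(vlan.name, vlan.id)

    # At redeploy time, descriptive names make the cleanup unambiguous.
    for vlan in vlans:
        if vlan.name.startswith(PREFIXES):
            print("candidate for removal:", vlan.dn)
            # handle.remove_mo(vlan)   # uncomment only after verifying the VLAN is unused
    # handle.commit()
finally:
    handle.logout()
```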
Expectations
Based on the high-level design and Cisco's recommendations, we expected no issues installing a HyperFlex cluster into a UCS domain that already contained an active HyperFlex cluster.
Technologies Under Test
Cisco HyperFlex 4.0(1b)
HyperFlex is the hyper-converged, software-defined storage (SDS) product developed by Cisco. It takes advantage of direct-attached storage devices connected to Cisco HX Series servers.
Fabric Interconnects:
- Cisco UCS 6332
- UCS Manager 4.0(4d)
Existing HyperFlex cluster
- 4 Node All-Flash HXAF220C-M5SX
New HyperFlex cluster
- 4 Node NVMe HXAF220C-M5SN
Documentation
Cisco HyperFlex Step Through Process
Prerequisites and Logical Design Information
- Each server has a UCS VIC 1387 connected to both Fabric Interconnects with 40Gb Twinax.
- Mgmt, vMotion, and Storage will be segmented into separate VLANs that are not used across HyperFlex clusters. Out-of-band hardware management will use the same VLAN across clusters.
- VLAN and IP addressing information:
  - Mgmt (VLAN 788)
    - xxx.xxx.37.6 - 9: ESXi (vmkernel for mgmt)
    - xxx.xxx.37.10 - 13: HX Storage Controller (SCVM)
    - xxx.xxx.37.14: HX Cluster
  - vMotion (VLAN 789)
    - xxx.xxx.37.38 - 41
  - Storage (VLAN 790)
    - xxx.xxx.37.70 - 73: ESXi (vmkernel for NFS storage)
    - xxx.xxx.37.74 - 77: HX Storage Controller (SCVM)
    - xxx.xxx.37.78: HX Cluster (NFS storage target)
  - Out-of-band Mgmt (VLAN 2351)
    - xxx.xxx.235.77 - 80: CIMC
- The management traffic network handles both hypervisor and storage cluster management traffic. The xxx.xxx.37.14 HX Cluster address hosts the GUI used to interact with HyperFlex; this IP address automatically moves between the storage controller VMs, depending on which one is currently the cluster leader.
- The storage network handles the hypervisor and storage data traffic. All of the data traffic between the controllers is carried across this network, along with the storage traffic to the ESXi hosts. Data traffic between ESXi and HyperFlex is NFS. The HX Cluster IP address is attached to ESXi as an NFS server.
- There are two prerequisites: first, vCenter and the HyperFlex Installer have already been deployed; second, the KVM IP pool for out-of-band management has already been created in UCS Manager.
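As a quick sanity check on the second prerequisite, the ucsmsdk sketch below (host and credentials are placeholders) lists the address blocks defined in the domain's IP pools, including the ext-mgmt pool used for KVM/CIMC addresses:

```python
# Sketch: confirm the KVM/out-of-band IP pool already has address blocks.
# Assumes ucsmsdk; the host and credentials are placeholders.
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("ucsm.example.local", "admin", "password")
handle.login()
try:
    # ippoolBlock objects are the address ranges inside IP pools such as ext-mgmt.
    for block in handle.query_classid("ippoolBlock"):
        print(block.dn, block.r_from, "-", block.to)
finally:
    handle.logout()
```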
Step By Step
After logging into the HyperFlex Installer with a web browser, click the Create Cluster drop-down and select Standard Cluster.
On the next screen, there are two options. You can install HyperFlex from scratch or upload an existing configuration file. In this instance, we will be installing from scratch.
The installer will log into UCS Manager and discover the available servers. Since this is a shared UCS environment, it's important to select only the unassociated servers; the associated servers belong to the existing cluster.
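If you want to double-check which servers are unassociated before continuing, a short ucsmsdk query (again a sketch with placeholder credentials) lists each rack server and its association state:

```python
# Sketch: list rack servers and their association state in the shared UCS domain.
# Assumes ucsmsdk; the host and credentials are placeholders.
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("ucsm.example.local", "admin", "password")
handle.login()
try:
    for server in handle.query_classid("computeRackUnit"):
        # Servers already bound to a service profile report "associated";
        # the new NVMe nodes should not.
        print(server.dn, server.model, server.serial, server.association)
finally:
    handle.logout()
```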
After clicking Continue, it's time to configure the VLANs for HyperFlex connectivity inside of UCS Manager. Note that the names are customized from the defaults to make them easy to identify with the NVMe cluster. The VLAN IDs discussed earlier should be entered, as well as the out-of-band IP addresses. The critical part is to configure the MAC Pool Prefix properly to avoid conflicts with other UCS equipment in the data center; in this case, 66 was chosen as the prefix. Last, a Cluster Name and Org Name must be selected, and neither can already exist in UCS Manager.
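Before settling on a MAC pool prefix and an org name, it helps to see what already exists in the domain. The sketch below (ucsmsdk, placeholder credentials) prints the MAC address blocks and organizations currently defined so you can steer clear of conflicts:

```python
# Sketch: review existing MAC pool blocks and organizations to avoid conflicts.
# Assumes ucsmsdk; the host and credentials are placeholders.
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("ucsm.example.local", "admin", "password")
handle.login()
try:
    # MAC address blocks already allocated somewhere in the domain.
    for block in handle.query_classid("macpoolBlock"):
        print("MAC block:", block.r_from, "-", block.to)

    # Organizations; the installer's Org Name must not already exist here.
    for org in handle.query_classid("orgOrg"):
        print("Org:", org.dn)
finally:
    handle.logout()
```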
Next up is to enter the network settings and hostnames for ESXi. One great feature is the checkbox to make IP addresses and hostnames sequential: after entering the information for the first server, it will auto-populate the others. At the bottom of the screen (not shown), the root username and password are configured.
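The sequential checkbox simply increments the starting IP address and hostname suffix for each node. Purely as an illustration of that pattern (the hostname convention and starting address below are made up), a few lines of standard-library Python show the expansion the installer performs for a four-node cluster:

```python
# Sketch: illustrate sequential hostname/IP expansion for a 4-node cluster.
# The hostname pattern and starting address are hypothetical examples.
import ipaddress

NODES = 4
start_ip = ipaddress.ip_address("10.0.37.6")   # placeholder starting address
hostname_pattern = "hx-nvme-esxi-{:02d}"       # placeholder naming convention

for n in range(NODES):
    print(hostname_pattern.format(n + 1), start_ip + n)
```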
On the next screen, enter all of the IP addresses for the HyperFlex storage controllers and the ESXi vmkernel for NFS traffic. Make sure to enter the IP addresses under the correct VLAN heading.
On the next screen, enter the cluster name for the HX Cluster inside of HyperFlex and choose the replication factor. Under vCenter Configuration, enter the Datacenter and Cluster names that will be created inside of vCenter. Last, enter the DNS server, NTP server, and DNS domain name and click Start.
At this point, the HyperFlex installer will validate the settings and alert on any misconfigurations it finds. During our first install attempt, we used the default VLAN names (which already existed in UCS Manager from the All-Flash installation); the installer allowed us to go back and correct the settings.
After validation is complete, the UCS Manager configuration starts.
The installer will run through to completion and then open the HyperFlex Connect interface. Log in with administrator@vsphere.local.
After logging in, we get our first look at the HyperFlex Connect interface. We can see that the cluster is online, healthy, and can tolerate one node failure.
Logging into vCenter, we can see that HyperFlex installer has automatically created the data center and cluster with the names that were given. Additionally, the hosts have been added along with the storage controllers that were created during the installation.
The installation is complete, but a shared NFS datastore hasn't yet been created for our VMs. To create a datastore, we go back to HyperFlex Connect: under Manage, click Datastores. Here we can see all HyperFlex datastores. (This screenshot was taken after an initial datastore had been created.)
After clicking Create Datastore, the following screen appears asking for a datastore name, size, and block size. Note that the datastore size can be increased or decreased later, but the block size is fixed.
After clicking the blue Create Datastore button, HyperFlex will start the process of provisioning the NFS datastore and refresh the list of datastores.
After the datastore is created, we are taken back to the datastore summary. Notice that the datastores show a status of Mounted.
In vCenter, HyperFlex has automatically mounted the new datastore to the hosts in our cluster and it's ready to be used.
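If you prefer to verify the mount programmatically rather than through the vSphere client, the pyVmomi sketch below lists each host's NFS datastores and the NFS server they point at, which should be the HX cluster storage IP. The vCenter address and credentials are placeholders, and certificate verification is disabled only because this is a lab sketch.

```python
# Sketch: list the NFS datastores mounted on each ESXi host via pyVmomi.
# The vCenter address and credentials are placeholders; certificate checking is
# disabled only for this lab sketch.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True)
    for host in view.view:
        for ds in host.datastore:
            info = ds.info
            # NFS-backed datastores expose the remote server and export path.
            if isinstance(info, vim.host.NasDatastoreInfo):
                print(host.name, ds.name, info.nas.remoteHost, info.nas.remotePath)
finally:
    Disconnect(si)
```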
This concludes the walkthrough of the Cisco HyperFlex installation. We hope you have found this information helpful.