Azure Stack HCI Deployment

Since Microsoft announced the release of Azure Stack HCI in 2020 I’ve been eager to get hold of a couple of nodes.

Now you may be wondering: hasn’t Azure Stack HCI been around since 2019? Well yes, sort of. In 2019 Azure Stack HCI was still running Windows Server 2019 at its core. The latest 2020 release includes a new OS, which can be found here.

So, what is Azure Stack HCI? It’s a hyperconverged offering from Microsoft and its OEM partners which allows you to run your servers in your own datacentres, but also extend your datacentre into Azure with features such as:

  • Azure Backup
  • Azure Policy
  • Azure Site Recovery
  • Azure Monitor
  • Azure Security Center
  • Azure Automation
  • Azure Support
  • Azure Network Adapter

For companies who are thinking of moving into Azure but are still a little nervous, and who want to keep using their current management tools such as Windows Admin Center, this is a great first step.

Switchless Storage

One of the things I like about the Azure Stack HCI deployment is the range of configuration options. For larger organizations you can run management over 1GbE, VM traffic over dedicated 10GbE, and storage over its own dedicated 10GbE ports. For smaller environments, up to 4 nodes in a cluster, you can use the 1GbE ports for management and VM traffic, while storage still utilizes the 10GbE ports via direct connection.

In an ideal world you would want VM traffic over the 10GbE ports especially in larger environments, however in smaller environments you could get away with the 1GbE ports.

As part of a recent 2-node cluster deployment I had 2 x Dell AX640 nodes, each with an additional dual-port QLogic FastLinQ 41262 card, directly connected to each other for storage traffic. Management and VM traffic were configured on the 1GbE ports.

One important thing to note if you are going to run management and VM traffic over the same NICs: ensure the native (untagged) VLAN on the trunk port is the management VLAN.
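As an illustration, a trunk port carrying both networks might look like the following on a Cisco IOS-style switch. The interface name and VLAN IDs (10 for management as the native VLAN, 20 for VM traffic) are made-up examples for your own environment:

```
interface GigabitEthernet1/0/1
 description HCI-Node1-Mgmt-and-VM
 switchport mode trunk
 switchport trunk native vlan 10
 switchport trunk allowed vlan 10,20
```

Because VLAN 10 is native, the node’s untagged management traffic lands on the management VLAN, while tagged VM traffic rides VLAN 20.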

Operating System and Drivers

The nodes should be shipped with the OS and drivers pre-installed, however if they aren’t, or you need to reinstall the OS, you can download the OS here. You will also have to download the drivers from the hardware manufacturer; make sure you download the drivers for the Azure Stack HCI OS, as these may be different from the Windows Server 2019 drivers.

I won’t go through the OS or driver installation as these are pretty standard for any server. For the Dell nodes I mounted the OS via the iDRAC, completed the installation, then mounted the driver ISO and installed the drivers via PowerShell.
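As a rough sketch, installing every driver package from a mounted ISO can be done with pnputil; the drive letter below is an assumption for illustration, so adjust it for your environment:

```powershell
# Assumes the driver ISO is mounted as E: - adjust for your environment.
# Recursively stage and install every driver package found on the ISO.
pnputil /add-driver "E:\*.inf" /subdirs /install

# Sanity check: confirm the NICs are now using the expected driver versions.
Get-NetAdapter | Select-Object Name, InterfaceDescription, DriverVersion
```

A reboot may be required before the new drivers take effect.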

Deployment Requirements

For the deployment there are a few things worth noting.

  • Management IPs for the nodes
  • DNS IP addresses
  • Windows Admin Center installed on a server
  • Azure subscription to register the Azure Stack HCI cluster
  • Cluster witness for a 2-node cluster; we will use an Azure storage account as the cluster witness
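Once the cluster is formed, the Azure storage account witness can be configured with a single PowerShell command. The cluster name, account name, and key below are placeholders:

```powershell
# Configure a cloud witness backed by an Azure storage account.
# "HCI-Cluster" and "mywitnessaccount" are placeholders - use your own values.
Set-ClusterQuorum -Cluster "HCI-Cluster" -CloudWitness `
    -AccountName "mywitnessaccount" `
    -AccessKey "<storage-account-access-key>"
```

The nodes need outbound HTTPS access to Azure for the witness to stay healthy.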

Deployment Guide

For the actual deployment I followed Microsoft’s deployment guide, which can be found here. There are a couple of things worth pointing out.

If you are utilizing the direct connect method as above, when you get to the virtual switch creation in section 7 of the guide, select one virtual switch for compute and storage.
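If you prefer PowerShell over the wizard, a single Switch Embedded Teaming (SET) virtual switch spanning the direct-connected ports could be sketched as below. The switch and adapter names are examples, so check yours with Get-NetAdapter first:

```powershell
# Create one SET-enabled virtual switch for compute and storage.
# Adapter names are examples - list yours with Get-NetAdapter.
New-VMSwitch -Name "ConvergedSwitch" `
    -NetAdapterName "SLOT 3 Port 1", "SLOT 3 Port 2" `
    -EnableEmbeddedTeaming $true `
    -AllowManagementOS $false
```

Windows Admin Center performs the equivalent of this when you select one virtual switch in the wizard.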

When defining the network in section 2.5, you should receive a request to enable Credential Security Service Provider (CredSSP); if you select No, or don’t receive the request, this section will fail.

CredSSP troubleshooting steps can be found here.
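If CredSSP needs enabling manually, the underlying cmdlets look like this; the node names are placeholders:

```powershell
# On each Azure Stack HCI node: allow the node to accept CredSSP credentials.
Enable-WSManCredSSP -Role Server

# On the Windows Admin Center server: delegate credentials to the nodes.
# "node01" and "node02" are placeholder names.
Enable-WSManCredSSP -Role Client -DelegateComputer "node01", "node02"

# Once the failing step has completed, disable CredSSP again.
Disable-WSManCredSSP -Role Client
```

CredSSP is best left disabled when not in use, so turn it back off once the deployment step succeeds.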

The SDN section is optional depending on your requirements. Once the initial setup is complete you should get the following confirmation.

Connecting Azure Stack HCI to Azure

In order to register the cluster with Azure, you will first need to register Windows Admin Center with Azure. Once complete you can register the cluster:

Open Windows Admin Center and select Settings from the very bottom of the Tools menu at the left. Then select Azure Stack HCI registration from the bottom of the Settings menu. If your cluster has not yet been registered with Azure, then Registration status will say Not registered. Click the Register button to proceed. You can also select Register this cluster from the Windows Admin Center dashboard.

Additional information on registering the cluster with Azure can be found here.
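Registration can also be done from PowerShell using the Az.StackHCI module; the subscription ID and node name below are placeholders:

```powershell
# Install the Azure Stack HCI registration module, then register the cluster.
Install-Module -Name Az.StackHCI
Register-AzStackHCI -SubscriptionId "<your-subscription-id>" -ComputerName "node01"

# Check the registration and connection status afterwards.
Get-AzureStackHCI
```

You will be prompted to sign in to Azure during registration.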

Additional Information

In order to run Azure Stack HCI there are a few bits of information worth knowing.

  • Unlike in Azure, you will need to provide all OS licenses for your workloads.
  • Each server in the cluster is required to connect back to Azure endpoints at least once every 30 days.
  • In order to utilize the Azure features, Microsoft do charge £8 per physical core per month (UK regions); additional information can be found here.
  • The current version of Azure Stack HCI is 20H2. Public preview of 21H2 is available.
  • Azure Arc integration is currently only available on version 21H2.
  • Azure Stack HCI documentation can be found here.

Overall Impressions

As mentioned previously, I do like Azure Stack HCI: it is easy to set up and integrates well with Azure. If you are looking to build a hybrid cloud environment then it’s certainly worth a shout.

Nutanix Cluster Deployment

I’ve deployed quite a few Nutanix solutions over the last few years. Each deployment is slightly different, whether it’s been installing with AHV or ESXi, installing directly with customer switches, or using an unmanaged switch for the initial deployment.

I thought I would go through the installation process using an unmanaged switch.

Before deployment, check the compatibility matrix to ensure you aren’t going to hit any snags down the line. With Nutanix this is a pretty straightforward process, and the matrix can be found here.

Foundation is a tool which allows you to deploy and configure a bare-metal environment from start to end. The Foundation software can be found at: . Nutanix’s recommendation is always to use the VM image, so I will be using the “Standalone Foundation VM image for importing into VirtualBox or ESXi”.

Download the AOS version. Nutanix releases short-term support (STS) and long-term support (LTS) versions; I would normally select the latest LTS. If you are installing AHV you can download it from the Nutanix portal, or from the VMware portal if you are using ESXi.

As I am using an unmanaged switch which does not support IPv6 I will be assigning the IP address to the IPMI ports manually.
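On most Nutanix hardware a static IPMI address can be set from the node’s shell with ipmitool (or via the BIOS setup screen). The addresses below are examples:

```shell
# Set a static IP on the first IPMI LAN channel (example addressing).
ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 10.0.0.11
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 10.0.0.1

# Confirm the settings took effect.
ipmitool lan print 1
```

Repeat for each node, incrementing the address.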

Next, install the Foundation VM image into either VMware Workstation or VirtualBox. Once the OVF has been deployed and powered on, you can log in with the default username (nutanix) and password (nutanix/4u).

Once logged in, assign an IP address to the virtual machine. To do this, click on “set_foundation_ip_address” and click on “Run in Terminal”.

As the IPMI and CVM/host NICs will be on separate VLANs, I added a second NIC to the Foundation VM and configured one NIC with an IP address on the IPMI VLAN and the other with an IP on the CVM/host VLAN. This is a very important step: IPMI and the CVMs/hosts need to communicate during the Foundation phase or the installation will fail.
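On the Foundation VM the two interfaces can also be addressed from the terminal; interface names and addresses below are examples for illustration:

```shell
# eth0 on the IPMI VLAN, eth1 on the CVM/host VLAN (example addressing).
sudo ip addr add 10.0.0.5/24 dev eth0
sudo ip addr add 10.0.1.5/24 dev eth1
sudo ip link set eth0 up
sudo ip link set eth1 up

# Verify both interfaces are up with the expected addresses.
ip addr show
```

Note these addresses won’t persist across a reboot unless written to the interface configuration files.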

Once all IPs have been configured, connect the IPMI ports and one additional Ethernet port from each node to the unmanaged switch. Don’t forget to connect the laptop you are using to the unmanaged switch. Ping each of the IPMI ports to ensure connectivity. If you are unable to connect to the IPMI ports, you will need to troubleshoot this before continuing to the next step.
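A quick way to check all the IPMI ports in one go from the laptop or Foundation VM; the IP addresses in the list are examples:

```shell
# Ping each IPMI address once and flag any that don't respond.
for ip in 10.0.0.11 10.0.0.12 10.0.0.13; do
  if ping -c 1 -W 2 "$ip" > /dev/null 2>&1; then
    echo "$ip reachable"
  else
    echo "$ip UNREACHABLE"
  fi
done
```

Any address reported UNREACHABLE needs investigating before moving on.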

Once you are able to connect to all IPMI ports you can launch the Nutanix installer, either from the installation VM or locally on the laptop: http://installationVMIPaddress:8000/gui/index.html

The installer is broken down into 6 sections:

  • Start
  • Nodes
  • Cluster
  • AOS
  • Hypervisor
  • IPMI

As we are using an unmanaged switch we can’t use separate VLANs for OOB management and the CVMs/hosts, so on the “Start” screen, under point 6 “To assign a VLAN to host/CVM’s enter a tag”, enter “0”. Fill in the rest of the details. Do not skip the validation at the end of the page, as it will highlight any potential issues.

Once all validation is complete, go to the next page, “Nodes”. As you are using an unmanaged switch, you will need to add the nodes manually.

Add the number of blocks and nodes for your environment, and select the option matching how you have configured the IPMI ports.

Complete all the IP addresses for the IPMI, CVMs and hosts, adding all hostnames.

On the cluster screen fill in all the required details including:

  • Cluster name
  • Time Zone
  • Redundancy factor
  • Cluster Virtual IP
  • NTP servers (Nutanix recommend adding five)
  • DNS
  • CVM vRAM

Next, upload the AOS version which you downloaded previously. If you have launched the installer wizard on the installation VM you will need to copy the AOS software to the installer; otherwise you can browse to the AOS file and upload it.

Follow the same steps for the hypervisor, selecting the hypervisor and uploading the ISO.

Lastly, fill in the IPMI credentials. To do this, click on Tools and select the hardware vendor; this will automatically fill in the username and password.

Once everything has been filled in, click Start for the cluster creation. Once the installation process has started you can click on the logs for each node/host to see the progress.

If the deployment gets stuck after a few minutes with the message “INFO waiting for remote node to boot into phoenix, this may take 20 mins”, then you most likely have a routing issue between the IPMI and CVM/host networks.

Once all nodes have been configured the cluster will automatically be created as the final installation step.

If you are moving the nodes onto the customer’s network, stop the cluster services and configure all relevant VLANs on each host and CVM. Once all the nodes have been connected to the network, ensure you can connect to all hosts and CVMs before starting the cluster again.
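For reference, on AHV this step usually comes down to the commands below; the VLAN ID of 10 is an example:

```shell
# On any one CVM: stop cluster services before re-tagging.
cluster stop

# On each CVM: tag the CVM's interface with the production VLAN (example ID).
change_cvm_vlan 10

# On each AHV host: tag the host's bridge interface with the same VLAN.
ovs-vsctl set port br0 tag=10

# Once all hosts and CVMs are reachable on the new network, restart the cluster.
cluster start
```

Re-tag one node at a time and confirm connectivity as you go; losing access to both the host and its CVM mid-change means a trip back to the console.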

Nutanix has made deploying a cluster pretty stress free, provided you have checked the compatibility matrix and the networking is correct.