Azure Firewall Routing

I recently had a customer who had a small Azure environment and was looking to add an Azure Firewall with the following requirements:

  • All traffic between their on-premises environment and Azure must flow via an Azure Firewall.
  • All VM-to-internet traffic should utilise the firewall’s public IP address.

As you may be aware, all VMs in Azure can reach the internet via the Azure backbone; a VM does not require a public IP address to connect outbound. This obviously raises some security concerns. There are a couple of ways around this. One is forced tunnelling, which redirects all internet traffic back to your on-premises network, via S2S VPN or ExpressRoute, for inspection by your on-premises firewall. The other is to add a firewall in Azure, which lets you direct all internet traffic via the firewall in Azure, and also direct all traffic between Azure and on-premises (and vice versa) through it.

Below is a quick overview of the design. Red is the flow from VM to internet; blue is the flow between on-premises and Azure.

In the above example we have a hub VNet on 10.0.0.0/16 with a S2S VPN gateway and an Azure Firewall on 10.0.3.4, plus a subnet on 10.0.2.0/24 containing a single VM on 10.0.2.4. The spoke VNet (10.1.0.0/16) is peered to the hub VNet with gateway transit enabled, and contains one subnet, 10.1.1.0/24, with a single VM on 10.1.1.4.
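
If you prefer to script the hub-and-spoke setup rather than click through the portal, the peering with gateway transit can be created with the Azure SDK for Python (azure-mgmt-network). This is a minimal sketch only; the subscription ID, resource group and VNet names are placeholders I’ve assumed, not values from the environment above.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUB_ID = "<subscription-id>"          # placeholder
RG = "rg-network"                     # placeholder resource group
client = NetworkManagementClient(DefaultAzureCredential(), SUB_ID)

hub = client.virtual_networks.get(RG, "hub-vnet")      # placeholder VNet names
spoke = client.virtual_networks.get(RG, "spoke-vnet")

# Hub side of the peering: allow the spoke to use the hub's VPN gateway
client.virtual_network_peerings.begin_create_or_update(
    RG, "hub-vnet", "hub-to-spoke",
    {
        "remote_virtual_network": {"id": spoke.id},
        "allow_virtual_network_access": True,
        "allow_forwarded_traffic": True,
        "allow_gateway_transit": True,
    },
).result()

# Spoke side: use the hub's gateway for on-premises connectivity
client.virtual_network_peerings.begin_create_or_update(
    RG, "spoke-vnet", "spoke-to-hub",
    {
        "remote_virtual_network": {"id": hub.id},
        "allow_virtual_network_access": True,
        "allow_forwarded_traffic": True,
        "use_remote_gateways": True,
    },
).result()
```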

The next step is to ensure all internet traffic goes via the firewall and not directly over the Azure backbone. Before making any changes, a quick baseline is to find which IP address the VMs are currently using to access the internet: log into each VM, open your favourite browser and browse to https://www.whatismyip.com/ . Both VMs will most likely show the same IP address.
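
If you’d rather check this from a terminal than a browser, a quick Python snippet against a plain-text IP echo service does the same job (I’m using api.ipify.org here as an example; any similar service works):

```python
import urllib.request

# Prints the public IP address this VM is using for outbound internet traffic
print(urllib.request.urlopen("https://api.ipify.org", timeout=10).read().decode())
```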

In order to route all internet traffic to the firewall you will need to create a default route (0.0.0.0/0) with the firewall as the next hop. If you prefer to script this rather than use the portal, a sketch follows the two route tables below.

First for the hub VNet:

  • Route Table: hubtofirewall
  • Route Name: hubtointernet
  • Address Prefix: 0.0.0.0/0
  • Next hop type: Virtual Appliance
  • Next Hop Address: 10.0.3.4
  • Associated subnet: 10.0.2.0/24

Next the spoke VNet:

  • Route Table: spoketofirewall
  • Route Name: spoketointernet
  • Address Prefix: 0.0.0.0/0
  • Next hop type: Virtual Appliance
  • Next Hop Address: 10.0.3.4
  • Associated subnet: 10.1.1.0/24
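
Both tables can be created and associated programmatically with the same SDK. The sketch below assumes everything lives in one placeholder resource group and uses assumed VNet, subnet and region names; adjust them to your own environment.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUB_ID = "<subscription-id>"          # placeholder
RG = "rg-network"                     # placeholder resource group
FW_IP = "10.0.3.4"                    # Azure Firewall private IP
client = NetworkManagementClient(DefaultAzureCredential(), SUB_ID)

# (route table, route name, VNet, subnet) for the hub and spoke workload subnets
tables = [
    ("hubtofirewall", "hubtointernet", "hub-vnet", "workload-subnet"),
    ("spoketofirewall", "spoketointernet", "spoke-vnet", "workload-subnet"),
]

for rt_name, route_name, vnet, subnet_name in tables:
    # Create the route table with a default route pointing at the firewall
    rt = client.route_tables.begin_create_or_update(
        RG, rt_name,
        {
            "location": "uksouth",    # placeholder region, match your VNets
            "routes": [{
                "name": route_name,
                "address_prefix": "0.0.0.0/0",
                "next_hop_type": "VirtualAppliance",
                "next_hop_ip_address": FW_IP,
            }],
        },
    ).result()

    # Associate the route table with the workload subnet
    subnet = client.subnets.get(RG, vnet, subnet_name)
    subnet.route_table = rt
    client.subnets.begin_create_or_update(RG, vnet, subnet_name, subnet).result()
```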

To confirm that internet traffic is now traversing the firewall, log back into the VMs and browse to https://www.whatismyip.com/ again; you will notice the IP has changed to the firewall’s public IP address.

The next step is to route all traffic between Azure and on-premises via the firewall. This involves routing traffic destined for the on-premises network (192.168.1.0/24 in this example) to the firewall. In the first route table, hubtofirewall, add the following route:

  • Route Name: hubtoonprem
  • Address Prefix: 192.168.1.0/24
  • Next hop type: Virtual Appliance
  • Next Hop Address: 10.0.3.4

Add the same details to the second route table, changing only the route name (a scripted version of both routes follows below):

  • Route Name: spoketoonprem
  • Address Prefix: 192.168.1.0/24
  • Next hop type: Virtual Appliance
  • Next Hop Address: 10.0.3.4
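
The same two routes can also be added from the SDK. This sketch assumes the route tables created earlier and the same placeholder resource group.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUB_ID = "<subscription-id>"          # placeholder
RG = "rg-network"                     # placeholder resource group
client = NetworkManagementClient(DefaultAzureCredential(), SUB_ID)

# Add a route for the on-premises prefix to each existing route table
for rt_name, route_name in [("hubtofirewall", "hubtoonprem"),
                            ("spoketofirewall", "spoketoonprem")]:
    client.routes.begin_create_or_update(
        RG, rt_name, route_name,
        {
            "address_prefix": "192.168.1.0/24",   # on-premises network
            "next_hop_type": "VirtualAppliance",
            "next_hop_ip_address": "10.0.3.4",    # firewall private IP
        },
    ).result()
```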

These routes can be confirmed by viewing the effective routes on a NIC in each subnet:

Hubtofirewall

Spoketofirewall
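
Effective routes are evaluated per network interface rather than per route table, so the same check can be scripted against a VM’s NIC. The NIC name below is a placeholder.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUB_ID = "<subscription-id>"          # placeholder
RG = "rg-network"                     # placeholder resource group
client = NetworkManagementClient(DefaultAzureCredential(), SUB_ID)

# Query the effective route table of the hub VM's network interface
routes = client.network_interfaces.begin_get_effective_route_table(
    RG, "hub-vm-nic"                  # placeholder NIC name
).result()

for route in routes.value:
    # address_prefix and next_hop_ip_address are returned as lists
    print(route.source, route.state, route.address_prefix,
          route.next_hop_type, route.next_hop_ip_address)
```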

Now that all traffic from Azure is routed to the firewall, the last step is to route traffic arriving from the on-premises gateway to the firewall. Create a third route table, associated with the GatewaySubnet (a scripted version follows the two routes below):

  • Route Table: gateway
  • Route Name: gatewaytohub
  • Address Prefix: 10.0.0.0/16
  • Next hop type: Virtual Appliance
  • Next Hop Address: 10.0.3.4
  • Associated subnet: GatewaySubnet

Finally, add the last route, for the spoke address space, to the same route table:

  • Route Name: gatewaytospoke
  • Address Prefix: 10.1.0.0/16
  • Next hop type: Virtual Appliance
  • Next Hop Address: 10.0.3.4
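
As with the earlier tables, the gateway route table can be scripted. This sketch again uses the placeholder resource group, region and hub VNet name, and associates the table with the GatewaySubnet.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUB_ID = "<subscription-id>"          # placeholder
RG = "rg-network"                     # placeholder resource group
client = NetworkManagementClient(DefaultAzureCredential(), SUB_ID)

# Route table for the GatewaySubnet: send traffic for both VNets to the firewall
gw_rt = client.route_tables.begin_create_or_update(
    RG, "gateway",
    {
        "location": "uksouth",        # placeholder region
        "routes": [
            {"name": "gatewaytohub", "address_prefix": "10.0.0.0/16",
             "next_hop_type": "VirtualAppliance", "next_hop_ip_address": "10.0.3.4"},
            {"name": "gatewaytospoke", "address_prefix": "10.1.0.0/16",
             "next_hop_type": "VirtualAppliance", "next_hop_ip_address": "10.0.3.4"},
        ],
    },
).result()

# Associate it with the GatewaySubnet in the hub VNet
subnet = client.subnets.get(RG, "hub-vnet", "GatewaySubnet")
subnet.route_table = gw_rt
client.subnets.begin_create_or_update(RG, "hub-vnet", "GatewaySubnet", subnet).result()
```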

Provided the firewall is configured to allow traffic in both directions, all traffic should now traverse the firewall.

Job done 🙂

Nutanix Cluster Deployment

I’ve deployed quite a few Nutanix solutions over the last few years. Each deployment is slightly different, whether it’s been installing with AHV or ESXi, installing directly onto customer switches, or using an unmanaged switch for the initial deployment.

I thought I would go through the installation process using an unmanaged switch.

Before deployment, check the compatibility matrix to ensure you aren’t going to hit any snags down the line. With Nutanix this is a pretty straightforward process and the matrix can be found here: https://portal.nutanix.com/page/documents/compatibility-matrix/hardware

Foundation is a tool which allows you to deploy and configure a bare-metal environment from start to finish. The Foundation software can be found at: https://portal.nutanix.com/#/page/Foundation . Nutanix’s recommendation is to always use the VM image, so I will be using the “Standalone Foundation VM image for importing into VirtualBox or ESXi”.

Download the AOS version. Nutanix releases short-term support (STS) and long-term support (LTS) versions; I would normally select the latest LTS. If you are installing AHV, you can download it from the Nutanix portal, or from the VMware portal if you are using ESXi.

As I am using an unmanaged switch, which does not support IPv6 discovery, I will be assigning the IP addresses to the IPMI ports manually: https://portal.nutanix.com/page/documents/details/?targetId=Field-Installation-Guide-v4-0:v40-node-set-ipmi-address-t.html

Next is to install the Foundation VM image into either VMware Workstation or VirtualBox. Once the OVF has been deployed and powered on, you can log in with the default username (nutanix) and password (nutanix/4u).

Once logged in, assign an IP address to the virtual machine. To do this, click on “set_foundation_ip_address” and then “Run in Terminal”.

As the IPMI and CVM/host NICs will be on separate VLANs, I added a second NIC to the Foundation VM and configured one NIC with an IP address on the IPMI VLAN and the other with an IP on the CVM/host VLAN. This is a very important step: the IPMI interfaces and the CVMs/hosts need to communicate during the Foundation phase or the installation will fail.

Once all IPs have been configured, connect the IPMI ports and one additional ethernet port from each node to the unmanaged switch. Don’t forget to connect the laptop you are using to the unmanaged switch as well. Ping each of the IPMI addresses to ensure connectivity; if you are unable to reach them, you will need to troubleshoot this before continuing to the next step.
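
To save checking each address by hand, a quick reachability sweep can be run from the Foundation VM or the laptop. This is a rough sketch that simply shells out to ping; the IPMI addresses below are placeholders for whatever you assigned earlier.

```python
import subprocess

# IPMI addresses assigned earlier - replace with your own
ipmi_ips = ["192.168.5.11", "192.168.5.12", "192.168.5.13", "192.168.5.14"]

for ip in ipmi_ips:
    # One ICMP echo per address ("-c" is the Linux/macOS flag; use "-n" on Windows)
    result = subprocess.run(["ping", "-c", "1", ip],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    status = "reachable" if result.returncode == 0 else "NOT reachable"
    print(f"{ip}: {status}")
```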

Once you are able to connect to all IPMI ports you can launch the Nutanix installer, either on the Foundation VM itself or locally on the laptop: http://<foundation_VM_IP_address>:8000/gui/index.html

The installer is broken down into 6 sections:

  • Start
  • Nodes
  • Cluster
  • AOS
  • Hypervisor
  • IPMI

As we are using an unmanaged switch we can’t tag separate VLANs for OOB management and the CVM/host, therefore on the “Start” screen, under point 6 “To assign a VLAN to host/CVMs enter a tag”, enter “0”. Fill in the rest of the details. Do not skip the validation at the end of the page, as this will highlight any potential issues.

Once all validation is complete, go to the next page, “Nodes”. As you are using an unmanaged switch, you will need to add the nodes manually.

Add the number of blocks and nodes for your environment and select the option confirming that you have already configured the IPMI ports.

Complete all the IP addresses for the IPMI, CVM and host interfaces, and add all hostnames.

On the cluster screen fill in all the required details including:

  • Cluster name
  • Time Zone
  • Redundancy factor
  • Cluster Virtual IP
  • NTP servers (Nutanix recommend adding five)
  • DNS
  • CVM vRAM

Next, upload the AOS version which you downloaded previously. If you launched the installer wizard on the Foundation VM, you will need to copy the AOS software onto it first; otherwise you can simply browse to the AOS file and upload it.

Follow the same steps with the Hypervisor, selecting the hypervisor and uploading the ISO.

Lastly, fill in the IPMI credentials. To do this, click on “Tools” and select the hardware vendor; this will automatically fill in the username and password.

Once everything has been filled in, click start to begin the cluster creation. Once the installation process has started you can click on the logs for each node/host to see the progress.

If the deployment gets stuck after a few minutes with the message “INFO waiting for remote node to boot into phoenix, this may take 20 mins”, then you most likely have a routing issue between the IPMI and CVM/host networks.

Once all nodes have been configured the cluster will automatically be created as the final installation step.

If you are moving the nodes onto the customer’s network, stop the cluster services and configure all relevant VLANs on each host and CVM. Once all the nodes have been connected to the network, ensure you can connect to all the hosts and CVMs before starting the cluster again.

Nutanix has made deploying a cluster pretty stress free, provided you have checked the compatibility matrix and the networking is correct.