NSX 4.0.1 Stateful Active-Active Gateway – Part 4 – Edge Sub-Clusters and Failure Domains


Welcome to Part 4, the final part of the blog series on stateful active-active gateways in NSX 4.0.1. In the previous parts we discussed stateful active-active single-tier & two-tier routing, new terminology like edge sub-clusters, interface groups, shadow interfaces, peer-shadow interfaces and traffic punting, and walked through packet flows for the different supported topologies. If you missed those parts, please check them out below:

Part 1: https://vxplanet.com/2023/01/24/nsx-4-0-1-stateful-active-active-gateway-part-1-single-tier-routing/

Part 2: https://vxplanet.com/2023/01/30/nsx-4-0-1-stateful-active-active-gateway-part-2-two-tier-routing/

Part 3: https://vxplanet.com/2023/02/08/nsx-4-0-1-stateful-active-active-gateway-part-3-routing-considerations-and-packet-walks/

In this Part 4, we will take a look at edge failure domains and how to influence edge node selection for the edge sub-clusters in stateful active-active gateways. With failure domains in place, we ensure that the edge nodes selected for a sub-cluster always belong to separate availability zones.

Let’s get started:

Edge sub-clusters and failure domains

The sketch below shows two availability zones (vSphere edge clusters) across which the edge cluster hosting the stateful active-active gateways spans. Edge nodes 1 & 2 (vxdc01-c01-edge01 and vxdc01-c01-edge02) belong to AZ01 and are deployed on vSphere edge cluster 01, local to AZ01. Edge nodes 3 & 4 (vxdc01-c01-edge03 and vxdc01-c01-edge04) belong to AZ02 and are deployed on vSphere edge cluster 02, local to AZ02. Without edge failure domains in place, edge node selection for the edge sub-clusters in stateful active-active gateways is non-deterministic. Having each edge sub-cluster’s member edge nodes distributed across the two availability zones (e.g. vxdc01-c01-edge01 in AZ01 and vxdc01-c01-edge03 in AZ02) offers an extra layer of resilience against availability zone failures (such as a rack failure), and that is exactly what we are going to accomplish shortly.

The sketch below shows the four edge nodes that are part of the edge cluster “vxdc01-c01-ec01”. We will map them to their respective failure domains in the next section.

Creating edge failure domains

Creation of edge failure domains is currently done via the NSX API. The VMware documentation below illustrates the procedure:

https://docs.vmware.com/en/VMware-NSX/4.0/administration/GUID-FABBBD3C-0928-4E7B-BC30-F04070A76517.html
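
For reference, creating a failure domain boils down to a single POST against the NSX Manager API. Below is a minimal curl sketch; the manager FQDN “nsxmanager.vxdc01.local” and the credentials are placeholders for this lab:

# Create a failure domain named VxDC01-AZ01 (repeat for VxDC01-AZ02)
curl -k -u 'admin:<password>' \
  -H "Content-Type: application/json" \
  -X POST https://nsxmanager.vxdc01.local/api/v1/failure-domains \
  -d '{"display_name": "VxDC01-AZ01"}'

The response contains the failure domain object, including the “id” that is referenced later when mapping edge nodes.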

I have a GitHub project called “NSX Failure Domain Creator”, which I wrote a few months ago, that automates the creation & deletion of NSX failure domains and the mapping / un-mapping of edge nodes to failure domains. Please check it out here:

https://github.com/harikrishnant/NsxFailureDomainCreator

Let’s use this tool to automate the edge failure domain tasks in this blog post. The project also ships with the necessary documentation (README.md) to get started.

Let’s clone the GitHub repository and navigate to NsxFailureDomainCreator:

git clone https://github.com/harikrishnant/NsxFailureDomainCreator.git && cd NsxFailureDomainCreator

Next, we will run the NsxFailureDomainCreator tool and follow the instructions on the menu screen.

python3 failure_domain_creator.py -i <NSX Manager IP/FQDN> -u <NSX_admin_user> -p <NSX_admin_password> 
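
For example, against the same placeholder manager FQDN used above:

python3 failure_domain_creator.py -i nsxmanager.vxdc01.local -u admin -p '<NSX_admin_password>'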

Option 2 will create edge failure domains. We will create two failure domains, VxDC01-AZ01 and VxDC01-AZ02.

Option 1 will print the list of failure domains that we just created:
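
The same list can also be pulled directly from the API if you want to verify outside the tool (same placeholder FQDN as before):

# List all failure domains along with their ids and display names
curl -k -u 'admin:<password>' https://nsxmanager.vxdc01.local/api/v1/failure-domains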

Next, we will map the edge nodes to their respective failure domains.

Mapping edge nodes to failure domains

Option 4 will print the list of edge nodes and their current failure domain mappings.

Option 5 will map edge nodes to failure domains. Let’s map vxdc01-c01-edge01 and vxdc01-c01-edge02 to the availability zone VxDC01-AZ01.

Next, we will map vxdc01-c01-edge03 and vxdc01-c01-edge04 to the availability zone VxDC01-AZ02.
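
Under the hood, mapping an edge node to a failure domain is an update to its transport node object: fetch the node, add the failure domain id and PUT the full object back. A rough sketch, with the node and failure domain ids as placeholders:

# Fetch the edge transport node configuration
curl -k -u 'admin:<password>' \
  https://nsxmanager.vxdc01.local/api/v1/transport-nodes/<edge-node-id> -o node.json

# Edit node.json to add "failure_domain_id": "<failure-domain-id>",
# then push the complete object back (NSX expects the full body on a PUT)
curl -k -u 'admin:<password>' \
  -H "Content-Type: application/json" \
  -X PUT https://nsxmanager.vxdc01.local/api/v1/transport-nodes/<edge-node-id> \
  -d @node.json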

We will now print the mapping table using Option 4 and confirm the edge node mappings to availability zones.

Adding failure domain allocation policies

Option 7 will check whether a failure domain allocation policy is applied to the edge cluster.

We will use Option 8 to set the edge cluster allocation policy based on failure domains.

Let’s confirm the allocation policy mapping using Option 7 again.
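
The allocation policy itself lives on the edge cluster object and follows the same GET-edit-PUT pattern. A sketch, with the edge cluster id as a placeholder:

# Fetch the edge cluster JSON, add the rule below, and PUT the complete object back:
#
#   "allocation_rules": [
#     { "action": { "enabled": true, "action_type": "AllocationBasedOnFailureDomain" } }
#   ]
#
curl -k -u 'admin:<password>' \
  -H "Content-Type: application/json" \
  -X PUT https://nsxmanager.vxdc01.local/api/v1/edge-clusters/<edge-cluster-id> \
  -d @edge-cluster.json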

At this point, we are done with the edge failure domain configuration. Now let’s log out of the tool using Option 10.

Creating stateful active-active T0/T1 gateway

Let’s create a stateful active-active T0 gateway and confirm that the sub-clusters are created based on failure domains. We should see two edge sub-clusters for the stateful active-active gateway: one with vxdc01-c01-edge01 (AZ01) and vxdc01-c01-edge03 (AZ02) as members, and the other with vxdc01-c01-edge02 (AZ01) and vxdc01-c01-edge04 (AZ02) as members.

Let’s also confirm the sub-cluster membership using the CLI.
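
For example, from an SSH session to one of the edge nodes (a sketch; the exact output format depends on the NSX version):

get logical-routers

This lists the logical router instances hosted on the edge node; note the UUID of the tier-0 service router. Then:

get logical-router <lr-uuid> high-availability status

For a stateful active-active gateway, the high-availability status of the service router should line up with the sub-cluster the edge node was placed into.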

Now let’s create a stateful active-active T1 gateway downstream of the stateful T0 gateway we just created, and verify the sub-cluster membership.

We see that the edge sub-clusters for the T1 gateway also have edge node membership based on failure domains.

Now that’s a wrap. I hope the blog series was informative and helpful.

Thanks for reading.

Continue reading? Here are the other parts of this series:

Part 1: https://vxplanet.com/2023/01/24/nsx-4-0-1-stateful-active-active-gateway-part-1-single-tier-routing/

Part 2: https://vxplanet.com/2023/01/30/nsx-4-0-1-stateful-active-active-gateway-part-2-two-tier-routing/

Part 3: https://vxplanet.com/2023/02/08/nsx-4-0-1-stateful-active-active-gateway-part-3-routing-considerations-and-packet-walks/
