Multi NSX support for Compute Managers

Prior to NSX-T 3.2.2, a compute manager (vCenter Server) could be managed by only a single NSX-T (or NSX) Manager instance. This means that all clusters in the compute manager are prepared by the same NSX instance, and if data plane isolation is required between clusters, it is achieved using separate NSX transport zones and edge clusters. However, there are scenarios where customers map multiple environments (for example, Prod and non-Prod) to different vSphere clusters in the same vCenter Server and require different NSX Manager instances to manage these compute clusters: a Prod NSX manager managing the Prod vSphere cluster, a non-Prod NSX manager managing the non-Prod vSphere cluster, and so on, to achieve complete management plane and data plane isolation for the networking stack. Prior to NSX-T 3.2.2, this was hard to achieve and required splitting the clusters across multiple vCenter Servers, which was practically not a feasible approach.

Starting with NSX-T 3.2.2, we can register the same vCenter Server (compute manager) with multiple NSX Manager instances using a feature called multi-NSX. This way, a Prod NSX manager can manage the Prod vSphere cluster and a non-Prod NSX manager can manage the non-Prod vSphere cluster connected to the same vCenter server. With the multi-NSX feature, there is a change in the NSX extension key registered in vCenter: the extension key changes from the old “com.vmware.nsx.management.nsxt” to a custom extension key “com.vmware.nsx.management.nsxt.<computemanager-id>”, where <computemanager-id> is the ID of the vCenter compute manager registered in NSX. This allows multiple NSX managers to register the same vCenter server as a compute manager.

In this article, we will walk through the configuration of the multi-NSX feature and discuss the changes in the NSX extension keys registered in vCenter, NSX ownership of managed objects, considerations for placing edge nodes, and migrating hosts across clusters managed by different NSX managers.

Let’s get started.

Before Multi NSX feature

As discussed above, NSX versions prior to 3.2.2 don’t allow a vCenter server to be registered as a compute manager with more than one NSX-T instance. Let’s verify this and review the extension key that is registered.

We have a vCenter server “VxDC01-vCenter01” successfully registered as a compute manager in the NSX instance “VxDC01-NSXMGR01”.

Let’s connect to the vCenter server managed object browser (MOB) and check the NSX extension that is registered at https://vxdc01-vcenter01.vxplanet.int/mob/?moid=ExtensionManager

We see the old extension key “com.vmware.nsx.management.nsxt” with information about the compute manager ID and the NSX Manager URL to which the vCenter server is registered.
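The same check can be scripted instead of browsing the MOB. Below is a minimal pyVmomi sketch, assuming the lab hostname and placeholder credentials, that lists the NSX extensions registered in vCenter:

```python
# Minimal pyVmomi sketch to list NSX extensions registered in vCenter.
# Hostname and credentials are lab placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use proper certificates in production
si = SmartConnect(host="vxdc01-vcenter01.vxplanet.int",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ctx)

ext_mgr = si.content.extensionManager
for ext in ext_mgr.extensionList:
    if ext.key.startswith("com.vmware.nsx.management.nsxt"):
        # Prior to multi-NSX this prints the single shared key; with multi-NSX
        # enabled, there is one key per registered compute manager ID.
        print(ext.key, "-", ext.description.summary if ext.description else "")

Disconnect(si)
```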

Now, let’s try to add the same vCenter server “VxDC01-vCenter01” as a compute manager to another NSX instance “VxDC01-NSXMGR02”. The registration fails, as it is unable to register the extension with vCenter.

Configuring Multi NSX feature

Below is the homelab topology where we have a vCenter server “VxDC01-vCenter01” managing two compute clusters “VxDC01-C01-Prod” and “VxDC01-C02-NonProd”. Each compute cluster will be managed by separate NSX instances.

  • Cluster “VxDC01-C01-Prod” will be managed by Prod NSX “VxDC01-NSXMGR01”
  • Cluster “VxDC01-C02-NonProd” will be managed by Non-Prod NSX “VxDC01-NSXMGR02”

Each compute cluster will be on a separate vCenter VDS. This is a requirement for multi-NSX, as a VDS prepared by one NSX manager is owned (locked) by that specific NSX manager instance.

Let’s add “VxDC01-vCenter01” as a compute manager to the Prod NSX “VxDC01-NSXMGR01” with the Multi NSX toggle set to ON and wait for the registration to succeed.
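The same registration can also be driven through the NSX Manager API. The sketch below mirrors what the UI toggle does; the multi_nsx boolean is assumed from the ComputeManager schema in this release, and the credentials and thumbprint are placeholders:

```python
# Hedged sketch: register the vCenter compute manager with the multi-NSX flag
# via the NSX-T Manager API. Credentials and thumbprint are placeholders.
import requests

NSX = "https://vxdc01-nsxmgr01.vxplanet.int"
payload = {
    "display_name": "VxDC01-vCenter01",
    "server": "vxdc01-vcenter01.vxplanet.int",
    "origin_type": "vCenter",
    "multi_nsx": True,  # assumed field name for the Multi NSX toggle in 3.2.2+
    "credential": {
        "credential_type": "UsernamePasswordLoginCredential",
        "username": "administrator@vsphere.local",
        "password": "********",
        "thumbprint": "<vcenter-sha256-thumbprint>",
    },
}
resp = requests.post(f"{NSX}/api/v1/fabric/compute-managers",
                     json=payload, auth=("admin", "********"), verify=False)  # lab only
resp.raise_for_status()
cm = resp.json()
print("Compute manager ID:", cm["id"])

# The registration can then be polled at
# GET /api/v1/fabric/compute-managers/<id>/status until it reports as registered.
```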

Let’s connect to the vCenter server managed object browser (MOB) and check the NSX extension that is registered at https://vxdc01-vcenter01.vxplanet.int/mob/?moid=ExtensionManager

We see that the NSX extension is now registered with a new custom extension key “com.vmware.nsx.management.nsxt.<computemanager-id>”, where <computemanager-id> is the ID of the vCenter compute manager registered in the Prod NSX “VxDC01-NSXMGR01”.

Let’s add “VxDC01-vCenter01” as a compute manager to the Non-Prod NSX “VxDC01-NSXMGR02” with the Multi NSX toggle set to ON and wait for the registration to succeed.

We see that “VxDC01-NSXMGR02” also registered the NSX extension with the custom extension key “com.vmware.nsx.management.nsxt.<computemanager-id>”, where <computemanager-id> is the ID of the vCenter compute manager registered in the Non-Prod NSX “VxDC01-NSXMGR02”.

Note: In multi-NSX mode, all NSX Managers registered to the same vCenter server must have the Multi NSX flag enabled. It’s not supported to have Multi NSX enabled on the Prod NSX “VxDC01-NSXMGR01” and disabled on the Non-Prod NSX “VxDC01-NSXMGR02”, or vice versa.

Preparing Prod compute clusters with Prod NSX Manager

The Prod compute cluster “VxDC01-C01-Prod” will be managed by the Prod NSX “VxDC01-NSXMGR01”. Hence, this cluster needs to be prepared using a transport node profile from VxDC01-NSXMGR01.

For more details on using transport node profiles (TNPs) and sub-transport node profiles (sub-TNPs) to prepare clusters, please check out my previous article below:

The correct VDS needs to be specified in the TNP, as it will be locked by the NSX Manager against configuration changes from other NSX managers. For the Prod cluster, this is “VxDC01-C01-VDS01”.

The cluster preparation should succeed in a few minutes.
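Under the hood, preparing the cluster attaches the cluster (compute collection) to the TNP as a transport node collection. A minimal sketch of the equivalent NSX Manager API calls follows; the TNP UUID and admin credentials are placeholders:

```python
# Minimal sketch of what the UI does when the Prod cluster is prepared:
# attach the compute collection to a transport node profile.
import requests

NSX = "https://vxdc01-nsxmgr01.vxplanet.int"
AUTH = ("admin", "********")

# Find the compute collection (vSphere cluster) discovered from the compute manager.
cc = requests.get(f"{NSX}/api/v1/fabric/compute-collections",
                  auth=AUTH, verify=False).json()["results"]
prod = next(c for c in cc if c["display_name"] == "VxDC01-C01-Prod")

# Attach the transport node profile that references VDS "VxDC01-C01-VDS01".
body = {
    "resource_type": "TransportNodeCollection",
    "display_name": "Prod-TNC",
    "compute_collection_id": prod["external_id"],
    "transport_node_profile_id": "<prod-tnp-uuid>",  # placeholder
}
r = requests.post(f"{NSX}/api/v1/transport-node-collections",
                  json=body, auth=AUTH, verify=False)
r.raise_for_status()
print("Cluster preparation started:", r.json()["id"])
```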

Preparing Non-Prod compute clusters with non-Prod NSX Manager

The Non-Prod compute cluster “VxDC01-C02-NonProd” will be managed by the Non-Prod NSX “VxDC01-NSXMGR02”. As with the Prod cluster, we will use a transport node profile from VxDC01-NSXMGR02 to prepare the Non-Prod cluster.

We will select “VxDC01-C02-VDS01” as the VDS in the TNP.

The cluster preparation should succeed in a few minutes.

One thing we notice is that clusters prepared by one NSX manager appear as read-only from the other NSX manager’s console. This is because each NSX Manager owns its managed objects (clusters, hosts and VDS) and applies a configuration lock so that other NSX managers can’t modify the settings.

We will discuss more about this in the next section.

NSX Ownership of managed objects

Whenever a cluster is prepared by an NSX Manager, it adds a custom attribute to the vSphere cluster, its hosts and the VDS, indicating that these objects are managed by that specific NSX manager. The value of the custom attribute is the ID of the vCenter compute manager in NSX.

Let’s check this for the Prod NSX instance “VxDC01-NSXMGR01”.

Below is the compute manager ID of the vCenter instance registered in “VxDC01-NSXMGR01”.

We see that a custom attribute with the compute manager ID as its value is added to the vSphere cluster “VxDC01-C01-Prod” managed by “VxDC01-NSXMGR01”.

We see that the same custom attribute is added to hosts in the vSphere cluster “VxDC01-C01-Prod”.

Lastly, we see that the same custom attribute is added to the vSphere DVS of cluster “VxDC01-C01-Prod”.
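These ownership markers can also be cross-checked programmatically. The sketch below, assuming lab hostnames and placeholder credentials, pulls the compute manager ID from the Prod NSX API and compares it with the custom attribute values stamped on the Prod cluster; the same loop can be pointed at vim.HostSystem and vim.DistributedVirtualSwitch objects for the hosts and the VDS:

```python
# Sketch: compare the NSX compute manager ID with the custom attribute values
# on the Prod cluster. Hostnames and credentials are placeholders.
import ssl
import requests
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# 1. Compute manager ID as seen by the Prod NSX manager
cms = requests.get("https://vxdc01-nsxmgr01.vxplanet.int/api/v1/fabric/compute-managers",
                   auth=("admin", "********"), verify=False).json()["results"]
cm_id = next(c["id"] for c in cms if c["display_name"] == "VxDC01-vCenter01")

# 2. Custom attribute values on the Prod cluster in vCenter
ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vxdc01-vcenter01.vxplanet.int",
                  user="administrator@vsphere.local", pwd="********", sslContext=ctx)
defs = {f.key: f.name for f in si.content.customFieldsManager.field}

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "VxDC01-C01-Prod")

for val in cluster.customValue:
    match = "(matches the compute manager ID)" if val.value == cm_id else ""
    print(defs.get(val.key), "=", val.value, match)

Disconnect(si)
```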

The Non-Prod NSX “VxDC01-NSXMGR02” will see a configuration lock on the Prod vSphere cluster “VxDC01-C01-Prod” managed by “VxDC01-NSXMGR01”, and the respective objects are read-only from the view of “VxDC01-NSXMGR02”.

DFW considerations for placing Edge nodes

By default, edge nodes deployed by an NSX manager are added to its DFW exclusion list. However, edge nodes deployed by one NSX manager on a vSphere cluster prepared by another NSX manager are seen as regular VMs by that other NSX manager; they won’t be added to its DFW exclusion list and need to be added manually. As a best practice, it’s recommended to deploy edge nodes on a vSphere cluster managed by the same NSX manager and let the system manage the DFW exclusion list automatically.

We have four (4) edge nodes deployed by the Prod NSX manager “VxDC01-NSXMGR01” on the Prod vSphere cluster “VxDC01-C01-Prod”. The DFW exclusion list entries for these edge nodes are created automatically on the Prod NSX manager.

If instead these edge nodes were deployed by the Prod NSX “VxDC01-NSXMGR01” on the Non-Prod cluster “VxDC01-C02-NonProd” managed by the Non-Prod NSX “VxDC01-NSXMGR02”, the edge nodes won’t appear in the DFW exclusion list of “VxDC01-NSXMGR02”, and there is a possibility of traffic drops due to DFW policies being enforced by “VxDC01-NSXMGR02”.

Hence, these edge nodes need to be added manually to the DFW exclusion list on “VxDC01-NSXMGR02”.
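One hedged way to script this against the Non-Prod NSX Policy API is to create a group matching the edge VMs and append its path to the firewall exclude list. The group name and the VM name prefix below are lab assumptions, and it is assumed the exclude list accepts group paths via PATCH:

```python
# Hedged sketch: exclude the Prod edge VMs from DFW on the Non-Prod NSX manager
# by grouping them and adding the group path to the firewall exclude list.
import requests

NSX = "https://vxdc01-nsxmgr02.vxplanet.int"
AUTH = ("admin", "********")

# 1. Group matching the four Prod edge node VMs by name prefix (placeholder prefix)
group = {
    "display_name": "prod-edge-nodes",
    "expression": [{
        "resource_type": "Condition",
        "member_type": "VirtualMachine",
        "key": "Name",
        "operator": "STARTSWITH",
        "value": "vxdc01-edge",
    }],
}
requests.put(f"{NSX}/policy/api/v1/infra/domains/default/groups/prod-edge-nodes",
             json=group, auth=AUTH, verify=False).raise_for_status()

# 2. Append the group path to the DFW exclude list
url = f"{NSX}/policy/api/v1/infra/settings/firewall/security/exclude-list"
excl = requests.get(url, auth=AUTH, verify=False).json()
members = excl.get("members", [])
members.append("/infra/domains/default/groups/prod-edge-nodes")
requests.patch(url, json={"members": members}, auth=AUTH, verify=False).raise_for_status()
```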

Moving hosts across clusters managed by different NSX managers

Now let’s look at a scenario where we want to migrate a host from the non-Prod vSphere cluster “VxDC01-C02-NonProd” (managed by “VxDC01-NSXMGR02”) to the Prod vSphere cluster “VxDC01-C01-Prod” (managed by “VxDC01-NSXMGR01”).

We will migrate the host “vxdc01-c02-esx03.vxplanet.int” to Prod cluster “VxDC01-C01-Prod”.

This host is currently managed by the transport node profile (TNP) on the Non-Prod NSX “VxDC01-NSXMGR02”. Taking it out of the Non-Prod cluster will uninstall the NSX VIBs.

We also see that the custom attributes added by “VxDC01-NSXMGR02” are removed.

The target cluster “VxDC01-C01-Prod” is prepared with VDS “VxDC01-C01-VDS01”, hence the networking of this host needs to be migrated to the target cluster’s VDS.

Once the host is added to the target cluster “VxDC01-C01-Prod”, it is prepared by the transport node profile (TNP) on the Prod NSX “VxDC01-NSXMGR01”. The custom attributes are also added accordingly.
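For completeness, the vCenter side of the move can be scripted with pyVmomi. This is only a sketch of the host relocation step, assuming placeholder credentials; the NSX VIB removal and the VDS/vmkernel migration described above still need to happen around it:

```python
# Minimal pyVmomi sketch: place the host in maintenance mode and move it into
# the Prod cluster, after which the Prod TNP prepares it. Credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vxdc01-vcenter01.vxplanet.int",
                  user="administrator@vsphere.local", pwd="********", sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem, vim.ClusterComputeResource], True)
objs = {o.name: o for o in view.view}
host = objs["vxdc01-c02-esx03.vxplanet.int"]
target = objs["VxDC01-C01-Prod"]

WaitForTask(host.EnterMaintenanceMode_Task(timeout=0))   # evacuate the host first
WaitForTask(target.MoveHostInto_Task(host=host, resourcePool=None))
WaitForTask(host.ExitMaintenanceMode_Task(timeout=0))

Disconnect(si)
```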

References

Now let’s wrap up and meet on a different topic later. I hope this article was informative.

Thanks for reading.
