Preparing 2-pNIC Compute Clusters for NSX-T using Transport Node Profiles and Network Mappings


I recently got to work on a few NSX-T projects, including Enterprise PKS, where the management and compute clusters used 2-pNIC host networking. This was not based on VCF, so I needed to manually prepare all the 2-pNIC ESXi Compute Clusters for NSX-T in a consistent and controlled way. These were some of the requirements for the Compute Clusters:

  • All Compute Clusters should have consistent network labels (pNICs and vmkernel ports)
  • All Compute Clusters should have consistent configuration (TEP VLAN, MTU, etc.)
  • Scaling out a Compute Cluster should automatically prepare the new hosts for NSX-T
  • Host networking should migrate from the vSphere DVS to the NVDS with minimal effort and in a consistent way
  • Deterministic steering for the infrastructure VLAN traffic (vSAN, vMotion, etc.)
  • An option to revert to vSphere DVS networking with minimal effort

The first requirement was met using the vSAN Quick Start provisioning for the Compute Clusters. This created the DVS and the vmkernel ports for vSAN and vMotion traffic in a consistent way. The remaining requirements were achieved using NSX-T Transport Node Profiles, Network Mappings and Named Teaming Policies.

A challenge with Transport Node Profiles

A Transport Node Profile is applied to a cluster. A Transport Node Profile can have only one Uplink Profile, and an Uplink Profile defines the TEP VLAN. So in effect, a Transport Node Profile defines a single TEP VLAN. This is fine if all the hosts in the Compute Cluster have L2 adjacency across racks. But if there is no L2 stretch across racks and a Compute Cluster spans racks, I am unsure how this can be applied.

One solution would be to configure the hosts individually as Transport Nodes, which is not a recommended approach for clusters because:

    • It drifts from a consistent, Profile-driven configuration
    • The configuration is applied on a per-host basis, not to the cluster, so future addition of hosts would again require manual configuration
    • Cluster configuration consistency can't be maintained

A Workaround in Enterprise PKS

Each Compute Cluster is mapped as an Availability Zone in Enterprise PKS. Each Availability Zone is in fact mapped to a rack, which means each Compute Cluster sits in a dedicated rack. The PKS BOSH Director makes intelligent decisions to deploy VMs across the Availability Zones, so in this case each cluster has its own Transport Node Profile. There is still a challenge with the Transport Node Profile for the Management Cluster though, as it spans racks.

Back to Topic

Let’s get started. In this article, we will walk through the steps to prepare a 2-pNIC Compute Cluster for NSX-T and migrate the host networking (vmkernels and pNICs) to the NVDS using Transport Node Profiles and Network Mappings. It is assumed that the hosts in each Compute Cluster have L2 adjacency.

Current State

We have a shared Management & Edge Cluster and three Compute Clusters which are mapped as Compute Availability Zones in PKS.

  • The shared Management and Edge Cluster is not prepared for NSX-T. It uses the vSphere DVS for host networking. This cluster hosts the NSX-T Managers, Edges, vCenter, the Enterprise PKS Management Console and other management components.
  • Two of the Compute Clusters are already prepared for NSX-T. We will deal with the third Compute Cluster, named “Availability_Zone_03_GPU”.


  • All three Compute hosts in the cluster use the vSphere DVS “DSwitch_AZ03” with 2 pNICs – vmnic0 & vmnic1. The vmnic labels are consistent across all the hosts in this cluster. This is needed for the Transport Node Profile to work.


  • This is the vmkernel assignment. The vmk labels are also consistent across all the hosts in this cluster, which is needed for the Network Mappings to work:
    • vmk0 – Management
    • vmk1 – vMotion
    • vmk2 – vSAN

If there is an inconsistency in the vmnic or vmk labels across the hosts, the Transport Node configuration using the Transport Node Profile fails.
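
If you want to verify this consistency up front rather than discover it when the profile fails, a quick check with pyVmomi can help. This is a minimal sketch, not part of the original workflow; the vCenter address, credentials and cluster name are placeholders to adjust for your environment.

    # Minimal sketch: list pNIC and vmkernel labels per host in a cluster (pyVmomi).
    # Hostname, credentials and cluster name are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vc.lab.local", user="administrator@vsphere.local",
                      pwd="VMware1!", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "Availability_Zone_03_GPU")

    for host in cluster.host:
        pnics = sorted(p.device for p in host.config.network.pnic)
        vmks = sorted(v.device for v in host.config.network.vnic)
        print(host.name, "pNICs:", pnics, "vmkernels:", vmks)

    Disconnect(si)

If any host prints a different set of labels, fix that before applying the Transport Node Profile.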


  • vCenter is already added as a Compute Manager. The other two Compute Clusters are already configured for NSX-T.


Transport Zones

The Compute hosts are a part of two Transport Zones – Overlay and VLAN – both leveraging the same NVDS.
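
For reference, the equivalent can also be created through the NSX-T Manager API. The sketch below is an assumption of how that call might look; the Manager address, credentials and the NVDS name “NVDS-AZ03” are placeholders, not values taken from this environment.

    # Minimal sketch: create the Overlay and VLAN Transport Zones that share one NVDS.
    # Manager address, credentials and names are assumptions.
    import requests

    NSX = "https://nsxmgr.lab.local"
    AUTH = ("admin", "VMware1!VMware1!")

    for name, tz_type in [("TZ-Overlay-AZ03", "OVERLAY"), ("TZ-VLAN-AZ03", "VLAN")]:
        body = {
            "display_name": name,
            "host_switch_name": "NVDS-AZ03",   # same NVDS for both transport zones
            "transport_type": tz_type,
        }
        r = requests.post(f"{NSX}/api/v1/transport-zones", json=body,
                          auth=AUTH, verify=False)
        r.raise_for_status()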


Host Uplink Profiles

We define Named Teaming Policies to achieve deterministic traffic steering for the underlay infrastructure VLAN traffic:

  • Management – Failover method with uplink1 (Active) & uplink2 (Standby)
  • vSAN – Failover method with uplink2 (Active) & uplink1 (Standby)
  • vMotion – Failover method with uplink1 (Active) & uplink2 (Standby)

We also set the TEP VLAN and the MTU (9000) in the Uplink Profile.
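
As a rough idea of what this profile looks like behind the UI, here is a sketch of an Uplink Profile payload against the NSX-T Manager API. The profile name, TEP VLAN ID, credentials and the default (overlay) teaming shown here are assumptions; only the three Named Teaming Policies reflect the table above.

    # Minimal sketch: Uplink Profile with Named Teaming Policies, TEP VLAN and MTU 9000.
    # VLAN ID, names, credentials and the default teaming are assumptions.
    import requests

    NSX = "https://nsxmgr.lab.local"
    AUTH = ("admin", "VMware1!VMware1!")

    def uplink(name):
        return {"uplink_name": name, "uplink_type": "PNIC"}

    body = {
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": "host-uplink-profile-az03",
        "mtu": 9000,
        "transport_vlan": 150,                       # TEP VLAN (placeholder)
        "teaming": {                                 # default teaming for overlay/TEP
            "policy": "FAILOVER_ORDER",
            "active_list": [uplink("uplink1")],
            "standby_list": [uplink("uplink2")],
        },
        "named_teamings": [
            {"name": "Management", "policy": "FAILOVER_ORDER",
             "active_list": [uplink("uplink1")], "standby_list": [uplink("uplink2")]},
            {"name": "vSAN", "policy": "FAILOVER_ORDER",
             "active_list": [uplink("uplink2")], "standby_list": [uplink("uplink1")]},
            {"name": "vMotion", "policy": "FAILOVER_ORDER",
             "active_list": [uplink("uplink1")], "standby_list": [uplink("uplink2")]},
        ],
    }
    r = requests.post(f"{NSX}/api/v1/host-switch-profiles", json=body,
                      auth=AUTH, verify=False)
    r.raise_for_status()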


We will link the Named Teaming Policies to the Host VLAN Transport Zone so they become available to the hosts.
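
In API terms, this is the list of uplink teaming policy names on the VLAN Transport Zone. A sketch under the same assumptions as before, with the Transport Zone ID as a placeholder:

    # Minimal sketch: expose the Named Teaming Policies on the VLAN Transport Zone.
    # The Transport Zone ID is a placeholder (GET /api/v1/transport-zones to find it).
    import requests

    NSX = "https://nsxmgr.lab.local"
    AUTH = ("admin", "VMware1!VMware1!")
    TZ_ID = "<vlan-transport-zone-id>"

    tz = requests.get(f"{NSX}/api/v1/transport-zones/{TZ_ID}",
                      auth=AUTH, verify=False).json()
    tz["uplink_teaming_policy_names"] = ["Management", "vSAN", "vMotion"]
    requests.put(f"{NSX}/api/v1/transport-zones/{TZ_ID}", json=tz,
                 auth=AUTH, verify=False).raise_for_status()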


We will now update the VLAN Logical Segments to use the Named Teaming Policies.
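
Per segment, this maps to the uplink teaming policy name in the segment’s advanced configuration. A sketch, assuming hypothetical segment IDs for the Management, vSAN and vMotion segments:

    # Minimal sketch: point each VLAN Segment at its Named Teaming Policy (Policy API).
    # Segment IDs are placeholders - match them to your environment.
    import requests

    NSX = "https://nsxmgr.lab.local"
    AUTH = ("admin", "VMware1!VMware1!")

    segment_to_policy = {
        "seg-mgmt-az03": "Management",
        "seg-vsan-az03": "vSAN",
        "seg-vmotion-az03": "vMotion",
    }
    for seg_id, policy in segment_to_policy.items():
        body = {"advanced_config": {"uplink_teaming_policy_name": policy}}
        r = requests.patch(f"{NSX}/policy/api/v1/infra/segments/{seg_id}",
                           json=body, auth=AUTH, verify=False)
        r.raise_for_status()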


Transport Node Profile

In the Transport Node Profile, we select the two Transport Zones – one Overlay and one VLAN – both leveraging the same NVDS.


We will define the mappings for the pNICs here. Each pNIC maps to an uplink on the NVDS. Since the pNICs are still bound to the vmk interfaces on the vSphere DVS, make sure to leave “pNIC only migration” unchecked. This option is used only if there are no vmk interfaces associated with the pNICs being migrated to the NVDS.


In order to migrate the vmkernel ports, we have to define mappings under “Network Mappings for Install”. This is a 1:1 binding between a vmkernel adapter and an NVDS Logical Segment. As a note, confirm that each VLAN Logical Segment uses the same VLAN ID as the port group the vmkernel currently sits on.


If for some reason we need to unprepare the cluster from NSX-T and revert to vSphere DVS networking, we have to define the reverse mappings under “Network Mappings for Uninstall”. This is because the NSX-T Manager doesn’t maintain any network mapping tables. In our case we don’t need it, but this is how we would define it.
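
Pulling the pieces together, here is a sketch of what the Transport Node Profile could look like as an API payload: the pNIC-to-uplink mapping, the install mappings and the uninstall mappings all live in the same host switch spec. Every ID and name below is a placeholder you would look up first; this is an assumption of the shape of the call, not a dump from this environment.

    # Minimal sketch: Transport Node Profile with pNIC mappings, "Network Mappings for
    # Install" (vmk_install_migration) and "Network Mappings for Uninstall"
    # (vmk_uninstall_migration). Every ID and name is a placeholder.
    import requests

    NSX = "https://nsxmgr.lab.local"
    AUTH = ("admin", "VMware1!VMware1!")

    host_switch = {
        "host_switch_name": "NVDS-AZ03",
        "host_switch_profile_ids": [
            {"key": "UplinkHostSwitchProfile", "value": "<uplink-profile-id>"}],
        "pnics": [
            {"device_name": "vmnic0", "uplink_name": "uplink1"},
            {"device_name": "vmnic1", "uplink_name": "uplink2"}],
        "is_migrate_pnics": False,           # "pNIC only migration" stays unchecked
        "ip_assignment_spec": {
            "resource_type": "StaticIpPoolSpec", "ip_pool_id": "<tep-ip-pool-id>"},
        "vmk_install_migration": [           # vmkernel -> VLAN Segment (same VLAN ID)
            {"device_name": "vmk0", "destination_network": "<mgmt-segment-id>"},
            {"device_name": "vmk1", "destination_network": "<vmotion-segment-id>"},
            {"device_name": "vmk2", "destination_network": "<vsan-segment-id>"}],
        "vmk_uninstall_migration": [         # reverse mapping back to DVS port groups
            {"device_name": "vmk0", "destination_network": "<mgmt-dvportgroup-id>"},
            {"device_name": "vmk1", "destination_network": "<vmotion-dvportgroup-id>"},
            {"device_name": "vmk2", "destination_network": "<vsan-dvportgroup-id>"}],
        "transport_zone_endpoints": [
            {"transport_zone_id": "<overlay-tz-id>"},
            {"transport_zone_id": "<vlan-tz-id>"}],
    }
    body = {
        "resource_type": "TransportNodeProfile",
        "display_name": "tnp-az03",
        "host_switch_spec": {"resource_type": "StandardHostSwitchSpec",
                             "host_switches": [host_switch]},
    }
    r = requests.post(f"{NSX}/api/v1/transport-node-profiles", json=body,
                      auth=AUTH, verify=False)
    r.raise_for_status()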


For this revert migration to work, the DVS port groups have to be changed to ephemeral port binding.
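
If you prefer to script that change rather than flip each port group in the vSphere Client, a pyVmomi sketch along these lines should do it. The vCenter address, credentials and the port group name are placeholders.

    # Minimal sketch: switch a DVS port group to ephemeral port binding with pyVmomi,
    # so the "Network Mappings for Uninstall" migration can land the vmkernels back on it.
    # vCenter address, credentials and the port group name are placeholders.
    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vc.lab.local", user="administrator@vsphere.local",
                      pwd="VMware1!", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
    pg = next(p for p in view.view if p.name == "DPortGroup-Mgmt-AZ03")

    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    spec.configVersion = pg.config.configVersion   # required for reconfigure calls
    spec.type = "ephemeral"                        # was static ("earlyBinding") binding
    pg.ReconfigureDVPortgroup_Task(spec)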


Configuring the Compute hosts as NSX-T Transport Nodes

Now let’s configure the hosts in the Compute Cluster “Availability_Zone_03_GPU” as NSX-T Transport Nodes.


We select the Transport Node profile defined earlier.
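
Applying a Transport Node Profile to a cluster corresponds to creating a Transport Node Collection in the API. A sketch, with both IDs as placeholders to be looked up first:

    # Minimal sketch: attach the Transport Node Profile to the cluster by creating a
    # Transport Node Collection. Both IDs are placeholders
    # (GET /api/v1/fabric/compute-collections and /api/v1/transport-node-profiles).
    import requests

    NSX = "https://nsxmgr.lab.local"
    AUTH = ("admin", "VMware1!VMware1!")

    body = {
        "resource_type": "TransportNodeCollection",
        "display_name": "tnc-az03",
        "compute_collection_id": "<compute-collection-id-of-Availability_Zone_03_GPU>",
        "transport_node_profile_id": "<transport-node-profile-id>",
    }
    r = requests.post(f"{NSX}/api/v1/transport-node-collections", json=body,
                      auth=AUTH, verify=False)
    r.raise_for_status()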


The host preparation should now succeed.


We now have all three Compute Clusters prepared for NSX-T. Each cluster maps to a distinct Availability Zone in PKS.


Post-Migration checks

  • Confirm that the migration of pNICs and vmk ports succeeded and that the MTU is set correctly.


  • Verify that the Named Teaming Policies are applied correctly


  • Test a ping with a large MTU size


  • Verify the TEP interfaces and MTU (9000)


Cleanup Tasks (Optional)

This step is optional. If we plan to revert the networking to the vSphere DVS later using the Transport Node Profile, skip it.

  • Disassociate the vSphere DVS from the hosts


  • Delete the vSphere DVS


We are done now. I hope the post was informative.

Thanks for reading
