NSX-T N-VDS Topologies and Migrating Host Networking between vSphere DVS & N-VDS


NSX-T deploys its own host networking switch, the N-VDS (Next Generation Distributed Switch), on the ESXi/KVM transport hosts. Unlike the vSphere DvSwitch, the N-VDS spans multiple hypervisor types (ESXi, KVM, etc.) that are part of the Transport Zone (and configured as Transport Nodes), and each transport node can have its own teaming policies. The N-VDS is the data plane component of the NSX-T architecture and is completely decoupled from vCenter Server: vCenter sees N-VDS networks only as opaque networks, meaning it can see them but cannot manage them. NSX-T overlay transport runs exclusively on the N-VDS, and it is up to us to decide how the overlay and underlay networking of the SDDC should be laid out. In this post we will look at two approaches (topologies) to host networking in an NSX-T SDDC, and at how to migrate host networking between the vSphere DvSwitch and the NSX-T N-VDS. Note that by migrating overlay and underlay networking to the N-VDS, you are in fact decoupling networking from vCenter Server; it will then be fully managed by NSX-T Manager.

How you design the host networking depends on what the NSX-T SDDC architecture looks like. Common designs include:

  • Dedicated Management, Compute and Edge ESXi clusters (Large scale)
  • Collapsed Management and Edge ESXi Clusters and dedicated Compute ESXi clusters (Medium scale)
  • Dedicated Management ESXi cluster and collapsed Compute and Edge ESXi clusters (Medium scale)
  • Completely collapsed Management, Compute and Edge ESXi clusters (Small scale)

In most cases the management workloads sit on VLAN segments, so it is unlikely that the management hosts need to be prepared for NSX-T. One exception is when you need an NSX load balancer in front of management components, for example the vRealize Suite. The overlay workloads (as-a-Service workloads) sit on the Compute ESXi cluster on NSX-T overlay networking. All East-West routing and firewalling for the overlay network happens on the Compute ESXi cluster, while North-South traffic for the overlay private cloud is routed through the Edge ESXi cluster (hosting the NSX-T Edges) to reach external networks. When you prepare the Compute cluster ESXi hosts for NSX-T, each host ends up with two vSwitches: the vSphere VSS/DVS and the NSX-T N-VDS. This gives us two approaches (topologies) to manage the host networking:

N-VDS Topologies

Disaggregated Topology

Here all the infrastructure VLANs (underlay) sit on the vSphere VSS/DVS, and the N-VDS hosts only the overlay networking, as shown below:

 

[Image: Disaggregated topology – underlay VLANs on the VSS/DVS, overlay on the N-VDS]

This approach works well if the ESXi hosts have at least 4 physical NICs: 2 for the vSphere VSS/DVS and 2 for the N-VDS (for resiliency). As a good practice, make sure the uplinks from each vSwitch go to different physical switches (primary and secondary in a VLT/vPC ToR pair) so that a ToR switch failure is handled as well.

Aggregated Topology

Here we migrate all the infrastructure VLANs (underlay) from the vSphere VSS/DVS to the N-VDS and use the N-VDS for both underlay and overlay networking, as shown below:

 

[Image: Aggregated topology – underlay and overlay both on the N-VDS]

This approach can be used when the ESXi host has only 2 physical NICs. As noted before, make sure the uplinks from the N-VDS go to different physical switches (primary and secondary in a VLT/vPC ToR pair) for redundancy.

Migrating vSphere host networking to N-VDS

Let's see how we can migrate the vSphere DVS host networking to the N-VDS. To illustrate this, I use a 4-node collapsed cluster where all the ESXi hosts are prepared as Transport Nodes. Each ESXi host has 2 x 25G NICs, attached as uplinks (vmnic0 and vmnic1) to the vSphere DvSwitch.

Creating the Transport Zones

I have created two transport zones – an Overlay Transport zone for the Overlay networking and a VLAN Transport zone for the underlay networking. Both the Transport zones use the same N-VDS named ‘NVDSCompute01’.
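If you prefer to create the transport zones via the REST API instead of the UI, the request bodies look roughly like the sketch below. Field names follow the NSX-T 2.x Manager API (`POST /api/v1/transport-zones`) and can vary between NSX-T versions, so verify against your release; the overlay zone's display name is a hypothetical example (only `TZ_VLAN_Infrastructure` and `NVDSCompute01` come from this lab).

```python
import json

HOST_SWITCH = "NVDSCompute01"  # both transport zones share the same N-VDS

def transport_zone_payload(name: str, transport_type: str) -> dict:
    """Build a request body for POST /api/v1/transport-zones (NSX-T 2.x style)."""
    assert transport_type in ("OVERLAY", "VLAN")
    return {
        "display_name": name,
        "host_switch_name": HOST_SWITCH,
        "transport_type": transport_type,
    }

overlay_tz = transport_zone_payload("TZ_Overlay", "OVERLAY")  # hypothetical name
vlan_tz = transport_zone_payload("TZ_VLAN_Infrastructure", "VLAN")

print(json.dumps([overlay_tz, vlan_tz], indent=2))
```

The key point is that both bodies reference the same `host_switch_name`, which is what makes the two zones land on a single N-VDS on each host.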


Disassociating a physical NIC from host DvSwitch

The N-VDS needs a physical uplink to reach external networks, so we first disassociate vmnic0 from the DvSwitch before configuring the ESXi hosts as NSX-T Transport Nodes.


Creating the Uplink Profile for Transport nodes

The Uplink Profile defines the NIC teaming policy and the Transport VLAN for the Transport Nodes. I used the Explicit Failover Order policy for this article. The uplink names under the Teaming section are just placeholders for the physical NICs; we map these placeholders to the actual pNICs when configuring the hosts as Transport Nodes.
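As a rough API-level sketch, an uplink profile body for this setup could look like the following (`POST /api/v1/host-switch-profiles` in the Manager API). The profile's display name is hypothetical; the teaming policy, the `uplink-1` placeholder, and transport VLAN 40 come from this lab, but check the exact field names against your NSX-T version.

```python
import json

# Illustrative uplink profile body; field names per the NSX-T Manager API.
uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "UP_Compute_Failover",   # hypothetical name
    "teaming": {
        "policy": "FAILOVER_ORDER",          # Explicit failover order, as used here
        "active_list": [
            # "uplink-1" is only a placeholder; it is mapped to a pNIC later
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
        ],
        "standby_list": [],
    },
    "transport_vlan": 40,  # the Geneve / TEP transport VLAN in this lab
}

print(json.dumps(uplink_profile, indent=2))
```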


Configuring the ESXi hosts as Transport Nodes

The ESXi hosts are now prepared as Transport Nodes for both of the Transport zones that we created earlier. We map the physical NIC vmnic0 to uplink-1 here.
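The same transport node configuration can be expressed as a Manager API body (`POST /api/v1/transport-nodes`), roughly as sketched below. The node name and all the `<...>` IDs are hypothetical placeholders for values from your environment, and the `StandardHostSwitchSpec` layout shown here matches NSX-T 2.4-era APIs; older and newer releases structure this object differently.

```python
import json

# Illustrative transport node body, mapping the freed pNIC vmnic0 to the
# "uplink-1" placeholder defined in the uplink profile. All IDs are
# placeholders; verify the spec layout for your NSX-T version.
transport_node = {
    "display_name": "esxi01-tn",                  # hypothetical name
    "host_switch_spec": {
        "resource_type": "StandardHostSwitchSpec",
        "host_switches": [{
            "host_switch_name": "NVDSCompute01",  # same N-VDS in both TZs
            "pnics": [
                {"device_name": "vmnic0", "uplink_name": "uplink-1"},
            ],
            "host_switch_profile_ids": [
                {"key": "UplinkHostSwitchProfile",
                 "value": "<uplink-profile-id>"},  # placeholder
            ],
            "ip_assignment_spec": {
                "resource_type": "StaticIpPoolSpec",
                "ip_pool_id": "<tep-ip-pool-id>",  # placeholder TEP pool
            },
        }],
    },
    "transport_zone_endpoints": [
        {"transport_zone_id": "<overlay-tz-id>"},  # placeholder
        {"transport_zone_id": "<vlan-tz-id>"},     # placeholder
    ],
}

print(json.dumps(transport_node, indent=2))
```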


NSX-T Manager configures the TEP (tunnel endpoint) interfaces on the hosts on Transport VLAN 40. Once the hosts are prepared, we see the N-VDS configured on all the ESXi hosts.


Creating the VLAN Logical Segments

We now create the infrastructure VLAN Logical Segments on N-VDS. This is similar to VLAN Port Groups on the vSphere DvSwitch.

We need four VLAN Logical Segments, as listed below, and they should be in the VLAN Transport zone that we created above (TZ_VLAN_Infrastructure):

  • VLAN 10 -> Management
  • VLAN 20 -> vSAN
  • VLAN 30 -> vMotion
  • VLAN 50 -> VMNetwork

Add any other underlay VLANs you have as well. Note that we do not need a segment for the Transport VLAN used for Geneve encapsulation; that VLAN is defined in the NSX-T Uplink Profile.
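The four segments above could be created in a loop via the Manager API (`POST /api/v1/logical-switches`; the Policy API uses segments under `/policy/api/v1/infra` instead). The `LS_` naming scheme and the transport zone ID are hypothetical placeholders; the VLAN-to-network mapping is the one from this lab.

```python
import json

# VLAN IDs from this lab's underlay networks.
segments = {"Management": 10, "vSAN": 20, "vMotion": 30, "VMNetwork": 50}

# Build one logical switch body per VLAN segment; field names follow the
# NSX-T Manager API and should be verified against your version.
payloads = [
    {
        "display_name": f"LS_{name}",          # hypothetical naming scheme
        "transport_zone_id": "<vlan-tz-id>",   # id of TZ_VLAN_Infrastructure
        "vlan": vlan_id,
        "admin_state": "UP",
    }
    for name, vlan_id in segments.items()
]

print(json.dumps(payloads, indent=2))
```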


Once the VLAN logical segments are created, you should now see them attached to the N-VDS from the vCenter Server view.


Migrating the vmkernel ports from DvSwitch PortGroup to N-VDS Logical Switch

NSX-T has a UI wizard to help migrate the vmkernel ports and physical adapters to the N-VDS.


Make sure that the mapping of vmkernel ports to logical switches is correct – otherwise the migration will cut off host networking.
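A simple pre-flight sanity check can catch the most common mistakes before you submit the wizard. The helper below is a hypothetical sketch (not an NSX-T API): it verifies that every vmkernel port has a destination segment and that the management port in particular is not left unmapped.

```python
# Hypothetical pre-flight check: every vmk needs a destination segment,
# and the management vmk (vmk0 here) must map to a non-empty segment name,
# since a wrong or missing mapping cuts the host off the network.
def validate_vmk_mapping(vmks, mapping, mgmt_vmk="vmk0"):
    """Return a list of problems; an empty list means the mapping looks safe."""
    problems = []
    for vmk in vmks:
        if vmk not in mapping:
            problems.append(f"{vmk} has no destination logical segment")
    if mgmt_vmk in mapping and not mapping[mgmt_vmk]:
        problems.append(f"{mgmt_vmk} maps to an empty segment name")
    return problems

host_vmks = ["vmk0", "vmk1", "vmk2"]                     # mgmt, vSAN, vMotion
mapping = {"vmk0": "LS_Management", "vmk1": "LS_vSAN"}   # vmk2 forgotten!
print(validate_vmk_mapping(host_vmks, mapping))
# → ['vmk2 has no destination logical segment']
```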

Now verify that the migration has succeeded from the vCenter Server view.


Migrating Virtual Machine networking from Port Groups to Logical Switches

This can be done under the ‘Edit Settings’ of Virtual Machine properties.


Now that the vSphere DvSwitch is free, you can disassociate the ESXi hosts from it, or leave it in place in case you want to migrate the networking back to the DvSwitch later.

Migrating Host networking back from N-VDS to vSphere DvSwitch

Disassociate one physical NIC (vmnic1) from the N-VDS uplinks.


Associate this freed NIC as an uplink to the vSphere DvSwitch.


Migrate the vmkernel ports from the N-VDS logical switch to the DvSwitch port groups. You can do this migration either from NSX-T Manager or from vCenter Server.

This is how it looks from NSX-T Manager.


For this post, I will do it from the "Manage Host Networking" option in vCenter Server.


Once done, verify the status from the Networking view:


I hope this post was informative. Thanks for reading.

