NSX-T Single NVDS Multi-TEP Edge VM Deployment & Configuration on vSphere DVS

NSX-T introduced the Single NVDS Multi-TEP design for Edge nodes in version 2.4, but the recommendation was to adopt it from version 2.5 onwards. Now that version 2.5 went GA a few days ago, we will walk through the deployment and configuration of a Single NVDS Multi-TEP Edge VM cluster on a vSphere DVS.

The earlier Edge VM design (prior to version 2.5) was a multi-NVDS, Single-TEP architecture which used a recommended minimum of three NVDS switches (one for the Overlay Transport TEP and two for the northbound uplinks). The new architecture brings a lot of design and configuration simplicity. To see what the previous Edge VM design looked like, please visit my earlier posts below:

Edges on vSphere DVS -> https://vxplanet.com/2019/05/23/deploying-the-nsx-t-edge-vm-cluster-leveraging-vsphere-dvs-portgroups/

Edges on host NVDS Networking -> https://vxplanet.com/2019/07/08/deploying-and-configuring-nsx-t-edges-on-n-vds-networking/

Let’s get started:

Single NVDS Multi-TEP Edge VM Architecture

[Figure: Single NVDS Multi-TEP Edge VM architecture on vSphere DVS]

Environment Details

We will deploy the Edge VMs on a shared Management and Edge cluster with a single vSphere DVS spanning 4 ESXi hosts.

  • A shared Management and Edge cluster with 4 x Dell EMC PowerEdge R640 servers as ESXi hosts. These are not NSX-T transport nodes (they do not have the NSX-T VIBs installed, and hence no NVDS).
  • A dedicated Compute cluster with 3 x Dell EMC PowerEdge R640 servers as ESXi hosts. These are configured for NSX-T and have an NVDS deployed.
  • 2 x 10G host networking connected to Dell EMC Networking S4048-ON switches in VLT (no LACP)
  • Single vSphere DVS
  • NSX-T 2.5 with a 3-node management cluster

As a tip, before deploying NSX-T, make sure to check the VMware Interoperability Matrix to ensure that the version of vSphere ESXi is compatible. For NSX-T 2.5, check the link below:

https://www.vmware.com/resources/compatibility/sim/interop_matrix.php#interop&175=3394&1=3456,3221,2861


NSX-T Edge VMs can be deployed from the NSX-T Manager UI or via an OVA file. In this article, we will deploy the Edges from the NSX-T Manager UI. The Edge VMs are deployed with 4 vNICs. This is how the vNIC bindings are used in a Single NVDS Multi-TEP design:

  • Management (eth0) – Connected to the DVS Management PG on VLAN 10
  • Fast Path interfaces (fp-eth0 & fp-eth1) – These are the uplinks of the Edge NVDS and attach northbound to separate Trunk Port Groups on the vSphere DVS. The Trunk PGs are tagged for VLANs 40, 60 & 70.
  • The fourth vNIC (fp-eth2) is left disconnected in this case.

These are the VLAN details:

VLAN 10 – Management

VLAN 40 – Overlay Transport (TEP)

VLAN 60 – VLAN Uplink for the Edges (T0) to eBGP peer with the Leaf Switches

VLAN 70 – VLAN Uplink for the Edges (T0) to eBGP peer with the Leaf Switches

Configuring the vSphere DVS Port Groups

We need to create two Trunk Port Groups on the vSphere DVS to allow multiple VLAN tags from the Edge NVDS to pass through. As mentioned in the VLAN details above, these Trunk PGs should allow tags 40, 60 & 70. The Trunk Port Groups will have the below teaming policy set on the ESXi hosts:

TrunkPG1 – Active Uplink: vmnic0 and Standby Uplink: vmnic1

TrunkPG2 – Active Uplink: vmnic1 and Standby Uplink: vmnic0

[Screenshot: First Trunk Port Group]

[Screenshot: Second Trunk Port Group]

Transport Zones

We will create two Transport Zones – one Overlay and one VLAN, both leveraging the same NVDS (named NVDSStaging01). Depending on the design, the host transport nodes can be part of the Overlay Transport Zone only, or of both.
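If you prefer automating this step, the same Transport Zones can also be created through the NSX-T Manager API. Below is a minimal sketch using curl; the Manager FQDN, credentials and Transport Zone names are placeholders for this environment:

```
# Overlay Transport Zone backed by the NVDS "NVDSStaging01"
curl -k -u admin -X POST https://nsx-manager.lab.local/api/v1/transport-zones \
  -H 'Content-Type: application/json' \
  -d '{"display_name":"TZ-Overlay","host_switch_name":"NVDSStaging01","transport_type":"OVERLAY"}'

# VLAN Transport Zone backed by the same NVDS
curl -k -u admin -X POST https://nsx-manager.lab.local/api/v1/transport-zones \
  -H 'Content-Type: application/json' \
  -d '{"display_name":"TZ-VLAN","host_switch_name":"NVDSStaging01","transport_type":"VLAN"}'
```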


Edge Uplink Profile

In the earlier multi-NVDS designs, the TEP VLAN tagging was applied by the DVS Port Group. Here, since we attach the Edge VM Fast Path interfaces to Trunk Port Groups, we need to apply the tagging at the Edge NVDS level, i.e. as the transport VLAN in the uplink profile. Also, we need to set the teaming policy to Load Balance Source with two active uplinks.
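For reference, this is roughly what the Multi-TEP uplink profile looks like when created via the Manager API. The profile and uplink names are placeholders; the key settings are transport VLAN 40 and the Load Balance Source teaming with two active uplinks:

```
curl -k -u admin -X POST https://nsx-manager.lab.local/api/v1/host-switch-profiles \
  -H 'Content-Type: application/json' \
  -d '{
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": "Edge-MultiTEP-UplinkProfile",
        "transport_vlan": 40,
        "teaming": {
          "policy": "LOADBALANCE_SRCID",
          "active_list": [
            { "uplink_name": "uplink-1", "uplink_type": "PNIC" },
            { "uplink_name": "uplink-2", "uplink_type": "PNIC" }
          ]
        }
      }'
```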


Deploying and Configuring the first Edge VM

Make sure that we have added vCenter Server as a Compute manager.
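A quick way to confirm the registration is to query the fabric API (a sketch; the Manager FQDN and credentials are placeholders):

```
# Lists the registered vCenter Server(s) along with their registration and connection status
curl -k -u admin https://nsx-manager.lab.local/api/v1/fabric/compute-managers
```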


Let’s deploy the first Edge VM from the NSX-T Manager UI.


We will deploy a Medium sized node.


We will deploy on the TC_Staging Cluster and on the vSAN Datastore.


We will use a static Management network.


Now let’s provide the details to configure the Edge VM as a Transport node.

The Edge VM should be a part of both the Overlay and the VLAN Transport Zones that we created earlier. Note that the Edge VM will have only a single NVDS, as both Transport Zones leverage the same NVDS.

Select the Multi-TEP uplink profile that we created previously.


If we don’t have a TEP IP Pool, then create one from this UI.
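The pool can also be created via the Manager API. Here is a sketch with placeholder addressing on the VLAN 40 TEP subnet; adjust the range, CIDR and gateway to your environment:

```
curl -k -u admin -X POST https://nsx-manager.lab.local/api/v1/pools/ip-pools \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "EdgeTEP-Pool",
        "subnets": [
          {
            "allocation_ranges": [ { "start": "192.168.40.11", "end": "192.168.40.20" } ],
            "cidr": "192.168.40.0/24",
            "gateway_ip": "192.168.40.1"
          }
        ]
      }'
```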


Make the FastPath interface bindings. The fp interfaces attach to the vSphere Trunk Port Groups that we created previously.
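In API terms, these bindings sit under the Edge transport node's host_switch_spec. The fragment below shows only the pnics mapping, assuming the uplink names match the uplink profile sketched earlier:

```
"pnics": [
  { "device_name": "fp-eth0", "uplink_name": "uplink-1" },
  { "device_name": "fp-eth1", "uplink_name": "uplink-2" }
]
```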


Clicking Finish will deploy the Edge VM on the vSphere cluster and configure it as a Transport node.


Once deployed, verify that the Manager/Controller connectivity and the NIC status are up.


Check the Edge VM settings in vCenter and confirm that the 4th vNIC is disconnected.


Deploying and Configuring the second Edge VM

The process is similar to the above, except that the management IP is different. Make sure to deploy the second Edge VM only after the first one has completed successfully.

Verifying the Edge TEP

Let's see whether the Edge VMs have two TEP IPs assigned and whether they are reachable.
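A sketch of the checks, assuming the compute host TEP vmkernel is vmk10 and a 1600-byte MTU on the TEP VLAN (the IP address below is a placeholder):

```
# On each Edge node (NSX-T Edge CLI):
get tunnel-ports      # two TEP tunnel ports are expected, one per fp-eth uplink
get interfaces        # the two TEP IPs allocated from the VLAN 40 pool show up here

# From a compute host transport node (ESXi shell), ping an Edge TEP over the overlay netstack:
vmkping ++netstack=vxlan -I vmk10 -s 1572 -d 192.168.40.11
```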


SUCCESS!!!

Creating the Edge Cluster

Now we will add both Edge VMs to the Edge Cluster.
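The equivalent API call is a POST to /api/v1/edge-clusters with both Edge transport nodes as members. A sketch; the node UUIDs are placeholders:

```
curl -k -u admin -X POST https://nsx-manager.lab.local/api/v1/edge-clusters \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "EdgeCluster01",
        "members": [
          { "transport_node_id": "<edge-node-1-uuid>" },
          { "transport_node_id": "<edge-node-2-uuid>" }
        ]
      }'
```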


Verify that the BFD tunnels between the Edge nodes are up.
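This can also be checked directly from the Edge CLI (a sketch; run on either Edge node):

```
get bfd-sessions      # the sessions towards the peer Edge TEP addresses should report state "up"
```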


Creating the Tier0 Uplink VLAN Logical Segments

We need to create two Uplink VLAN Logical Segments for the Tier 0 Gateway uplinks. Note that these should be created in the same VLAN Transport Zone that the Edge VMs are configured with.

Each VLAN Logical Segment should carry the respective VLAN identifier (60 or 70), as the tagging is applied by the Logical Segment.
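Via the Policy API, each uplink segment is simply a VLAN-backed segment in the VLAN Transport Zone. A sketch for VLAN 60 follows; the segment name and the Transport Zone ID in the path are placeholders, and VLAN 70 follows the same pattern:

```
curl -k -u admin -X PATCH https://nsx-manager.lab.local/policy/api/v1/infra/segments/Uplink-V60 \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "Uplink-V60",
        "vlan_ids": [ "60" ],
        "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<vlan-tz-id>"
      }'
```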


Deploying a Tier0 Gateway and adding Uplinks

We will deploy a Tier0 Gateway in Active-Active mode. There are two SR constructs with two uplinks each, so a total of four uplinks for the T0 Gateway.


Here are the T0 Uplink details:

  • Uplink1_V60 – on VLAN 60 via Edge node 1
  • Uplink2_V70 – on VLAN 70 via Edge node 1
  • Uplink3_V60 – on VLAN 60 via Edge node 2
  • Uplink4_V70 – on VLAN 70 via Edge node 2
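For reference, here is a Policy API sketch of the gateway and one of the four uplink interfaces. The gateway and interface names match the list above, but the locale-services ID, edge paths and IP addressing are placeholders that will differ per environment; the remaining three uplinks follow the same pattern:

```
# Tier-0 Gateway in Active-Active mode
curl -k -u admin -X PATCH https://nsx-manager.lab.local/policy/api/v1/infra/tier-0s/T0-Gateway \
  -H 'Content-Type: application/json' \
  -d '{ "display_name": "T0-Gateway", "ha_mode": "ACTIVE_ACTIVE" }'

# Uplink1_V60: external interface on the VLAN 60 segment via Edge node 1
curl -k -u admin -X PATCH \
  https://nsx-manager.lab.local/policy/api/v1/infra/tier-0s/T0-Gateway/locale-services/default/interfaces/Uplink1_V60 \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "Uplink1_V60",
        "type": "EXTERNAL",
        "segment_path": "/infra/segments/Uplink-V60",
        "subnets": [ { "ip_addresses": [ "10.60.0.2" ], "prefix_len": 24 } ],
        "edge_path": "/infra/sites/default/enforcement-points/default/edge-clusters/<edge-cluster-id>/edge-nodes/<edge-node-1-id>"
      }'
```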


Let’s connect to the SR Constructs on each Edge node and verify northbound reachability.
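A sketch of the Edge CLI session; the VRF ID and the gateway IPs are placeholders for this environment:

```
get logical-routers        # note the VRF ID of the Tier0 SERVICE_ROUTER
vrf 1                      # enter the T0 SR context
get interfaces             # confirm the uplink IPs on VLANs 60 and 70
ping 10.60.0.1             # leaf switch gateway on VLAN 60
ping 10.70.0.1             # leaf switch gateway on VLAN 70
```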


Success!!! Edge node 1 can reach the Default Gateway via both VLANs 60 & 70.


Success!!! Edge node 2 can reach the Default Gateway via both VLANs 60 & 70.

I've now deployed a Tier1 Gateway and attached a VM on ESXi host 1 to an Overlay LS on the Tier1 Gateway. The T0 DR construct should now be available on all the transport nodes, and we should see a Geneve tunnel established from this transport host to both Edges for this VNI.


I have also completed the BGP configuration on the T0 Gateway and advertised the necessary networks. Let’s do a connectivity test from the VM to the Default Gateway on the external Leaf switches.
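The eBGP sessions can be verified from the T0 SR context on each Edge node before (or alongside) the end-to-end ping test (a sketch):

```
# Inside the T0 SR VRF context (see the previous CLI sketch):
get bgp neighbor summary   # both leaf switch peers should be in Established state
get route                  # the learned and advertised prefixes appear in the SR routing table
```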


Success!!! This validates the deployment. I hope this article was informative.

Thanks for reading.

 


 

 

3 thoughts on "NSX-T Single NVDS Multi-TEP Edge VM Deployment & Configuration on vSphere DVS"

  1. Hi, can or should the VTEP VLAN 40 you are using be the same VTEP VLAN for the ESXi host transport nodes as well? In other words, can I use one VTEP pool for both the ESXi nodes and the Edge nodes? Thanks.

    1. Hi Kent, if the Edges are attached to a vSphere DVS (as in a shared Management and Edge cluster or a dedicated Edge cluster), it is possible to have a single TEP VLAN for both the compute hosts and the Edges.
      But if the Edges are attached to the host NVDS (as in a 2-pNIC shared Compute and Edge cluster), we need separate TEP VLANs for the Edges and the hosts. This is because TEP traffic should always reach a host through its uplinks for decapsulation. If the Edges are on the host NVDS and on the same VLAN, it is possible that TEP traffic enters the host NVDS over the internal port group, which is treated as malicious traffic and dropped by the host NVDS. Having separate VLANs always forces the TEP traffic to get routed by the ToR and enter the host NVDS via its uplinks.

      Thanks
