NSX-T Single N-VDS Multi-TEP Baremetal Edges Deployment and Configuration

With the release of version 2.4, NSX-T now supports a Single N-VDS, Multi-TEP configuration for Baremetal Edges. With a Multi-TEP design, we achieve redundancy at the TEP level: if a TEP uplink fails, traffic moves to the surviving TEP instead of failing over to the standby Edge node. This is the recommended configuration when we have two or more DPDK interfaces on the Baremetal Edges. The upper limit for the number of DPDK interfaces is currently 8, which also means a higher number of ECMP paths when the Edge cluster is deployed in Active-Active mode.

I wrote a post earlier on the Multi-N-VDS, Single-TEP Baremetal Edge design; in case you are interested, it is available at the link below. That design still works, but the Multi-TEP design is the recommended approach from version 2.4 onwards.

https://vxplanet.com/2019/05/31/nsx-t-2-4-baremetal-edge-cluster-deployment-and-configuration/

There is a separate hardware compatibility matrix for Baremetal Edge hardware. Make sure that the hardware you are using is listed at the URLs below.

https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.4/installation/GUID-14C3F618-AB8D-427E-AC88-F05D1A04DE40.html

https://certification.ubuntu.com/server/models/?release=18.04%20LTS&category=Server

If the hardware or any specific component (NICs, for example) is not listed, the deployment will most likely fail or we will run into unexpected issues.

Baremetal Edges don't support booting in UEFI mode, so make sure that the BIOS mode is set to Legacy.

In this post, we will cover the architecture, deployment, configuration and validation of a Baremetal Edge cluster leveraging a Single N-VDS, Multi-TEP design.

Let’s get started.

Single N-VDS Multi-TEP Baremetal Edge Architecture

[Architecture diagram: BMEdgeMultiTEP]

The Baremetal Edges are the same Dell EMC PowerEdge R630 servers that I used in my earlier post, with slight hardware changes to meet the compatibility requirements.

 https://vxplanet.com/2019/05/31/nsx-t-2-4-baremetal-edge-cluster-deployment-and-configuration/

Each Edge has two DPDK (fastpath) interfaces for the N-VDS and a dedicated management NIC. Since release 2.4, management can also be configured on a fastpath interface, eliminating the need for a dedicated NIC, but I haven't seen a reference for that configuration so far. Let's keep that as a future task.

The Edges are configured on two Transport zones – one Overlay and one VLAN – both leveraging the same N-VDS. The VLAN tag for the overlay TEP encapsulation is applied by the Edge uplink profile, while the VLAN tags for the uplinks are applied by the VLAN logical segments. It is possible to have up to 8 Tier-0 uplinks (8 ECMP paths) per Baremetal Edge node.

Here is the IP and VLAN information used in this setup:

VLAN 10 – Management – 192.168.10.0/24

VLAN 40 – TEP – 192.168.40.0/24

VLAN 60 – Uplink – 192.168.60.0/24

VLAN 70 – Uplink – 192.168.70.0/24

Deploying the First Baremetal Edge Node

Set the BIOS mode to Legacy. UEFI mode is currently not supported.

Boot from the NSX-T Edge ISO image and select ‘Automated Install’.

The installer presents a quick overview of the automated deployment workflow.

Select the NIC to be used as the management interface. This should be the one connected to the untagged VLAN 10 switchport.

By default, the installer looks for a DHCP server to assign IP details to the management interface. If it doesn’t find any, it prompts for manual input.

Provide the IP address, default gateway and DNS settings. Optionally, it is good to have a host record pre-created in DNS for the Edge node.

The disks are partitioned, and the packages installed.

Once the installation is over, the node reboots and we are presented with the login screen.

Log in with the admin credentials (password: default). We will be prompted to reset the password, and then the services initialize. We can expect one more reboot here.

Once the node is back up, perform basic checks like interface status, connectivity, name resolution, etc.
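
For reference, these are the basic checks I usually run from the Edge console. The exact output varies by release, and the gateway address below is just an assumption from my lab (192.168.10.1 as the VLAN 10 gateway), so adjust it for your environment.

get interfaces
ping 192.168.10.1
get name-servers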

Enable ssh for the Edge node.
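
On the Edge CLI this is a quick two-step: start the service and make it persistent across reboots.

start service ssh
set service ssh start-on-boot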

Connect to the Edge node via ssh.

Change the hostname to bmedge01.
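
The hostname is set from the Edge CLI (bmedge01 is just the naming convention used in this lab):

set hostname bmedge01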

Joining the Baremetal Edge to the NSX-T Management Plane

Get the Certificate Thumbprint from one of the NSX-T manager nodes.
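
From an SSH (or console) session to one of the NSX-T Manager nodes, the API certificate thumbprint can be retrieved with the command below. Copy the returned SHA-256 string for the next step.

get certificate api thumbprint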

Join the Edge node to the management plane.
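
The join is done from the Edge CLI. The Manager IP, password and thumbprint below are placeholders, so substitute the values from your environment and the thumbprint retrieved in the previous step. Afterwards, 'get managers' should list the Manager as connected.

join management-plane <nsx-manager-ip> username admin password <admin-password> thumbprint <manager-thumbprint>
get managers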

The Edge node will now be visible under 'Edge Transport Nodes' in the NSX-T Manager UI.

Configuring the Baremetal Edge node as a Transport node

We will configure the Edges on two Transport zones – an Overlay TZ and a VLAN TZ – both leveraging the same N-VDS (called NVDSCompute01).

We will create a custom Multi-TEP Edge uplink profile with the teaming policy set to Load Balance Source. The TEP encapsulation transport VLAN is 40.
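
For those who prefer the API over the UI, the same uplink profile can also be created with a call to the NSX-T Manager. This is only an illustrative sketch: the profile name, uplink names, manager address and MTU value below are my own assumptions, while LOADBALANCE_SRCID is the API equivalent of the Load Balance Source teaming policy and transport_vlan carries the TEP VLAN 40.

# Illustrative only – adjust the names, credentials and MTU for your environment
curl -k -u admin -H "Content-Type: application/json" \
  -X POST https://<nsx-manager-ip>/api/v1/host-switch-profiles \
  -d '{
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": "BM-Edge-MultiTEP-Profile",
        "teaming": {
          "policy": "LOADBALANCE_SRCID",
          "active_list": [
            { "uplink_name": "uplink-1", "uplink_type": "PNIC" },
            { "uplink_name": "uplink-2", "uplink_type": "PNIC" }
          ]
        },
        "transport_vlan": 40,
        "mtu": 9000
      }'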

Now, we will configure the Edge as a Transport Node.

Here we associate the two N-VDS uplinks with the DPDK (fastpath) interfaces of the Edge node.

The configuration should now succeed.

Notice that the node status shows as 'Degraded'. There is nothing to worry about here: the server that I used has 5 DPDK interfaces and I used only 2 of them, so the unused fastpath interfaces are counted against the node status. We could utilize all of them (up to a maximum of 8), but let's deploy with 2 for this article.

Deploying the Second Baremetal Edge Node

The process is exactly the same as above, except that the management interface IP address is different (192.168.10.164/24).

Creating the Baremetal Edge Cluster

Create a new Edge cluster in the NSX-T Manager UI and add both Baremetal Edge nodes (bmedge01 and bmedge02) to it.

Verify the Geneve Tunnel Status between the Edge nodes in the cluster.
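
The same can also be checked from the Edge CLI. This is just a rough sketch and the output differs slightly between releases, but with Multi-TEP each Edge node should show two tunnel ports, and the BFD sessions towards the peer TEPs should be up.

get tunnel-ports
get bfd-sessions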

Creating the VLAN Logical segments for the Edge Uplinks

We will create two VLAN logical segments for the Edge uplink connectivity – one on VLAN 60 and the other on VLAN 70. Note that they should be created in the same VLAN Transport zone on which the Edges are configured.

Deploying a Tier 0 Gateway and Post-validation

We will now deploy a Tier-0 Gateway with 4 uplinks (two on each Baremetal Edge node) leveraging the VLAN logical segments that we created earlier.
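
Once the Tier-0 Gateway is created and associated with the Edge cluster, a Tier-0 service router (SR) should get instantiated on each Baremetal Edge node. A quick way to confirm this from the Edge CLI (output format varies by release):

get logical-routers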

Each Baremetal Edge will have two uplinks, one on each of the two VLANs – VLAN 60 and VLAN 70. We should be able to ping the Leaf switches from both Edge nodes.

Log in to one of the Edge nodes (bmedge01) via SSH and verify the interfaces and connectivity to the Leaf switches.
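
For reference, this is roughly the command flow I use for the check. The VRF ID comes from the 'get logical-routers' output (the SERVICE_ROUTER_TIER0 entry), and 192.168.60.1 / 192.168.70.1 are assumed to be the Leaf switch SVI addresses in my lab, so replace them with your own.

get logical-routers
vrf <tier0-sr-vrf-id>
get interfaces
ping 192.168.60.1
ping 192.168.70.1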

SUCCESS!!! Edge node 1 (bmedge01) can now successfully ping the Leaf switches. Repeat the same from the other Edge node.

SUCCESS!!! Edge node 2 (bmedge02) can also successfully ping the Leaf switches. This confirms the uplink connectivity. We can now proceed with the BGP configuration and the deployment of Tier-1 tenant routers. BGP and related configuration are already covered in my earlier articles; please visit my home page.

Hope this article was informative. 

Continue Reading? Here is the Multi N-VDS Single-TEP Baremetal Edges Deployment and Configuration:

https://vxplanet.com/2019/05/31/nsx-t-2-4-baremetal-edge-cluster-deployment-and-configuration/

2 thoughts on “NSX-T Single N-VDS Multi-TEP Baremetal Edges Deployment and Configuration”

  1. Hi Hari,
    This is a nice write-up, but I miss a little bit of the required accuracy in the details.
    With multi-TEP per single N-VDS we do not increase the number of ECMP paths. The number of ECMP paths in an NSX-T system depends only on the number of edge nodes. Independent of how many uplinks an individual edge node has, the edge node provides only a single next hop to an individual compute transport node's T0 DR. With multi-TEP you provide a higher level of resiliency, as your BM edge can still communicate over GENEVE when a ToR switch or a link goes down. For your reference, please spend a few minutes on this blog page: https://communities.vmware.com/people/oziltener/blog/2019/08/06/nsx-t-ecmp-edge-node-connectivity

    It is important that you clearly mention that only Bare Metal edge nodes support multi-TEP on a single N-VDS with NSX-T 2.4.0/1. VM-based edge nodes do not support it today.
    Multi-TEP is a recommended option for 2-pNIC hosts, but the multi-N-VDS design is still valid on hosts with more than 2 fastpath interfaces, because it makes traffic steering for the uplink VLANs very simple. On a 2-pNIC host we cannot easily control the uplink traffic, as this uplink traffic follows your default teaming policy, which is Load Balance Source by default. For such cases we could leverage “VLAN pinning/Named Teaming Policies” and enforce uplink traffic of VLAN 60 to the left ToR and VLAN 70 traffic to the right ToR with an active/standby teaming policy. With a 2-pNIC design, bandwidth management is difficult, as you carry GENEVE and uplink traffic on the same physical links.
    On hosts with 4 fastpath interfaces, as an example, I personally prefer to have multi-TEP on the overlay N-VDS only, but still deploy two additional VLAN-based N-VDS for the uplink VLANs. This is a perfect design when you would like to have the best control over link utilization, as uplink traffic and TEP traffic are on different physical links. Keep in mind that customers buy Bare Metal edge nodes mainly for throughput reasons (there are other good reasons…), so it is important to manage bandwidth utilization in day-2 operations.

    This sentence is really not good: “It is possible to have up to 8 Tier 0 Uplinks (8 ECMP paths) per Bare Metal Edge node”! What is the benefit of having 8 uplinks on a single edge node? This is not practical. In the field we typically see two physical ToR switches, each of them with a single uplink VLAN to carry the BGP traffic, so each edge node has only two uplinks along with the two BGP peerings. Please do not mix up the number of uplinks on an individual edge node with the ECMP capability of the T0 LR.

    What is important to mention for the readers is that you should have exactly the number of fastpath interfaces which you will effectively use. Otherwise, when you use only 2 of the 4 fastpath interfaces, the NSX-T manager will always show a “degraded” state, as you show in one of your diagrams. We will fix this minor issue in future NSX-T releases.
    Please do not use the term VTEP; the V stands for VXLAN. NSX-T GENEVE Tunnel Endpoints are called TEPs.
    Maybe it is worth writing a few words about MTU in the uplink profiles.
    Best Regards, Oliver
