NSX-T Single N-VDS Multi-TEP Baremetal Edges Deployment and Configuration

With the release of version 2.4, NSX-T now supports a Single N-VDS Multi-TEP configuration for Baremetal Edges. With a Multi-TEP design, we achieve redundancy when a TEP uplink fails instead of failing over to the Standby Edge node. This is the recommended configuration when we have two or more DPDK interfaces on the Baremetal Edges. The upper limit for the number of DPDK interfaces is currently 8. Note that additional TEP uplinks improve resiliency rather than adding ECMP paths: the number of ECMP paths depends on the number of Edge nodes in the cluster, since each Edge node presents a single next hop to the Tier 0 DR regardless of its uplink count.

I have written a post earlier on the Multi-NVDS Single-TEP Baremetal Edge design; in case you are interested, it is available at the link below. Even though that design works, the Multi-TEP design is the recommended approach from version 2.4 onwards.


There is a separate Hardware Compatibility matrix for Baremetal Edge hardware. Make sure that the hardware you are using is listed at the URL below.



If the hardware or any specific component (NICs, for example) is not listed, the deployment will most likely fail or we will see unexpected issues.

Baremetal Edges don’t support UEFI BIOS, so make sure that the BIOS mode is set to Legacy.

In this post, we will cover the architecture, deployment, configuration and validation of Baremetal Edge cluster leveraging a Single-NVDS and Multi-TEP design.

Let’s get started.

Single-NVDS Multi-TEP Baremetal Edge Architecture 



The Baremetal Edges are the same DellEMC PowerEdge R630 servers that I used in my earlier post, with slight hardware changes to meet compatibility.


Each has 2 DPDK interfaces for the N-VDS and a dedicated management NIC. Since release 2.4, management can also be configured on fastpath interfaces, eliminating the need for a dedicated NIC, but I haven’t seen a reference for that configuration so far. Let’s leave that as a future task.

Edges are configured on two Transport zones – one Overlay and one VLAN – both leveraging the same N-VDS. The VLAN tag for the Overlay TEP encapsulation is applied by the Edge Uplink Profile, while the tags for the Uplinks are applied by the VLAN logical segments. It is possible to have up to 8 Tier 0 Uplinks per Baremetal Edge node, though a typical design uses two (one per ToR switch).

Here is the IP and VLAN information which is used.

VLAN 10 – Management

VLAN 40 – TEP

VLAN 60 – Uplink

VLAN 70 – Uplink

Deploying the first Baremetal Edge Node

Set the BIOS mode to Legacy. UEFI mode is currently not supported.


Boot from the NSX-T Edge ISO image and select ‘Automated Install’.


This is a quick overview of the automated deployment workflow. 


Select the NIC to be used as the management interface. This should be the one connected to the untagged VLAN 10 switchport.


By default, the installer looks for a DHCP server to assign IP details to the management interface. If it doesn’t find any, it prompts for manual input.


Provide the IP, default gateway and DNS settings. Optionally, it is good to have a host record pre-created in DNS for the Edge node.

The disks are partitioned, and the packages installed.


Once the installation is over, the node reboots and we are presented with the login screen.


Login with admin credentials (password: default). We will be prompted to reset the password, and then the services initialize. We can expect one more reboot here.


Once the node is back up, perform basic checks like interface status, connectivity, name resolution, etc.
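From the Edge node console, these checks can be performed with the NSX-T CLI. The gateway IP and manager FQDN below are placeholders for illustration; substitute the values from your environment.

```
get interfaces            # verify the management interface state and IP
ping 192.168.10.1         # placeholder: management gateway on VLAN 10
ping nsxmgr01.lab.local   # placeholder manager FQDN; verifies name resolution
```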



Enable ssh for the Edge node.
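SSH can be enabled and persisted across reboots with the following NSX-T CLI commands on the Edge node:

```
start service ssh
set service ssh start-on-boot   # start SSH automatically after reboots
get service ssh                 # confirm the service is running
```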


Connect to the Edge node via ssh.


Change the hostname to bmedge01.
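The hostname is changed from the NSX-T CLI:

```
set hostname bmedge01
get hostname    # verify the change
```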


Joining the Baremetal Edge to the NSX-T Management Plane

Get the Certificate Thumbprint from one of the NSX-T manager nodes.
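On any NSX-T Manager node, the API certificate thumbprint can be retrieved from the CLI:

```
get certificate api thumbprint
```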


Join the Edge node to the management plane
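The join is performed from the Edge node CLI. The manager IP, password and thumbprint below are placeholders for your environment:

```
join management-plane 192.168.10.21 username admin password <admin-password> thumbprint <manager-thumbprint>
get managers    # should now list the manager node(s)
```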



The Edge node will be now visible under ‘Edge Transport Nodes’ in NSX-T manager UI.



Configuring the Baremetal Edge node as a Transport node

We will configure the Edges on two Transport zones – an Overlay TZ and a VLAN TZ – both leveraging the same N-VDS (called NVDSCompute01).



We will create a custom Multi-TEP Edge Uplink profile with the Teaming policy set to Load Balance Source (required for Multi-TEP). The TEP encapsulation Transport VLAN is 40.



Now, we will configure the Edge as a Transport Node


Here we associate the two N-VDS uplinks to DPDK interfaces of the Edge node.

 The configuration should now succeed.



Notice that the node status shows as ‘Degraded’. Nothing to worry about here: the server that I used had 5 DPDK interfaces and I used only 2 of them. We could utilize all of them (up to a maximum of 8), but let’s deploy with 2 for this article.




Deploying the Second Baremetal Edge Node

The process is exactly the same as above, except that the Management Interface IP address is different.



Creating the Baremetal Edge Cluster




Verify the Geneve Tunnel Status between the Edge nodes in the cluster.
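Tunnel state can also be checked from the Edge node CLI; on a Multi-TEP node, two tunnel ports should be listed:

```
get tunnel-ports    # one tunnel port per TEP; expect two with Multi-TEP
```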




Creating the VLAN Logical segments for the Edge Uplinks

We will create two VLAN Logical segments for the Edge Uplink connectivity – one on VLAN 60 and the other on VLAN 70. Note that they should be created in the same VLAN Transport zone in which the Edges are configured.



Deploying a Tier 0 Gateway and Post-validation

We will now deploy a Tier 0 Gateway with 4 uplinks (over both of the baremetal edge nodes) leveraging the VLAN Logical segments that we created before. 



Each Baremetal Edge will have two Uplinks, one on each of the two VLANs – VLAN 60 and VLAN 70. We should be able to ping the Leaf switches from both Edge nodes.

Login to one of the Edge nodes (bmedge01) via ssh and verify the interfaces and connectivity to the Leaf switches.
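Uplink verification happens in the VRF context of the Tier 0 service router. The VRF ID and leaf switch IPs below are placeholders; use the values from your own deployment:

```
get logical-routers    # note the VRF ID of the SERVICE_ROUTER_TIER0 entry
vrf 1                  # placeholder VRF ID; enters the Tier 0 SR context
get interfaces         # the two uplink interfaces on VLAN 60 and VLAN 70
ping 192.168.60.1      # placeholder leaf switch IP on VLAN 60
ping 192.168.70.1      # placeholder leaf switch IP on VLAN 70
```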





SUCCESS!!! Edge node 1 (bmedge01) can now successfully ping the Leaf switches. Repeat the same from the other Edge node.


SUCCESS!!! Edge node 2 (bmedge02) can also successfully ping the Leaf switches. This confirms the Uplink connectivity. We can now proceed with BGP configuration and deployment of Tier 1 tenant routers. BGP and related configuration is already covered in my earlier articles; please visit my home page.

Hope this article was informative. 

Continue Reading? Here is the Multi N-VDS Single-TEP Baremetal Edges Deployment and Configuration:



2 thoughts on “NSX-T Single N-VDS Multi-TEP Baremetal Edges Deployment and Configuration”

  1. Hi Hari
    this is a nice write-up, but I feel it lacks a little of the required accuracy in the details.
    With multi-TEP per single N-VDS we do not increase the number of ECMP paths. The number of ECMP paths in an NSX-T system depends only on the number of edge nodes. Independent of how many uplinks an individual edge node has, the edge node provides only a single next hop to an individual compute transport node’s T0 DR. With Multi-TEP you provide a higher level of resiliency, as your BM edge can still communicate over GENEVE when a ToR switch or a link goes down. For your reference, please spend a few minutes on this blog page: https://communities.vmware.com/people/oziltener/blog/2019/08/06/nsx-t-ecmp-edge-node-connectivity

    It is important that you clearly mention that only Bare Metal edge nodes support multi-TEP on a single N-VDS with NSX-T 2.4.0/1. VM-based edge nodes do not support it today.
    Multi-TEP is a recommended option for 2-pNIC hosts, but the multi N-VDS design is still valid on hosts with more than 2 fastpath interfaces, because it makes traffic steering for the uplink VLANs very simple. On a 2-pNIC host we cannot easily control the uplink traffic, as this uplink traffic follows your default teaming policy, which is Load Balance Source by default. For such cases we could leverage “VLAN pinning/Named Teaming Policies” and enforce uplink traffic of VLAN 60 to the left ToR and VLAN 70 traffic to the right ToR with an active/standby teaming policy. With a 2-pNIC design bandwidth management is difficult, as you carry GENEVE and uplink traffic on the same physical link.
    On hosts with 4 fastpath interfaces, for example, I personally prefer to have multi-TEP on the overlay N-VDS only, but still deploy two additional VLAN-based N-VDSes for the uplink VLANs. This is a perfect design when you would like the best control over link utilization, as uplink traffic and TEP traffic are on different physical links. Keep in mind, customers buy Bare Metal edge nodes mainly for throughput reasons (there are other good reasons…), so it is important to manage bandwidth utilization in day-2 operations.

    This sentence is really not good: It is possible to have up to 8 Tier 0 Uplinks (8 ECMP paths) per Bare Metal Edge node! What is the benefit of having 8 uplinks on a single edge node? This is not practical. In the field we typically see two physical ToR switches, each of them with a single uplink VLAN to carry the BGP traffic, so each edge node has only two uplinks along with two BGP peerings. Please do not mix up the number of uplinks on an individual edge node with the ECMP capability of the T0 LR.

    What is important to mention for the readers is that you should have only exactly the number of fastpath interfaces that you will effectively use. Otherwise, when you use only 2 of the 4 fastpath interfaces, the NSX-T manager will always show a “degraded” state, as you show in one of your diagrams. We will fix this minor issue in future NSX-T releases.
    Please do not use the term VTEP; the V is for VXLAN. NSX-T GENEVE Tunnel Endpoints are called TEPs.
    Maybe it is worth writing a few words about MTU in the uplink profiles.
    Best Regards Oliver

