With the release of version 2.4, NSX-T now supports a Single N-VDS and Multi-TEP Baremetal Edge configuration. With a Multi-TEP design, redundancy is handled at the uplink level: when a TEP uplink fails, traffic moves to the surviving TEP on the same Edge node instead of failing over to the standby Edge node. This is the recommended configuration when we have two or more DPDK interfaces on the Baremetal Edges. The upper limit is currently 8 DPDK interfaces, which also translates into a higher number of ECMP paths when the Edge cluster is deployed in Active-Active mode.
I have written a post earlier on the Multi-NVDS Single-TEP Baremetal Edge design; in case you are interested, it is available at the link below. Even though that design works, the Multi-TEP design is the recommended approach from version 2.4 onwards.
There is a separate Hardware Compatibility matrix for the Baremetal Edge hardware. Make sure that the hardware we are using is listed in the below URL.
If the hardware or any specific component (NICs for example) is not listed, it is most likely the deployment would fail or we would see unexpected issues.
Baremetal Edges don't support UEFI, so make sure that the BIOS mode is set to Legacy.
In this post, we will cover the architecture, deployment, configuration and validation of Baremetal Edge cluster leveraging a Single-NVDS and Multi-TEP design.
Let’s get started.
Single-NVDS Multi-TEP Baremetal Edge Architecture
The Baremetal Edges are the same Dell EMC PowerEdge R630 servers that I used in my earlier post, with slight hardware changes to meet compatibility.
Each server has 2 DPDK interfaces for the N-VDS and a dedicated management NIC. Since release 2.4, management can also be configured on a fastpath interface, eliminating the need for a dedicated NIC, but I haven't come across a reference for that configuration so far. Let's keep that as a future task.
The Edges are configured on two Transport zones – one Overlay and one VLAN – both leveraging the same N-VDS. The VLAN tag for the Overlay TEP encapsulation is applied by the Edge Uplink Profile, while the tags for the Uplinks are applied by the VLAN logical segments. It is possible to have up to 8 Tier 0 Uplinks (8 ECMP paths) per Baremetal Edge node.
Here is the IP and VLAN information which is used.
VLAN 10 – Management – 192.168.10.0/24
VLAN 40 – TEP – 192.168.40.0/24
VLAN 60 – Uplink – 192.168.60.0/24
VLAN 70 – Uplink – 192.168.70.0/24
Deploying the first Baremetal Edge Node
Set the BIOS mode to Legacy. UEFI mode is currently not supported.
Boot from the NSX-T Edge ISO image and select ‘Automated Install’.
This is a quick overview of the automated deployment workflow.
Select the NIC to be used as the management interface. This should be the one connected to the untagged VLAN 10 switchport.
By default, the installer looks for a DHCP server to assign IP details to the management interface. If it doesn’t find any, it prompts for manual input.
Provide the IP, Default gateway and DNS settings. Optionally it is good to have a host record pre-created in DNS for the Edge node.
The disks are partitioned, and the packages installed.
Once the installation is over, the node reboots and we are presented with the login screen.
Log in with admin credentials (password: default). We will be prompted to reset the password, after which the services initialize. We can expect one more reboot here.
Once the node is back up, perform basic checks like interface status, connectivity, name resolution, etc.
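For example, these checks can be run from the Edge console with the NSX CLI (the gateway IP and manager FQDN below are assumptions from my lab):

```
get interfaces                    # management interface state and IP
ping 192.168.10.1                 # default gateway reachability
ping nsxmanager.lab.local         # name resolution
```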
Enable ssh for the Edge node.
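Assuming the standard NSX CLI, SSH can be started (and set to persist across reboots) with:

```
start service ssh
set service ssh start-on-boot
```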
Connect to the Edge node via ssh.
Change the hostname to bmedge01
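From the NSX CLI, this is a single command:

```
set hostname bmedge01
```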
Joining the Baremetal Edge to the NSX-T Management Plane
Get the Certificate Thumbprint from one of the NSX-T manager nodes.
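The thumbprint can be retrieved from the NSX CLI of any manager node:

```
get certificate api thumbprint
```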
Join the Edge node to the management plane
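Assuming the standard NSX CLI join syntax, run the following on the Edge node (the manager IP below is a placeholder from my lab; substitute your own manager IP and the thumbprint retrieved earlier):

```
join management-plane 192.168.10.51 username admin thumbprint <manager-api-thumbprint>
```

The command prompts for the admin password. Once joined, `get managers` on the Edge should show the manager connection as Connected.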
The Edge node will now be visible under 'Edge Transport Nodes' in the NSX-T Manager UI.
Configuring the Baremetal Edge node as a Transport node
We will configure the Edges on two Transport zones – the Overlay TZ and the VLAN TZ – both leveraging the same N-VDS (called NVDSCompute01).
We will create a custom Multi-TEP Edge Uplink profile with the Teaming policy set to Load Balance Source. The TEP encapsulation Transport VLAN is 40.
Now, we will configure the Edge as a Transport Node
Here we associate the two N-VDS uplinks to DPDK interfaces of the Edge node.
The configuration should now succeed.
Notice that the node status shows as 'Degraded'. Nothing to worry about here – the server I used has 5 DPDK interfaces and I used only 2 of them. We could utilize all of them (up to a maximum of 8), but let's deploy with 2 for this article.
Deploying the Second Baremetal Edge Node
The process is exactly the same as above, except that the management interface IP address is different (192.168.10.164/24).
Creating the Baremetal Edge Cluster
Verify the Geneve Tunnel Status between the Edge nodes in the cluster.
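Besides the UI, tunnel state can be checked from the Edge CLI; this is a sketch assuming the standard commands:

```
get tunnel-ports          # lists the TEP tunnel ports
get bfd-sessions          # BFD state for the Geneve tunnels
```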
Creating the VLAN Logical segments for the Edge Uplinks
We will create two VLAN Logical segments for the Edge Uplink connectivity. One on VLAN 60 and the other on VLAN 70. Note that they should be created on the same VLAN Transport zone on which the Edges are configured.
Deploying a Tier 0 Gateway and Post-validation
We will now deploy a Tier 0 Gateway with 4 uplinks (over both of the baremetal edge nodes) leveraging the VLAN Logical segments that we created before.
Each Baremetal Edge will have two Uplinks, one over each of the two VLANs – VLAN 60 and VLAN 70. We should be able to ping the Leaf switches from both of the Edge nodes.
Log in to one of the Edge nodes (bmedge01) via SSH and verify the interfaces and connectivity to the Leaf switches.
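The Tier 0 uplinks live on the Service Router, which runs in its own VRF on the Edge. A typical check sequence looks like this (the VRF ID and the Leaf switch IPs are from my lab and will vary):

```
get logical-routers       # note the VRF ID of the Tier 0 SR
vrf 1                     # enter that VRF context
get interfaces            # verify the VLAN 60/70 uplink IPs
ping 192.168.60.1         # Leaf switch on VLAN 60
ping 192.168.70.1         # Leaf switch on VLAN 70
```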
SUCCESS!!! Edge node 1 (bmedge01) can now successfully ping the Leaf switches. Repeat the same from the other Edge node.
SUCCESS!!! Edge node 2 (bmedge02) can also successfully ping the Leaf switches. This confirms the Uplink connectivity. We can now proceed with BGP Configuration and deployment of T1 Tenant routers. BGP and related configuration is already covered in my earlier articles. Please visit my home page.
Hope this article was informative.
Continue Reading? Here is the Multi N-VDS Single-TEP Baremetal Edges Deployment and Configuration: