NSX-T Edges come in two form factors, VM and Baremetal, both leveraging DPDK libraries and each with its own use case requirements. In this article, we will walk through the deployment, configuration, and post-validation of an NSX-T 2.4 Baremetal Edge cluster.
Before we proceed, make sure that the hardware you are using meets the minimum requirements listed in the VMware documentation below. If it doesn't, the deployment will most likely fail, or you may see unexpected results.
Baremetal Edges don't support UEFI, so ensure that the BIOS mode is set to Legacy.
On the networking side, we need a minimum of 3 NICs, with 4 recommended. In this article I am using 4 NICs, assigned as follows:
- eth0 – Management (VLAN 10 Untagged)
- eth1 – Transport Network (TEP is configured on this interface) – VLAN 40
- eth2 – VLAN 1 Uplink – VLAN 60
- eth3 – VLAN 2 Uplink – VLAN 70
Why the requirement for 4 NICs? At a minimum, the Edges will be part of two transport zones (one Overlay and one VLAN) and would therefore need 3 NICs including management. In my case, there are 3 N-VDS configured on the Edge nodes: one for the Overlay transport zone and two for the VLAN uplinks, each in a separate transport zone. Hence the 4 NICs.
Each Baremetal Edge is a Dell EMC PowerEdge R630 server with some slight hardware changes made to meet the requirements. Each server is connected to a separate VLT ToR switch, as per the sketch above. The management interface is eth0 and is untagged; all the other interfaces are tagged with their respective VLANs.
Let's get started.
Deploying the first Baremetal Edge Node
Make sure to change the BIOS mode to Legacy. UEFI mode is not supported.
Download the NSX-T 2.4.1 Edge ISO and mount it to iDRAC Virtual Console as a Virtual CD.
Boot from the NSX-T Edge ISO
Select ‘Automated Install’. The installation starts.
Select the NIC to be used as the management interface. This should be the one connected to the untagged VLAN 10 switchport.
By default, the installer looks for a DHCP server to assign IP details to the management interface. If it doesn’t find any, it prompts for manual input.
Provide the IP, Default gateway and DNS settings. Make sure you have created a host record in DNS for the Edge node.
The disks are partitioned, and the packages installed.
Once the installation is over, the node reboots and presents the login screen.
Login with admin credentials (password: default). You will be prompted to reset the password, and then the services initialize. You can expect one more reboot here.
Once the node is back up, perform basic checks like interface status, name resolution, etc.
Enable the SSH service
Connect to the Edge node via SSH
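These post-install checks and the SSH service can be handled from the Edge node's NSX-T CLI. A rough sketch of the commands (the hostname `edge01.corp.local` is a placeholder for your own DNS record):

```
# Verify the management interface picked up the correct IP
get interface eth0

# Enable SSH and make it persistent across reboots
start service ssh
set service ssh start-on-boot

# From your workstation (placeholder hostname)
ssh admin@edge01.corp.local
```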
Joining the Baremetal Edge to the NSX-T Management Plane
Get the Certificate Thumbprint from one of the NSX-T manager nodes.
Join the Edge node to the management plane
Verify the status
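On the CLI, the join workflow looks roughly like the following; the manager IP and thumbprint shown are placeholders for the values from your environment:

```
# On one of the NSX Manager nodes: get the API certificate thumbprint
get certificate api thumbprint

# On the Edge node: join the management plane
# (10.0.10.11 is a placeholder manager IP; paste the thumbprint from above)
join management-plane 10.0.10.11 username admin thumbprint <thumbprint-from-manager>

# Verify that the Edge node is connected to the management plane
get managers
```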
Configuring the Edge nodes as Transport nodes
We need 3 Transport zones: one for the Overlay and two for the VLAN Uplinks. Each Transport zone will deploy its own N-VDS on the Edge nodes.
We have to create a custom Edge uplink profile. Set the Transport VLAN to 40; this is the VLAN of the TEP interface.
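The same uplink profile can also be created via the NSX-T REST API. A sketch of the request body, where the profile name, uplink name, and teaming policy are illustrative choices, not values mandated by the article:

```
POST /api/v1/host-switch-profiles
{
  "resource_type": "UplinkHostSwitchProfile",
  "display_name": "bm-edge-uplink-profile",
  "transport_vlan": 40,
  "teaming": {
    "policy": "FAILOVER_ORDER",
    "active_list": [
      { "uplink_name": "uplink-1", "uplink_type": "PNIC" }
    ]
  }
}
```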
Now, we will configure the Edges as Transport Nodes
Make sure to configure the 3 N-VDS as well.
Transport Network N-VDS
VLAN Uplink 1 N-VDS
VLAN Uplink 2 N-VDS
Creating the Baremetal Edge Cluster
Deploying the Second Baremetal Edge Node
The process is identical to the above, except that the management interface IP address is different. Once deployed, add the second Baremetal Edge to the Edge Cluster.
Creating the VLAN Logical segments for the Edge Uplinks
We will create two VLAN Logical segments for the Edge Uplink connectivity: one on VLAN 60 and the other on VLAN 70.
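For reference, a VLAN-backed segment can also be created through the REST API as a logical switch; a sketch for the VLAN 60 segment, where the display name is illustrative and the transport zone UUID is a placeholder for the VLAN Uplink 1 transport zone created earlier:

```
POST /api/v1/logical-switches
{
  "display_name": "edge-uplink-vlan60",
  "transport_zone_id": "<vlan-uplink-1-tz-uuid>",
  "admin_state": "UP",
  "vlan": 60
}
```

Repeat with VLAN 70 and the VLAN Uplink 2 transport zone for the second segment.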
Deploying a Tier 0 Logical Router and Post-validation
We will now deploy a Tier 0 Logical Router and attach the two VLAN logical segments to it as Uplinks over both the Baremetal Edges.
Wait a few seconds and confirm that the interfaces are initialized.
Navigate to the Advanced UI and confirm that the SR components are created on the Edges.
Now our T0 router has two uplinks (with the SR components deployed in HA) across the two Edges. At this point, it should be able to reach the external ToR default gateway.
Log in to one of the Edge nodes via SSH and identify the VRF of the SR component.
Ping the External Gateway.
Repeat the same from the other Edge node. Confirm that you can ping the external gateway over VLAN 70.
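On the Edge CLI, this validation looks roughly like the following; the VRF ID and gateway address below are placeholders, so use the values from your own output:

```
# List the logical router instances to find the SR component's VRF ID
get logical-routers

# Enter the SR's VRF context (VRF 2 is a placeholder; use the ID shown above)
vrf 2

# Ping the external ToR gateway on the uplink VLAN (placeholder address)
ping 10.60.0.1
```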
This confirms the uplink connectivity. You can now proceed with deploying the Tenant routers and setting up the BGP configuration.
Hope this article was informative. Thanks for reading.