Deploying the NSX-T 2.4 Edge VM Cluster leveraging vSphere DVS PortGroups


NSX-T Edge Clusters are responsible for the North-South traffic flow in the NSX-T SDDC. They can be deployed in VM form factor as well as on bare metal, where they leverage the DPDK libraries for high network performance. Edge Clusters host the DR (Distributed Router) and SR (Service Router) components of the logical routers that you deploy, and an Edge Cluster is mandatory if you want to deploy a Tier0 router, or a Tier1 router with NAT and Edge Firewall capabilities. This article walks through the process of deploying an NSX-T Edge Cluster in VM form factor leveraging vSphere Distributed vSwitch PortGroups. Let’s get started.

I have a collapsed Management and Edge cluster (4 hosts) and a dedicated Compute cluster (3 hosts) for the overlay workloads. The NSX-T Edges are deployed on the Edge cluster (which is also the Management cluster). Only the ESXi hosts in the Compute cluster are prepared for NSX-T; the Management/Edge cluster ESXi hosts are unprepared and leverage vSphere networking.

This is what the Edge VM networking on the DVSwitch looks like:

 

[Diagram: Edge VM networking on the DVSwitch]

Each Edge VM is deployed with 4 vNICs:

  • Management (eth0)
  • Overlay transport. This vNIC is configured with the TEP IP so the Edge can participate as a transport node (fp-eth0)
  • Uplink 1 (fp-eth1)
  • Uplink 2 (fp-eth2)

fp stands for Fast-Path. On bare metal deployments, these NICs can leverage the DPDK libraries for higher network performance. This is useful when the overlay workloads demand higher network throughput, as well as when you do Overlay-VLAN bridging on the Edge nodes. To get the benefit of DPDK, you need to deploy the N-VDS in Enhanced Datapath mode.

Transport Zones

I have four Transport Zones created (a rough API sketch of creating them follows the list):

  • TZ-Overlay -> This is the Transport Zone for the overlay networking. All the hosts in the Compute cluster and the Edge nodes are part of this overlay Transport Zone.
  • TZ-VLAN-Infrastructure -> This is a VLAN Transport Zone intended for host networking. It is used to migrate the host networking (all vmkernel ports) sitting on the DvSwitch to the N-VDS deployed on the Compute ESXi hosts. For each of the DvPort Groups on the Compute hosts, a corresponding VLAN logical switch is created in this Transport Zone, and the host networking is migrated to the N-VDS using the vmkernel port migration wizard in NSX-T. In this way, the networking is completely decoupled from vCenter. The Edge nodes are not part of this Transport Zone.
  • TZ-Edge-Uplink1 -> This Transport Zone is only for the Edge VMs’ uplink connectivity. It connects to an uplink port group on VLAN 50.
  • TZ-Edge-Uplink2 -> This Transport Zone is only for the Edge VMs’ uplink connectivity. It connects to an uplink port group on VLAN 60.
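
For reference, the same Transport Zones can also be created through the NSX-T 2.4 Manager REST API instead of the UI. The snippet below is only a rough sketch: the manager FQDN and credentials are placeholders, and the payload fields (host_switch_name, transport_type) are taken from my reading of the 2.4 API reference, so verify them before use.

    import requests

    NSX = "https://nsxmgr.lab.local"    # hypothetical NSX-T Manager FQDN
    AUTH = ("admin", "CHANGE_ME")       # hypothetical credentials

    # Transport zone name -> (N-VDS name it is bound to, transport type)
    zones = {
        "TZ-Overlay":      ("N-VDSCompute01", "OVERLAY"),
        "TZ-Edge-Uplink1": ("N-VDSUplink1",   "VLAN"),
        "TZ-Edge-Uplink2": ("N-VDSUplink2",   "VLAN"),
        # TZ-VLAN-Infrastructure is created the same way for the compute hosts
    }

    for name, (nvds, tz_type) in zones.items():
        payload = {
            "display_name": name,
            "host_switch_name": nvds,   # the N-VDS name is mandatory on a TZ in 2.4
            "transport_type": tz_type,  # OVERLAY or VLAN
        }
        r = requests.post(f"{NSX}/api/v1/transport-zones",
                          auth=AUTH, json=payload, verify=False)
        r.raise_for_status()
        print(name, "->", r.json()["id"])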

[Screenshot 1]

Deploying the First Edge VM

It is easy to deploy Edge VMs from the NSX-T manager once you have added vCenter Server as a Compute Manager in the NSX-T Manager console.
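
If you prefer to script this step, registering the Compute Manager looks roughly like the call below. Treat it as a hedged sketch: the vCenter FQDN, credentials and thumbprint are placeholders rather than values from my lab, and the credential payload shape should be checked against the 2.4 API reference.

    import requests

    NSX = "https://nsxmgr.lab.local"    # hypothetical NSX-T Manager FQDN
    AUTH = ("admin", "CHANGE_ME")       # hypothetical credentials

    payload = {
        "display_name": "vcsa.lab.local",
        "server": "vcsa.lab.local",     # hypothetical vCenter FQDN
        "origin_type": "vCenter",
        "credential": {
            "credential_type": "UsernamePasswordLoginCredential",
            "username": "administrator@vsphere.local",
            "password": "CHANGE_ME",
            "thumbprint": "<vCenter SHA-256 thumbprint>",
        },
    }
    r = requests.post(f"{NSX}/api/v1/fabric/compute-managers",
                      auth=AUTH, json=payload, verify=False)
    r.raise_for_status()
    print("Compute Manager registered:", r.json()["id"])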

[Screenshot 2]

To deploy an Edge VM, navigate to System -> Fabric -> Nodes -> Edge Transport Nodes and click ‘Add Edge VM’.

[Screenshot 3]

Make sure that you have updated the DNS entries for the Edge VM FQDN; otherwise, registration with the NSX Manager will fail. I have selected the Medium form factor for this deployment.

[Screenshot 4]

Select the Cluster, Resource Pool or ESXi host where the Edge VM needs to be deployed.

[Screenshot 5]

Select the networking parameters and the port group for the management interface (eth0). Since the management VLAN is 10, the tagging is applied at the DvPort Group level.

[Screenshot 6]

The next page prompts for the transport parameters used to configure the Edge VM as a transport node. The way we do the configuration here is important: we add 3 N-VDS switches on the Edge node, and for each N-VDS we map the vNIC that connects to the corresponding DvPort Group (a rough payload sketch follows a little further down).

  • N-VDSCompute01 -> This belongs to the Overlay TZ and is used for the Overlay Transport. Edge VTEP is configured on this vNIC.
  • N-VDSUplink1 -> This belongs to the VLAN TZ. It is used for the uplink connectivity to DvPort Group on VLAN 50 
  • N-VDSUplink2 -> This belongs to the VLAN TZ. It is used for the uplink connectivity to DvPort Group on VLAN 60 

You also have to choose an Edge uplink profile, which specifies the teaming policy and the transport VLAN. We will come to that once this step is completed.
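
To make the N-VDS to vNIC mapping concrete, here is a rough sketch of the transport-node portion of the configuration as the Manager API would express it. The structure (StandardHostSwitchSpec, pnics, transport_zone_endpoints) is recalled from the 2.x API reference and all IDs are placeholders, so treat it purely as an illustration; the Edge VM deployment settings from the earlier wizard pages are omitted.

    import json

    # Each N-VDS maps one fast-path vNIC (fp-ethX) to the single uplink defined
    # in its uplink profile. All IDs below are placeholders.
    host_switches = [
        {   # Overlay transport - the Edge TEP is assigned from an IP pool here
            "host_switch_name": "N-VDSCompute01",
            "host_switch_profile_ids": [
                {"key": "UplinkHostSwitchProfile", "value": "<overlay-uplink-profile-id>"}],
            "ip_assignment_spec": {"resource_type": "StaticIpPoolSpec",
                                   "ip_pool_id": "<tep-ip-pool-id>"},
            "pnics": [{"device_name": "fp-eth0", "uplink_name": "uplink-1"}],
        },
        {   # Uplink 1 - vNIC on the VLAN 50 DvPort Group
            "host_switch_name": "N-VDSUplink1",
            "host_switch_profile_ids": [
                {"key": "UplinkHostSwitchProfile", "value": "<edge-uplink-profile-id>"}],
            "pnics": [{"device_name": "fp-eth1", "uplink_name": "uplink-1"}],
        },
        {   # Uplink 2 - vNIC on the VLAN 60 DvPort Group
            "host_switch_name": "N-VDSUplink2",
            "host_switch_profile_ids": [
                {"key": "UplinkHostSwitchProfile", "value": "<edge-uplink-profile-id>"}],
            "pnics": [{"device_name": "fp-eth2", "uplink_name": "uplink-1"}],
        },
    ]

    transport_node = {
        "resource_type": "TransportNode",
        "display_name": "edge-01",      # hypothetical Edge node name
        "host_switch_spec": {"resource_type": "StandardHostSwitchSpec",
                             "host_switches": host_switches},
        "transport_zone_endpoints": [
            {"transport_zone_id": "<TZ-Overlay-id>"},
            {"transport_zone_id": "<TZ-Edge-Uplink1-id>"},
            {"transport_zone_id": "<TZ-Edge-Uplink2-id>"},
        ],
    }
    # The wizard effectively builds a payload like this for /api/v1/transport-nodes.
    print(json.dumps(transport_node, indent=2))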

[Screenshot 8]

Click “Add N-VDS” to add the two uplink N-VDS switches.

[Screenshot 9]

[Screenshot 10]

Now that we have the parameters set for the overlay and uplink N-VDS switches, click Finish to start the Edge VM deployment and its configuration as a transport node.

[Screenshot 11]

[Screenshot 12]

Once the deployment and the configuration as a transport node are completed, ensure that the transport node’s connectivity to the NSX Manager and the Controllers is good.
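
You can check this in the UI, or poll the realized state over the API. A minimal sketch, assuming the placeholder manager details below and the /state endpoint as documented for 2.4:

    import requests

    NSX = "https://nsxmgr.lab.local"    # hypothetical NSX-T Manager FQDN
    AUTH = ("admin", "CHANGE_ME")       # hypothetical credentials
    node_id = "<edge-transport-node-uuid>"

    r = requests.get(f"{NSX}/api/v1/transport-nodes/{node_id}/state",
                     auth=AUTH, verify=False)
    r.raise_for_status()
    # Expect the realization state to report success once the Edge is ready
    print(r.json().get("state"))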

[Screenshot 13]

Deploying the Second Edge VM

The process is similar to above. Make sure that you deploy the second edge VM only after the successful completion of the first one.

Creating an Edge Cluster

Add both of the NSX-T Edges to an Edge Cluster. You can have up to 8 Edges in a cluster.
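
The equivalent call through the Manager API is sketched below; the cluster name and the member transport node UUIDs are placeholders.

    import requests

    NSX = "https://nsxmgr.lab.local"    # hypothetical NSX-T Manager FQDN
    AUTH = ("admin", "CHANGE_ME")       # hypothetical credentials

    payload = {
        "display_name": "Edge-Cluster-01",  # hypothetical cluster name
        "members": [
            {"transport_node_id": "<edge01-transport-node-uuid>"},
            {"transport_node_id": "<edge02-transport-node-uuid>"},
        ],
    }
    r = requests.post(f"{NSX}/api/v1/edge-clusters",
                      auth=AUTH, json=payload, verify=False)
    r.raise_for_status()
    print("Edge Cluster created:", r.json()["id"])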

[Screenshot 14]

Once the Edges are added to the Edge Cluster, both Edges establish a Geneve tunnel between them. You should see that the tunnel status is Up.

[Screenshot 15]

Edge Profiles

The Edge uplink profile needs to be created before deploying the Edge VMs. Things to note here (an API sketch of the profile follows this list):

  • The Teaming policy should not be “Active-Standby”. It is not supported on Edges.
  • The Overlay Transport VLAN is 40 in my case. For Edges deployed in VM form factor, the tagging is applied at the DvPort Group level, so DO NOT put a Transport VLAN in the Edge uplink profile.
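
Here is a rough API sketch of an uplink profile that reflects both notes: a single active uplink with no standby list, and no transport VLAN because the DvPort Group already tags the traffic. The profile name is mine and the field names are from my reading of the 2.4 API, so verify them.

    import requests

    NSX = "https://nsxmgr.lab.local"    # hypothetical NSX-T Manager FQDN
    AUTH = ("admin", "CHANGE_ME")       # hypothetical credentials

    payload = {
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": "edge-vm-uplink-profile",   # hypothetical profile name
        "teaming": {
            "policy": "FAILOVER_ORDER",
            # Single active uplink, no standby_list (no Active-Standby teaming)
            "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
        },
        "transport_vlan": 0,   # tagging is done on the DvPort Group, not here
    }
    r = requests.post(f"{NSX}/api/v1/host-switch-profiles",
                      auth=AUTH, json=payload, verify=False)
    r.raise_for_status()
    print("Uplink profile:", r.json()["id"])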

[Screenshot 16]

Creating a Tier0 Logical Router

We now create a Tier0 logical router and do some functional testing on the Edge nodes. Without logical routers deployed, the Edge nodes are empty containers. The SR components of the logical routers sit inside the Edge nodes. Navigate to “Advanced Networking & Security” -> Routers and click to create a Tier 0 logical router.
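
Creating the Tier0 router over the Manager API is roughly a single call; the router name, HA mode and edge cluster ID below are placeholders and assumptions, so adjust them to your design.

    import requests

    NSX = "https://nsxmgr.lab.local"    # hypothetical NSX-T Manager FQDN
    AUTH = ("admin", "CHANGE_ME")       # hypothetical credentials

    payload = {
        "resource_type": "LogicalRouter",
        "display_name": "T0-LR-01",     # hypothetical router name
        "router_type": "TIER0",
        # Pick the HA mode that matches your design (ACTIVE_ACTIVE or ACTIVE_STANDBY)
        "high_availability_mode": "ACTIVE_STANDBY",
        "edge_cluster_id": "<edge-cluster-uuid>",
    }
    r = requests.post(f"{NSX}/api/v1/logical-routers",
                      auth=AUTH, json=payload, verify=False)
    r.raise_for_status()
    print("Tier0 router:", r.json()["id"])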

[Screenshot 17]

[Screenshot 18]

Adding Uplink interfaces to the Tier 0 router

When you add uplink interfaces, this creates an SR component on the Edge nodes. We create two uplink interfaces on the Tier0 router – one with Edge01 as the transport node and the other with Edge02 as the transport node. These uplink interfaces attach to a VLAN logical switch in the uplink Transport Zone for external access. The uplink ports and the VLAN logical switch can be created in one go.

[Screenshot 19]

Since the VLAN tagging happens at the DvPort Group level, make sure that you DO NOT put a VLAN identifier in this logical switch.
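
If you script this, the uplink interface is a LogicalRouterUpLinkPort that references a logical port created beforehand on the uplink VLAN logical switch. The sketch below uses placeholder IDs and an assumed uplink IP, with field names recalled from the 2.4 Manager API:

    import requests

    NSX = "https://nsxmgr.lab.local"    # hypothetical NSX-T Manager FQDN
    AUTH = ("admin", "CHANGE_ME")       # hypothetical credentials

    payload = {
        "resource_type": "LogicalRouterUpLinkPort",
        "display_name": "T0-Uplink-Edge01",         # hypothetical port name
        "logical_router_id": "<tier0-router-uuid>",
        # A logical port created beforehand on the uplink VLAN logical switch
        "linked_logical_switch_port_id": {
            "target_type": "LogicalPort",
            "target_id": "<vlan-ls-port-uuid>",
        },
        # Pin this uplink to the first Edge in the cluster (member index 0)
        "edge_cluster_member_index": [0],
        "subnets": [{"ip_addresses": ["<uplink-ip-on-vlan-50>"],
                     "prefix_length": 24}],
    }
    r = requests.post(f"{NSX}/api/v1/logical-router-ports",
                      auth=AUTH, json=payload, verify=False)
    r.raise_for_status()
    print("Uplink port:", r.json()["id"])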

[Screenshot 20]

[Screenshot 21]

Similarly add a second uplink interface with Edge02 as the Transport node.

[Screenshot 22]

Our Tier0 router now has two uplinks on a single VLAN (50), but they connect via two different Edge nodes. Optionally, you can add another two uplinks to the Tier0 router for VLAN 60.

You can now see that the Tier0 router has its SR components instantiated on the Edge nodes.

[Screenshot 23]

Connecting to Edge nodes and verifying external connectivity

SSH to an Edge node and get the VRF ID of the Tier0 router’s SR component (the get logical-routers command lists the logical router instances and their VRF IDs).

[Screenshot 24]

It’s VRF 1, so I will connect to VRF 1 (vrf 1) and ping my default gateway on the ToR switch.

[Screenshot 25]

SUCCESS!!!

Let’s connect to the other Edge node and test the same.

[Screenshot 26]

SUCCESS!!!

More Testing – Adding an Overlay Logical Switch to the Tier 0 router

I created an overlay logical switch and attached two VMs to it – VM1 and VM2. The overlay network is 192.168.250.0/24.

[Screenshot 27]

Let’s add this Overlay logical switch as a Downlink to the Tier0 router.

[Screenshot 28]

This IP address on the Downlink port becomes the Default gateway of the VMs in the Overlay network.
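
Scripted, the downlink is a LogicalRouterDownLinkPort linked to a logical port on the overlay logical switch. The gateway address 192.168.250.1 is simply an assumed first-host address in the 192.168.250.0/24 overlay network, and the IDs are placeholders:

    import requests

    NSX = "https://nsxmgr.lab.local"    # hypothetical NSX-T Manager FQDN
    AUTH = ("admin", "CHANGE_ME")       # hypothetical credentials

    payload = {
        "resource_type": "LogicalRouterDownLinkPort",
        "display_name": "T0-Downlink-Overlay",      # hypothetical port name
        "logical_router_id": "<tier0-router-uuid>",
        # A logical port created beforehand on the overlay logical switch
        "linked_logical_switch_port_id": {
            "target_type": "LogicalPort",
            "target_id": "<overlay-ls-port-uuid>",
        },
        # Becomes the default gateway for VM1/VM2 on the overlay segment
        "subnets": [{"ip_addresses": ["192.168.250.1"], "prefix_length": 24}],
    }
    r = requests.post(f"{NSX}/api/v1/logical-router-ports",
                      auth=AUTH, json=payload, verify=False)
    r.raise_for_status()
    print("Downlink port:", r.json()["id"])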

Adding a Static Route on Tier0 router to reach External networks

For simplicity, let’s add a default route on the Tier0 router to reach external networks.

[Screenshot 29]

I added both uplinks as next hops with the same administrative distance, so outbound traffic will be load balanced across them (ECMP).
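
A sketch of the same default route via the API, with one next hop per uplink path at the same administrative distance (the next-hop addresses are placeholders):

    import requests

    NSX = "https://nsxmgr.lab.local"    # hypothetical NSX-T Manager FQDN
    AUTH = ("admin", "CHANGE_ME")       # hypothetical credentials
    t0_id = "<tier0-router-uuid>"

    payload = {
        "network": "0.0.0.0/0",
        "next_hops": [
            # Equal administrative distance on both next hops -> ECMP outbound
            {"ip_address": "<next-hop-via-uplink1>", "administrative_distance": 1},
            {"ip_address": "<next-hop-via-uplink2>", "administrative_distance": 1},
        ],
    }
    r = requests.post(f"{NSX}/api/v1/logical-routers/{t0_id}/routing/static-routes",
                      auth=AUTH, json=payload, verify=False)
    r.raise_for_status()
    print("Static route:", r.json()["id"])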

Adding SNAT rule for External access to Overlay Networks

Let’s add a SNAT rule, so that the overlay network is translated into a routable address at the Tier0 router.
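
The roughly equivalent API call is below; the translated address is a placeholder for a routable IP of your choice on the uplink side:

    import requests

    NSX = "https://nsxmgr.lab.local"    # hypothetical NSX-T Manager FQDN
    AUTH = ("admin", "CHANGE_ME")       # hypothetical credentials
    t0_id = "<tier0-router-uuid>"

    payload = {
        "action": "SNAT",
        "match_source_network": "192.168.250.0/24",   # the overlay segment
        "translated_network": "<routable-ip-on-uplink-vlan>",
        "enabled": True,
    }
    r = requests.post(f"{NSX}/api/v1/logical-routers/{t0_id}/nat/rules",
                      auth=AUTH, json=payload, verify=False)
    r.raise_for_status()
    print("SNAT rule:", r.json()["id"])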

[Screenshot 30]

Connectivity Test from the Overlay Network

Log in to VM1 or VM2 and ping the default gateway on the ToR switches.

[Screenshot 31]

That validates the Edge Cluster deployment. You can now do a cleanup, i.e. delete the NAT rule and the static route, and remove the overlay logical switch from the Tier0 router. You can then start building your logical topology – bring in Tier1 routers (tenants), enable BGP, route redistribution and so on.

I hope this post was useful. Thanks for reading.

Harikrishnan T


 

 

 
