In Part 1 we discussed how to implement NSX-T L2 bridging between Overlay & VLAN segments using ESXi Bridge Clusters. If you didn’t get a chance to go through my previous article, you can find it here:
In this post we will discuss how to implement L2 bridging using NSX-T Edge Clusters. To reiterate, the benefits of using NSX-T Edge Clusters rather than ESXi Bridge Clusters are:
- Leverage DPDK acceleration for the bridged traffic
- Leverage the benefits of Bridge Firewall
- Avoid bottlenecks on the ESXi hosts with dedicated bridge instances
Note that L2 bridging with NSX-T Edge Transport nodes uses Bridge Profiles. Each Bridge Profile specifies which Edge Cluster to use and which Edge Transport node serves as the Active or Passive bridge for the instance. The same 1:1 relationship between an Overlay VNI and a VLAN ID applies here as well. You can create multiple Bridge Profiles with alternating Active and Passive Edge Transport nodes and attach them to different bridging segments, achieving a degree of load sharing between bridge instances that use the same Edge Cluster: the first bridge instance uses Edge node 1 as Active, the second uses Edge node 2 as Active, and so on.
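The alternating Active/Passive placement described above can be sketched as simple round-robin logic. This is purely illustrative; the edge node and segment names are the ones used later in this walkthrough:

```python
# Illustrative sketch: rotate the Active role across edge nodes,
# one Bridge Profile per bridged segment, so traffic is shared.
def plan_bridge_profiles(edge_nodes, segments):
    """Return one {segment, active, passive} plan per bridging segment."""
    profiles = []
    for i, segment in enumerate(segments):
        profiles.append({
            "segment": segment,
            "active": edge_nodes[i % len(edge_nodes)],
            "passive": edge_nodes[(i + 1) % len(edge_nodes)],
        })
    return profiles

plan = plan_bridge_profiles(["nsx-edge", "nsx-edge2"],
                            ["bridge-segment-1", "bridge-segment-2"])
```

With two edge nodes and two segments, each node is Active for exactly one bridge instance, which is the load-sharing pattern described above.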
Let's get started:
Overlay – VLAN L2 Bridging with Edge Clusters
Verifying the Edge Cluster Status
Log in to NSX-T Manager, navigate to System -> Fabric -> Nodes -> Edge Transport Nodes, and verify that the Edge nodes in the cluster are healthy and that the node status is Up.
Enabling Promiscuous mode for the vSphere DVS Port Group (Edge Uplinks)
Since we are using NSX-T Edges in VM form factor on ESXi hosts, we need to enable Promiscuous mode and ‘Forged Transmits’ on the VDS port groups that serve as the uplinks for the Edge VMs.
Creating the Edge Bridge Profile
Navigate to System -> Fabric -> Profiles -> Edge Bridge Profiles and Create a new Edge Bridge Profile.
Let's put the first Edge node, ‘nsx-edge’, as Active and the second node, ‘nsx-edge2’, as Passive for the L2 bridge instance. For subsequent Bridge Profiles we can alternate the Active and Passive nodes to achieve a degree of load sharing for the bridged traffic.
For the second bridging profile, we can use the second Edge node as Active like below:
Note that if you are using multiple Bridge Profiles, set ‘Preemption’ to ‘Enabled’ so that the Active-Passive placement is restored after an Edge node reboot. Otherwise the Active role stays with whichever Edge node boots up and comes online first.
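If you prefer automation over the UI, a Bridge Profile can also be created by PUT-ing a JSON body to the NSX-T Policy API. The sketch below only builds the body; the resource type, field names, and edge node paths reflect my reading of the Policy API and should be verified against the API guide for your NSX-T version:

```python
import json

# Sketch of an Edge Bridge Profile body for the NSX-T Policy API.
# Field names are assumptions -- verify against the API guide for
# your NSX-T version before using. The first entry in edge_paths is
# the Active node, the second the Passive (Backup) node.
def build_bridge_profile(active_edge_path, standby_edge_path, preemptive=True):
    return {
        "resource_type": "L2BridgeEndpointProfile",
        "edge_paths": [active_edge_path, standby_edge_path],
        "failover_mode": "PREEMPTIVE" if preemptive else "NON_PREEMPTIVE",
    }

# Hypothetical edge node paths for illustration only.
body = build_bridge_profile(
    "/infra/sites/default/enforcement-points/default/edge-clusters/EC/edge-nodes/0",
    "/infra/sites/default/enforcement-points/default/edge-clusters/EC/edge-nodes/1",
)
payload = json.dumps(body)
```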
Creating an Overlay Segment for Bridging
In the Simplified UI, Navigate to Networking -> Segments and create a new Overlay Segment.
Associating the Overlay Segment with the Edge Bridge Profile
Navigate to the Advanced UI -> Networking -> Switching -> Switches.
Specify the VNI to VLAN mapping
Let’s map the VNI for this logical segment to VLAN 60. The subnet used for VLAN 60 is 192.168.60.0/24, with the default gateway 192.168.60.100 pointing to the L3 ToR. As in Part 1, we have two approaches for choosing a default gateway (DG) for the machines on this bridged segment:
- Use the External Default gateway (192.168.60.100) for all the VMs on the Overlay and VLAN Segments.
- Attach the Overlay Segment to a T1/T0 NSX-T Logical Router and use it as the DG for the VMs on the Overlay as well as the VLAN segments.
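Whichever option you choose, the gateway must live inside the bridged subnet. A quick sanity check with Python's ipaddress module, using the addresses from Scenario 1:

```python
import ipaddress

# Sanity check: the chosen default gateway must belong to the
# bridged subnet, whichever of the two DG options you pick.
def gateway_in_subnet(subnet: str, gateway: str) -> bool:
    return ipaddress.ip_address(gateway) in ipaddress.ip_network(subnet)

# Scenario 1: external DG (L3 ToR) for the VLAN 60 subnet.
ok = gateway_in_subnet("192.168.60.0/24", "192.168.60.100")
```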
Like Part 1, let’s cover both scenarios here:
Verifying the Bridge Firewall
Let’s verify the Bridge Firewall to ensure that traffic is allowed between the Overlay and VLAN sides of the bridge.
The default ‘Allow any’ rule is good to go. If we need more granular restrictions on the L2 bridge, we can insert additional rules here, above the default rule.
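Conceptually, the firewall evaluates rules top-down with first match wins, falling through to the default ‘Allow any’ rule at the bottom. A minimal model of that evaluation order (rule fields are simplified for illustration, and the MAC address is made up):

```python
# Minimal model of top-down, first-match rule evaluation with the
# default 'Allow any' rule at the bottom (fields simplified).
DEFAULT_RULE = {"match": lambda pkt: True, "action": "ALLOW"}

def evaluate(rules, packet):
    for rule in rules + [DEFAULT_RULE]:
        if rule["match"](packet):
            return rule["action"]

# Example: block a single MAC address on the bridge, allow the rest.
rules = [{"match": lambda p: p["src_mac"] == "00:50:56:aa:bb:cc",
          "action": "DROP"}]
```

With an empty rule list, every frame falls through to the default Allow, which matches the out-of-the-box behaviour described above.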
Scenario 1 – Using External Default Gateway
Here we point all the VMs on the bridged overlay segment and on the VLAN segment to use the External Default Gateway (L3 ToR switches).
For a VM on the ESXi host, change the Network adapter to the bridged logical segment.
Perform a connectivity test to make sure we can reach the default gateway and the other machines on the L2 bridge.
Scenario 2 – Using T1 / T0 Logical Router as the Default Gateway
We will attach the bridged logical segment to a Tier-1 router, which in turn is connected to a Tier-0 router via its uplink.
We will choose a different subnet for this bridged network: 192.168.100.0/24, with the default gateway 192.168.100.1 (the VIF of the T1 router).
On a VM on the VLAN 60 segment, set the IP parameters and perform a ping test.
Now that the VMs on the VLAN segment can use the T1/T0 router as their default gateway, they can leverage NSX-T services (firewalling, NAT, etc.).
This concludes the two-part article. I hope the series was informative.
Thanks for reading.
Missed Part 1? Here it is: