NSX-T supports L2 bridging between overlay logical segments and physical VLAN segments. Unlike NSX-V, NSX-T does not support hardware VTEPs, because the ToR switches do not yet support Geneve encapsulation (they are still on VXLAN). With NSX-T, the option for these requirements is L2 bridging on the Edges, leveraging DPDK acceleration for the bridged traffic. There are some use cases where we need L2 stretch-ability to the VLAN segments, such as:
- Migration of physical workloads to virtual where changing the IP settings is not practical because of the numerous places that would need to be updated: hardcoded scripts, firewall rules, web publishing rules, NAT rules, etc.
- Support for some mixed-topology scenarios. For example, an external load balancer with overlay VMs as the real servers, and with transparency enabled on the virtual service, requires the real servers (on the overlay network) to be on the same L2 segment as the LB backend interface. This is because the return traffic has to flow through the load balancer, so the default gateway for the real servers must point to the LB backend interface.
- Physical workloads on VLAN segments can leverage the NSX security services by having their traffic routed over a Tier-1 or Tier-0 Gateway.
In most cases, the requirement for Overlay-VLAN bridging won't arise that often. If at all possible, route the traffic and use bridging only when it is necessary.
NSX-T offers two options for L2 bridging:
- Using ESXi Bridge Clusters
- Using L2 bridging on the NSX-T Edge Cluster
It is recommended to use Edge bridging (preferably on bare-metal Edges) for the following reasons:
- Leverage DPDK acceleration for the bridged traffic
- Leverage the benefits of Bridge Firewall
- Avoid bottlenecks on the ESXi hosts with dedicated bridge instances
In this article, we will walk through the first option, i.e., using ESXi Bridge Clusters. I will cover the second option in my next article. Some things to note before we get started:
- ESXi host transport nodes use Bridge Clusters
- NSX-T Edge transport nodes use Bridge Profiles
- Bridging is a 1:1 relationship between an overlay VNI and a VLAN segment
- An ESXi host transport node cannot be a part of two ESXi Bridge Clusters
- ESXi host transport nodes that are part of a Bridge Cluster should have only one N-VDS
Overlay – VLAN L2 Bridging with ESXi Bridge Clusters
Creating a Bridge Cluster
Log in to NSX-T Manager and navigate to System -> Fabric -> Nodes -> ESXi Bridge Clusters.
The ESXi transport nodes work in Active-Passive mode for the bridge instances. It is preferable not to place any workload VMs on the hosts hosting the bridge instance, though this is not a requirement. The cluster creation can also be scripted, as in the sketch below.
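For those who prefer automation, the same Bridge Cluster can be created against the NSX-T Manager API. Here is a minimal sketch using Python and requests; the /api/v1/bridge-clusters endpoint is from the NSX-T 2.x Manager API, but the payload field names, the Manager FQDN, credentials and the transport node UUIDs below are placeholders of mine, so verify them against the API guide for your version.

```python
import requests

NSX_MGR = "https://nsxmgr.lab.local"   # hypothetical NSX Manager FQDN
AUTH = ("admin", "VMware1!VMware1!")   # use real credentials or cert auth in practice

# Assumed payload shape: 'transport_nodes' lists the two ESXi host
# transport nodes that will host the active/standby bridge instances.
payload = {
    "display_name": "ESXi-Bridge-Cluster-01",
    "transport_nodes": [
        {"transport_node_id": "<esxi-tn-uuid-1>"},   # placeholder UUIDs
        {"transport_node_id": "<esxi-tn-uuid-2>"},
    ],
}

resp = requests.post(
    f"{NSX_MGR}/api/v1/bridge-clusters",
    json=payload,
    auth=AUTH,
    verify=False,   # lab only: self-signed Manager certificate
)
resp.raise_for_status()
print("Bridge cluster id:", resp.json()["id"])
```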
Creating an Overlay Segment for Bridging
In the Simplified UI, navigate to Networking -> Segments and create a new overlay segment. The equivalent Policy API call is sketched below.
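Since the Simplified UI is backed by the Policy API, the segment can also be created with a PATCH against /policy/api/v1/infra/segments. A minimal sketch, assuming a segment id of "bridged-ls" and an overlay transport zone path of my choosing:

```python
import requests

NSX_MGR = "https://nsxmgr.lab.local"   # hypothetical, as before
AUTH = ("admin", "VMware1!VMware1!")

# Assumed IDs: 'bridged-ls' as the segment id and an overlay TZ UUID;
# adjust both to match your environment.
segment = {
    "display_name": "bridged-ls",
    "transport_zone_path": (
        "/infra/sites/default/enforcement-points/default/"
        "transport-zones/<overlay-tz-uuid>"
    ),
}

resp = requests.patch(
    f"{NSX_MGR}/policy/api/v1/infra/segments/bridged-ls",
    json=segment,
    auth=AUTH,
    verify=False,   # lab only
)
resp.raise_for_status()
```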
Associating the Overlay Segment with the Bridge Cluster
We need to do this from the Advanced UI.
Specify the VNI to VLAN mapping
In this case, we map this overlay segment to VLAN 60 (an API sketch of this mapping follows the list below). The IP subnet used for VLAN 60 is 192.168.60.0/24, with 192.168.60.100 as the default gateway (a physical L3 ToR). Being an L2 bridge, all the VMs on the overlay segment and the VLAN segment should use the same IP subnet. For the default gateway, we have two approaches:
- Use the external default gateway (192.168.60.100) for all the VMs on the overlay and VLAN segments.
- Attach the overlay segment to a Tier-1/Tier-0 NSX-T logical router and use that as the default gateway for the VMs on the overlay as well as the VLAN segments.
Which approach to use depends on our requirements. I will cover both scenarios here.
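For reference, here is what the VNI-to-VLAN mapping above looks like against the Manager (Advanced) API: create a bridge endpoint on the Bridge Cluster for VLAN 60, then attach the overlay logical switch to it through a logical port. The endpoints are from the NSX-T 2.x Manager API and the UUIDs are placeholders, so treat this as a sketch rather than copy-paste configuration.

```python
import requests

NSX_MGR = "https://nsxmgr.lab.local"   # hypothetical, as before
AUTH = ("admin", "VMware1!VMware1!")

# 1) Bridge endpoint: ties the Bridge Cluster to VLAN 60
bep = requests.post(
    f"{NSX_MGR}/api/v1/bridge-endpoints",
    json={
        "bridge_cluster_id": "<bridge-cluster-uuid>",  # placeholder
        "vlan": 60,
    },
    auth=AUTH,
    verify=False,   # lab only
)
bep.raise_for_status()

# 2) Logical port on the overlay logical switch, attached to the
#    bridge endpoint -- this realizes the 1:1 VNI <-> VLAN mapping
lp = requests.post(
    f"{NSX_MGR}/api/v1/logical-ports",
    json={
        "logical_switch_id": "<overlay-ls-uuid>",      # placeholder
        "admin_state": "UP",
        "attachment": {
            "attachment_type": "BRIDGEENDPOINT",
            "id": bep.json()["id"],
        },
    },
    auth=AUTH,
    verify=False,
)
lp.raise_for_status()
```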
Scenario 1 – Using External Default Gateway
Attach VM to the Bridged Overlay Segment
Here, we will point all the VMs on the overlay segment to the external default gateway configured on the ToR switches and run a connectivity test.
SUCCESS!!
Scenario 2 – Using T1 / T0 Logical Router as the Default Gateway
We will deploy a Tier-1 logical router with an uplink to a Tier-0 logical router, and enable route advertisement on the Tier-1 so that the bridged subnet is advertised upstream. A Policy API sketch of this step follows.
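A hedged sketch of this step via the Policy API; the Tier-1 name ("t1-bridge") and the Tier-0 path ("/infra/tier-0s/t0-gw") are assumptions for illustration, not values from the lab.

```python
import requests

NSX_MGR = "https://nsxmgr.lab.local"   # hypothetical, as before
AUTH = ("admin", "VMware1!VMware1!")

tier1 = {
    "display_name": "t1-bridge",
    "tier0_path": "/infra/tier-0s/t0-gw",   # assumed existing Tier-0
    # Advertise connected segments so the Tier-0 (and the physical
    # fabric behind it) learns the bridged subnet
    "route_advertisement_types": ["TIER1_CONNECTED"],
}

resp = requests.patch(
    f"{NSX_MGR}/policy/api/v1/infra/tier-1s/t1-bridge",
    json=tier1,
    auth=AUTH,
    verify=False,   # lab only
)
resp.raise_for_status()
```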
Now let's attach the bridged overlay segment to the Tier-1 logical router; the equivalent API call is sketched below.
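Attaching the segment is a PATCH on the same segment object: set connectivity_path to the Tier-1 and give the segment a gateway address. The downlink IP below (192.168.60.1/24) is my assumption; any free address in the bridged subnet other than the ToR's .100 would do.

```python
import requests

NSX_MGR = "https://nsxmgr.lab.local"   # hypothetical, as before
AUTH = ("admin", "VMware1!VMware1!")

patch = {
    "connectivity_path": "/infra/tier-1s/t1-bridge",  # Tier-1 from the previous step
    # Assumed Tier-1 downlink IP on the bridged subnet; must not clash
    # with the external ToR gateway at 192.168.60.100
    "subnets": [{"gateway_address": "192.168.60.1/24"}],
}

resp = requests.patch(
    f"{NSX_MGR}/policy/api/v1/infra/segments/bridged-ls",
    json=patch,
    auth=AUTH,
    verify=False,   # lab only
)
resp.raise_for_status()
```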
Now let's configure the IP settings on a VM on VLAN segment 60 to use the NSX-T Tier-1 logical router as its default gateway.
SUCCESS!!
This shows that traffic from the physical VLAN segment is now routed through the NSX-T logical routers. You can now take advantage of NSX-T services such as gateway firewalling; a sketch of one such rule follows.
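As a taste of that, here is a hedged Policy API sketch of a gateway firewall rule scoped to our Tier-1. The policy and rule names are hypothetical, and the built-in "/infra/services/SSH" service path should be verified against your NSX-T version.

```python
import requests

NSX_MGR = "https://nsxmgr.lab.local"   # hypothetical, as before
AUTH = ("admin", "VMware1!VMware1!")

# Hypothetical gateway policy dropping inbound SSH to the bridged
# subnet, enforced on the Tier-1. Names and CIDRs are illustrative.
policy = {
    "display_name": "bridged-seg-policy",
    "category": "LocalGatewayRules",
    "rules": [
        {
            "display_name": "drop-ssh-to-bridged",
            "source_groups": ["ANY"],
            "destination_groups": ["192.168.60.0/24"],
            "services": ["/infra/services/SSH"],   # predefined service
            "action": "DROP",
            "scope": ["/infra/tier-1s/t1-bridge"],  # enforce on our Tier-1
        }
    ],
}

resp = requests.patch(
    f"{NSX_MGR}/policy/api/v1/infra/domains/default/"
    "gateway-policies/bridged-seg-policy",
    json=policy,
    auth=AUTH,
    verify=False,   # lab only
)
resp.raise_for_status()
```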
In the next article, I will cover how to implement the L2 Bridging with NSX-T Edge Cluster. I hope this article was informative.
Thanks for reading.
Continue reading? Here is Part 2:
https://vxplanet.com/2019/06/12/nsx-t-l2-bridging-between-overlay-vlan-segments-part-2/