Hello everyone, I recently worked on a specific use case where the virtual clusters on the NSX logical segments of a tenant require access to a storage lake, which involves huge amounts of data transfer between the clusters and the storage lake. This could be achieved via the DLR-ESG routing path through the ESG appliances, but we didn't want to do that, to avoid a potential bottleneck in the future. Since the storage lake is meant only for the NSX private cloud tenants and is connected to the ToR local to the datacenter, we didn't want this traffic to be treated as North-South traffic for the environment. Second, we wanted an option for East-West routing between the virtual clusters on the VxLAN segments and the physical network in the datacenter. These requirements were met by using the DellEMC Networking 25G S5048-ON switches as a hardware VTEP at the ToR layer.

Logically, this means extending one of the logical VxLAN segments on the NSX platform to the physical ToR layer and bridging it with the VLAN for the storage lake. Any virtual cluster that needs access to the storage lake can be added to this bridged logical switch with an additional vNIC. Isolation between VMs or clusters on this segment can then be achieved with NSX firewalling and micro-segmentation.
The diagram below shows this scenario in the staging environment where I simulated the use case.
The staging environment had the following configuration:
- 4 X DellEMC PowerEdge R640 vSAN nodes
- 2 X DellEMC Networking S5048-ON ToR switches in VLT
- 1 X DellEMC Networking S3048-ON switch for out-of-band management
- Storage Boxes connected to ToR on VLAN 50
- NSX-V Enterprise
- Single Tenant
- Single DLR Instance in HA
- Three logical switches and two virtual clusters. Each virtual cluster attaches to its own dedicated logical switch; the third logical switch is a shared network bridged to the storage lake.
- Single ESG instance in HA.
Let's look at the configuration now:
VLT configuration on the S5048-ON ToR switches
With VLT, the two ToR switches appear as a single logical switch to the end devices and the upstream network. VLT switches have separate management planes, separate control planes (which stay in sync with each other), and a single data plane. VLT allows each switch to be managed independently, including lifecycle management, where each switch can be patched separately. This is unlike stacking, where patches are applied to the stack as a whole.
VLT requires a dedicated high-bandwidth interconnect between the nodes for control plane synchronization (L2 and L3), and also for data plane traffic when a route is down on either of the peers. One of the nodes becomes the VLT primary node and the other becomes the secondary node. A backup heartbeat link is required to avoid a split-brain scenario in case the interconnect link goes down.
In this environment, I used 2 X 100G interfaces in a port-channel as the VLTi interconnect. The management interfaces are used as the backup link. Each ESXi host had 2 X 10G uplinks to the ToR, each going to a separate VLT peer. At the time of writing this article, I used switch-independent mode, but I will change this to LACP in a couple of days.
This is the VLT configuration on the Primary and Secondary peers:
Static port-channel for the VLTi interconnect on the 100G interfaces:
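A minimal sketch of what this looks like, assuming Dell Networking OS9 on the S5048-ON; the port-channel ID and the 100G member ports below are placeholders from my lab, so adjust them to your environment:

```
! Static port-channel used as the VLTi peer link (no LACP needed here)
interface Port-channel 100
 description VLTi to VLT peer
 channel-member hundredGigE 1/53
 channel-member hundredGigE 1/54
 no shutdown
```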
Management Interface for the backup link (heartbeat only)
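The backup heartbeat rides over the out-of-band management network; a minimal sketch (the interface naming can vary per platform, and the IP is a placeholder):

```
! OOB management interface; this IP is used by the peer as its
! VLT back-up destination
interface ManagementEthernet 1/1
 ip address 192.168.10.1/24
 no shutdown
```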
VLT Configuration
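A sketch of the VLT domain itself, tying the VLTi port-channel and the backup destination together; the domain ID, system MAC, and peer management IP are placeholders:

```
vlt domain 1
 ! The VLTi port-channel built above
 peer-link port-channel 100
 ! Management IP of the VLT peer, used for the heartbeat
 back-up destination 192.168.10.2
 ! Lower primary-priority wins the primary role
 primary-priority 1
 ! Must match on both peers
 system-mac mac-address 02:00:00:00:00:01
 ! 0 on this peer, 1 on the other
 unit-id 0
```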
Verifying VLT
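These are the commands I use to check the VLT state; on a healthy domain, `show vlt brief` reports the ICL link, the backup link, and both peers as up:

```
! Overall domain, ICL and backup-link status
show vlt brief
! Per-VLT-LAG status on both peers
show vlt detail
! Heartbeat status over the management link
show vlt backup-link
! Configuration mismatches between the peers
show vlt mismatch
```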
I haven't enabled peer-routing between the VLT peers; VRRP is used instead.
e-VLT Configuration from S5048-ON ToR to Z9100-ON Aggregation layer
The link between the S5048-ON ToR and the aggregation layer is an e-VLT link. e-VLT links are used to establish connections between two different VLT domains. This is the configuration on one of the S5048-ON VLT peers:
LACP port-channel for the e-VLT link on the 100G interfaces:
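A sketch of the member interface and the resulting port-channel, again assuming OS9 (the port numbers and port-channel ID are placeholders); note the vlt-peer-lag statement, explained below:

```
! 100G member interface running LACP towards the aggregation layer
interface hundredGigE 1/49
 no ip address
 port-channel-protocol LACP
  port-channel 50 mode active
 no shutdown
!
! The resulting port-channel, marked as a VLT LAG shared with the peer
interface Port-channel 50
 description e-VLT to Z9100-ON aggregation
 no ip address
 switchport
 vlt-peer-lag port-channel 50
 no shutdown
```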
Two HundredGigE links in a port-channel on each VLT peer in the ToR go to the individual VLT peers on the aggregation layer, so a total of 4 links are established as a single port-channel connection between the two VLT domains. The command "vlt-peer-lag port-channel" converts the link to a VLT port-channel; this is very important so that the VLT domains see each other as single logical units.
Verifying e-VLT port channel
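To verify, I check the LAG and VLT status; on a healthy e-VLT, the port-channel shows all members up, and `show vlt detail` lists the LAG as up on both the local and peer switch:

```
! Port-channel state and member links
show interfaces port-channel 50 brief
! LACP negotiation state of the members
show lacp 50
! The VLT LAG as seen by both peers
show vlt detail
```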
The same configuration has to be applied on the other VLT peer, as well as on the peers in the other VLT domain.
VLAN and SVI Configuration
The VLANs below are created according to the type of traffic they carry. The VLAN configuration on the VLT peer is identical, except that the peer uses the next IP address in each subnet.
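As an example, this is roughly what the storage VLAN and its SVI would look like on the first peer; the VLAN ID, subnet, and tagged members are placeholders from my lab:

```
! VLAN 50 carries the storage lake traffic
interface Vlan 50
 description Storage
 ip address 172.16.50.2/24
 tagged Port-channel 50
 no shutdown
!
! On the VLT peer, the same VLAN gets the next IP: 172.16.50.3/24
```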
VRRP Configuration
To have a common default gateway for the end devices, VRRP is used. VRRP is not required if you enable peer-routing between the VLT peers; with peer-routing, each VLT peer acts as a proxy gateway for its peer and handles routing for traffic destined to its peer's IP address. I used VRRP to maintain consistency in the default gateway settings for the VMs, as well as in the DHCP scopes that I create later. VRRP in a VLT domain works as an Active-Active mechanism, i.e., each VLT peer will be in a forwarding state for routing.
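A sketch of the VRRP group on the storage SVI, continuing the addressing from the example above (the group ID and virtual address are placeholders; the peer runs the same group with a lower priority):

```
! VRRP group on the storage SVI; 172.16.50.1 becomes the common gateway
interface Vlan 50
 ip address 172.16.50.2/24
 vrrp-group 50
  priority 254
  virtual-address 172.16.50.1
 no shutdown
```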
Now that we have completed the necessary configuration on the S5048-ON ToR switches, let's move on to Part 2 to set up the VxLAN gateway on the ToR.
Thanks for reading.