Implementing Multicast Routing between vSphere DVS and DellEMC Networking

vSphere DVS supports IGMP and MLD Snooping, which enables precise and efficient multicast forwarding by listening to the IGMP messages sent to and from the virtual machines. The vSphere DVS supports IGMPv1, IGMPv2 and IGMPv3 for IPv4 multicast group addresses, and MLDv1 and MLDv2 for IPv6 multicast group addresses. When IGMP Snooping is enabled on the DVS, it dynamically eavesdrops on the IGMP join and leave messages sent by the virtual machines on the DVS Port Groups and maintains an L2 multicast table for the multicast groups to which the virtual machines have subscribed. This way, multicast traffic is not forwarded by the DVS to the vNIC of a virtual machine that has not subscribed to the multicast group. If an IGMP Querier is configured on the L3 Core switch or router, the DVS listens to the IGMP membership queries from the router and the membership reports from the virtual machines, and updates the L2 multicast table accordingly. This means that if a membership report is not sent by a virtual machine in response to an IGMP query, the entry for that virtual machine’s vNIC is removed from the DVS multicast table after the hold-time expires.

Let’s think about two scenarios:

What if a virtual machine which is a receiver of a multicast group is vMotioned to another ESXi host?

When a virtual machine is vMotioned, its vNIC configuration also migrates. The destination ESXi host sees this configuration and updates the multicast table for the group to ensure multicast traffic reaches the virtual machine. The DVS injects an IGMP query message to the VM, and the VM responds with a membership report that is seen by the DVS, the ToR switches and the L3 Core switch / router. This way, the L2 ToR switches update their member ports with the physical interfaces of the new ESXi host. Thus a vMotioned virtual machine doesn’t need to wait for the next IGMP query from the router to maintain its subscription to the multicast group; this is done proactively by the DVS.

What if an ESXi host Uplink fails and Virtual machines are failed over to the standby physical uplinks?

In this case, similar to the vMotion scenario, the DVS injects IGMP queries to the virtual machines affected by the failover. The virtual machines respond with their membership reports so that the multicast receiver presence is immediately known to the physical ToR switches and routers, allowing multicast forwarding to continue.

In this article, we will look at how to enable multicast routing between a vSphere DVS and Dell Networking ToR and L3 Core switches to allow multicast forwarding between a video streaming server on one VLAN and receivers on different VLANs. Let’s get started.

Environment Topology

[Topology diagram]

This is the topology that is used for this article. The environment has:

  • 3 x ESXi hosts (DellEMC PE R640)
  • 2 x DellEMC Networking L2 switches as ToRs. They are in VLT configuration.
  • 1 x DellEMC Networking L3 Core switch
  • 2 x 10G uplinks per ESXi host
  • A single DVS spanning across all the hosts (No NSX 🙂  )
  • Other than the infrastructure VLAN Port Groups (Management, vMotion, vSAN etc.), there are 3 VM Network Port Groups – VLAN 11, VLAN 12 & VLAN 13
  • Multicast source is a VM (vStream_Server_01) on VLAN 11 Port Group.
  • Multicast application is a VLC video stream on group 239.2.2.15 running in vStream_Server_01 on VLAN 11
  • Multicast receivers are VMs (vStream_Receiver_01 & vStream_Receiver_03) on VLAN 12 & 13 port groups.

Enabling PIM Multicast routing on the L3 Core Switch

The command “ip multicast-routing” enables multicast routing on the Core switch. In our topology, only VLANs 11, 12 & 13 participate in multicasting, so we enable PIM Sparse mode on these VLANs (only Sparse mode is supported on these switches, and there is no reason to use PIM Dense mode anyway).

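The relevant configuration looks roughly like this (a sketch assuming Dell OS9-style CLI; exact syntax may vary with the OS version):

    ip multicast-routing
    !
    interface vlan 11
     ip pim sparse-mode
    !
    interface vlan 12
     ip pim sparse-mode
    !
    interface vlan 13
     ip pim sparse-mode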

For PIM Sparse mode, we need a Rendezvous Point (RP). We use a Loopback interface as the RP and enable PIM Sparse mode on this Loopback interface.

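A sketch of the RP loopback configuration (the loopback number and IP address here are illustrative):

    interface loopback 0
     ip address 10.10.10.10/32
     ip pim sparse-mode
     no shutdown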

We enable auto-advertisement of the RP so that it can be discovered by the other PIM-speaking multicast routers on the network. Discovery is handled by a discovery agent, running locally or on a different router, that listens to the RP advertisements and announces the RP to all other PIM-speaking routers. The parameter “rp-candidate” enables advertisement of the RP, and “bsr-candidate” enables the discovery agent (the Bootstrap Router).

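On OS9-style CLI this is roughly as follows (a sketch; check your OS version for the exact syntax):

    ip pim rp-candidate loopback 0
    ip pim bsr-candidate loopback 0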

We can verify the multicast group range covered by the RP advertisements.

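The RP-to-group mappings can typically be verified with a command along these lines (OS9-style sketch):

    show ip pim rp mapping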

Enabling IGMP Snooping on the ToR Switches

IGMP Snooping is disabled by default on DellEMC Networking switches. Enabling IGMP Snooping globally will enable IGMP Snooping on all the VLANs configured on the switch. Let’s enable this on ToR1.

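On OS9-style switches this is a single global command (a sketch; syntax may vary by OS version):

    ip igmp snooping enable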

Every VLAN on the switch should see an IGMP Querier for that VLAN, which periodically confirms the liveness of the multicast receivers. This is usually the L3 Core switch that we configured earlier or a PIM-enabled router. If an IGMP Querier is not available for a VLAN, we could enable the IGMP Snooping Querier on that VLAN on the ToR switches. In our case this is not needed, as we have enabled PIM Sparse mode on all the necessary VLANs.
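For reference, a snooping querier could be enabled per VLAN roughly like this (an illustrative OS9-style sketch only; not applied in this setup):

    interface vlan 12
     ip igmp snooping querier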

Let’s confirm if the ToR1 switch can see the IGMP Querier.

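On OS9-style CLI, the detected multicast router (querier) ports can typically be listed with:

    show ip igmp snooping mrouter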

We can see the Router port as PortChannel 100, which is the VLT trunk interface to the L3 Core.

Let’s enable IGMP Snooping on ToR2 the same way as we did for ToR1, using the same global command.


Enabling IGMP Snooping on the vSphere DVS

This is a global setting at the DVSwitch level (Edit Settings -> Advanced -> Multicast filtering mode, set to IGMP/MLD snooping). Note that this option requires the DVS to be at version 6.0.0 or later.

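For those who prefer automation, the same setting can be flipped programmatically. Here is a minimal PowerCLI sketch, assuming a DVS named “DVS01” and an active vCenter connection (the switch name is illustrative):

    # Assumes VMware PowerCLI is installed and Connect-VIServer has been run
    $dvs  = Get-VDSwitch -Name "DVS01"
    $spec = New-Object VMware.Vim.VMwareDVSConfigSpec
    # The spec must carry the current config version for the reconfigure call to be accepted
    $spec.ConfigVersion          = $dvs.ExtensionData.Config.ConfigVersion
    $spec.MulticastFilteringMode = "snooping"   # the default mode is "legacyFiltering"
    $dvs.ExtensionData.ReconfigureDvs($spec)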

 

Configuring a Multicast Source Stream

Multicast source is a VM named “vStream_Server_01” on VLAN 11 Port Group. Multicast application is a VLC video stream on group 239.2.2.15 running in vStream_Server_01.

Let’s configure the VLC video Stream. 

Click Media -> Stream, choose the file and click Stream.


Select RTP as the transport format.


This is the multicast group that we use for the stream – 239.2.2.15. The receivers should subscribe to this group to receive the stream. Receivers on the same VLAN get the stream via IGMP Snooping at L2, while receivers on other VLANs rely on PIM routing at L3.


By default, the multicast stream is sent with a TTL value of 1, which means it is never forwarded past the first PIM router. If you need the stream to be routed across one or more PIM hops, increase the TTL value accordingly.

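For reference, the Stream wizard builds an equivalent stream output chain behind the scenes. A command-line sketch of the same setup (the file name, port 5004 and TTL of 4 are illustrative values):

    vlc demo.mp4 --sout "#rtp{dst=239.2.2.15,port=5004,mux=ts,ttl=4}"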

The multicast stream should now be up and streaming.


Configuring a Multicast Stream Receiver

Let’s configure the first multicast receiver on VLAN 12, on the same ESXi host as the multicast source. The receiver doesn’t need to be on the same host; we place it there to demonstrate a vMotion of a multicast receiver later.

Click Media -> Open Network Stream and enter the RTP address of the Multicast Source.

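The command-line equivalent would be something like this (port 5004 is assumed to match the sender’s configuration):

    vlc rtp://@239.2.2.15:5004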

The receiver should be able to subscribe to the group 239.2.2.15 and receive the stream. 


Let’s spin up a few more receivers on the same VM and see that they all receive the stream.


Verifying the PIM Routing Table on the Core Switch

Let’s look at the PIM routing database for the multicast group 239.2.2.15.

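On OS9-style CLI, the PIM Tree Information Base can typically be inspected with:

    show ip pim tib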

As we can see in the PIM routing table, there are two entries for group 239.2.2.15. 

  • (*, 239.2.2.15) -> When the multicast receiver VM on VLAN 12 sends the first IGMPv2 Join message, it is in the form (*, G), which means any source, group 239.2.2.15. Look at the flags – the F and J bits are set, which means that if there are multiple paths from the receiver to the source, this router will use the Shortest Path Tree (SPT), which can even be a path other than the one through the RP. In our case we have only a single path, but an (S, G) entry is still created to specify the best path between the multicast receiver and the source.
  • (192.168.11.182, 239.2.2.15) -> This is the SPT entry between the receiver and the source. Note that it has the T bit (SPT) set.

These (S, G) entries can be used to apply security filtering in case a rogue multicast source streams to the same multicast group.

We can see in the table that the Incoming interface is VLAN 11, where our multicast source resides, and the Outgoing interface is VLAN 12, where our multicast receiver resides.

This is the multicast routing table.

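On OS9-style CLI it can typically be displayed with:

    show ip mroute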

Verifying the L2 Multicast Tables on the ToR Switches

The ToR switches maintain an L2 Multicast Table with a mapping of multicast groups and member interfaces. We can see from the table on ToR2 that for the group 239.2.2.15 and for receiver VLAN 12, we have two member ports – one is the ESXi uplink and the other is the link to the Core switch.

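On OS9-style switches these entries can typically be listed with a command along these lines (the exact name may vary by release):

    show ip igmp snooping groups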

The IGMP Snooping table on ToR1, on the other hand, shows the member interface as the VLT interconnect to ToR2 (PortChannel 1), because the receiver uses the ESXi uplink that is connected to ToR2. In case of an uplink failure, the port memberships are updated accordingly.


Spinning up a second Multicast Stream Receiver

Let’s spin up a new multicast receiver on VLAN 13 and see what the PIM forwarding table and L2 multicast tables look like.

When the Core switch receives an IGMP Join message from VLAN 13, it adds VLAN 13 as an Outgoing interface in the PIM forwarding table.


Let’s see the IGMP table on the ToR switches. The table should be updated with VLAN 13 member ports. We will see the same Member port for both VLANs because the Receivers are on the same ESXi host. 


The new receiver on VLAN 13 should now be able to receive the multicast stream.


Let’s vMotion one Receiver to another ESXi host and see what happens.

vMotion of a Multicast Receiver to another ESXi host

Let’s vMotion vStream_Receiver_03 (VLAN 13) to another ESXi host while it is actively subscribed to the multicast stream.


As explained at the beginning of the article, the receiver should still be able to maintain the subscription and receive the stream. The DVS injects an IGMP query message to the receiver VM on the new ESXi host, and the VM responds with a membership report that is seen by the DVS, the ToR switches and the L3 Core switch / router. This way, the L2 ToR switches update their member ports with the physical interfaces of the new ESXi host.


There was a slight break of about 2 seconds during the vMotion, but the receiver continued to receive the multicast stream.


I hope this article was informative. Thanks for reading.

Interested in reading further? Here is Part 2:

https://vxplanet.com/2019/09/18/implementing-multicast-routing-between-nsx-edges-and-dellemc-networking/

 
