Posts from ‘IP Multicast’

Feb
09

Five hours of new videos have been added to INE’s CCIE Service Provider Version 3.0 Advanced Technologies Class, covering Multicast VPN and QoS support for Layer 3 MPLS VPNs.  Specifically, these videos cover Multicast VPN and QoS theory, configuration, verification, and troubleshooting on both regular IOS and IOS XR, including PIM Sparse Mode, PIM Source Specific Multicast, VRF-aware PIM, Multicast Distribution Trees, BGP MDT exchange, and SP QoS classification, marking, scheduling, and admission control.  For the remainder of February these Multicast videos are free for all users to view.  The links can be found below, or through the download and streaming links for the class on your INE members site.

Enjoy!


Aug
18

Today launches the first entry in our new INE Video Blog series.  Our first video covers the advanced design issues of running Multicast and Auto-RP over NBMA networks such as Frame Relay hub-and-spoke.  Specifically, this video focuses on possible solutions such as sparse-dense mode, sparse mode with the Auto-RP listener feature, and sparse mode with a default RP placement, along with PIM Join issues related to point-to-point vs. multipoint interfaces and their underlying IGP protocols.
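As a quick reference, here is a minimal configuration sketch of one of the options covered in the video, sparse mode with the Auto-RP listener feature, assuming a hypothetical hub router where Serial0/0 is the Frame Relay interface and Loopback0 serves as the candidate RP and mapping agent; the addressing and scope values are illustrative only:

! Hub router (hypothetical addressing)
interface Serial0/0
 ip address 10.0.0.1 255.255.255.0
 encapsulation frame-relay
 ip pim sparse-mode
 ip pim nbma-mode
!
! Allow the Auto-RP groups 224.0.1.39 and 224.0.1.40 to be flooded
! even though every interface runs sparse mode only
ip pim autorp listener
!
! Advertise Loopback0 as candidate RP and as the mapping agent
ip pim send-rp-announce Loopback0 scope 16
ip pim send-rp-discovery Loopback0 scope 16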
Continue Reading


Dec
02

After working with the December 2010 London Bootcamp on Multicast for the better part of Day 4 in our 12-day bootcamp, I returned to the hotel to find the following post on my Facebook page – “Multicast is EVIL!”

Why do so many students feel this way about this particular technology? I think one of the biggest challenges is that troubleshooting Multicast reminds us of just what an “art” solving network issues can become. And speaking of troubleshooting, in the Version 4 Routing and Switching exam we may have to contend with fixing problems beyond our own “self-induced” variety, thanks to the initial 2-hour Troubleshooting section, which may indeed include Multicast-related Trouble Tickets.

Your best defense against any issues with this technology in the lab exam is the new 3-Day Multicast technology bootcamp. Also, be sure to enjoy the latest free vSeminar from Brian McGahan, Troubleshooting IP Multicast Routing.


Sep
18

When we ask students “what are your weakest areas” or “what are your biggest areas of concern” for the CCIE Lab Exam, we typically hear non-core topics like Multicast, Security, QoS, BGP, etc. As such, INE has responded with a series of bootcamps focused on these disciplines.

The IPv4/IPv6 Multicast 3-Day live, online bootcamp and the associated Class On-Demand version seek to address the often confounding subject of Multicast. Detailed coverage of Multicast topics for the following certifications is provided:

Cisco Certified Network Professional (CCNP)

Cisco Certified Design Associate (CCDA)

Cisco Certified Design Professional (CCDP)

Cisco Certified Design Expert (CCDE)

Cisco Certified Internetwork Expert Routing & Switching (CCIE R&S)

Cisco Certified Internetwork Expert Service Provider (CCIE Service Provider)

Cisco Certified Internetwork Expert Security (CCIE Security)

To purchase the live and on-demand versions of the course for just $295, click here. The live course runs 11 AM to 6 PM US EST on September 29, 30, and October 1.

The preliminary course outline is as follows:

  • Module 1 Introduction to Multicast

Lesson 1 The Need for Multicast

Lesson 2 Multicast Traffic Characteristics and Behavior

Lesson 3 Multicast Addressing

Lesson 4 IGMP

Lesson 5 Protocol Independent Multicast

  • Module 2 IGMP

Lesson 1 IGMP Version 1

Lesson 2 IGMP Version 2

Lesson 3 IGMP Version 3

Lesson 4 CGMP

Lesson 5 IGMP Snooping

Lesson 6 IGMP Optimization

Lesson 7 IGMP Security

Lesson 8 Advanced IGMP Mechanisms


Continue Reading


Sep
17

Abstract

This publication illustrates some common techniques for troubleshooting multicast issues in IP networks. Common problems and their causes are discussed, and troubleshooting techniques are demonstrated. PIM Sparse mode is used for most of the examples, since it is the most complicated mode of multicast signaling. The suggested troubleshooting approach separates control-plane from data-plane troubleshooting and relies heavily on the mroute command for control-plane verification. This publication requires a solid understanding of intra-domain multicast routing technologies.

Common Reasons for Multicast Problems

In short, the one common reason behind most multicast routing issues is incongruence between PIM and the logical/physical topology. Ideally, multicast should be deployed in a single IGP domain with PIM enabled on all links running the IGP, with all links preferably being point-to-point or broadcast multi-access. If you have multicast running across a domain that has multiple IGPs, you do not have PIM enabled on all links, or you have NBMA links in the topology, you have opened the door to problems. Unfortunately, the “problem” conditions just described are very common in the CCIE lab exam environment.
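A quick way to spot such incongruence is to compare the PIM and IGP views of the topology. The commands below are standard IOS show commands and assume nothing about the specific devices or IGP in use (OSPF is just an example):

! List every interface running PIM and its mode
show ip pim interface
! List the PIM adjacencies that actually formed
show ip pim neighbor
! Compare against the IGP adjacencies, e.g. for OSPF
show ip ospf neighbor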

The most common type of multicast issue is the RPF failure. RPF checks are used at both the control and data planes of multicast routing. The control plane involves PIM signaling: some PIM messages are subject to RPF checks. For example, PIM (*,G) Joins are sent along the shortest path toward the RP. The BSR/RP address in BSR messages is subject to an RPF check as well. Notice that this logic does not apply to PIM Register messages, since the unicast Register packet may arrive on any interface. However, an RPF check is performed on the encapsulated multicast source address to construct the SPT toward the multicast source.

Data-plane RPF checks are performed every time a multicast data packet is received for forwarding. The source IP address in the packet must be reachable via the receiving interface, or the packet is dropped. Theoretically, with PIM Sparse Mode the RPF checks at the control-plane level should eliminate data-plane RPF failures, but such failures are still common during IGP re-convergence and on multipoint non-broadcast interfaces.
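To confirm whether data-plane RPF drops are actually happening, the per-group counters can be inspected. A brief sketch using standard IOS commands, with an illustrative group address:

! The "Other: total/RPF failed/other drops" counters expose RPF drops
show ip mroute 239.1.1.1 count
! Real-time trace of forwarding decisions (process-switched traffic only)
debug ip mpacket 239.1.1.1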

PIM Dense Mode is different from Sparse Mode in the sense that data-plane operations precede control-plane signaling. One typical “irresolvable” RPF problem with PIM Dense Mode is known as “split-horizon” forwarding, where a packet received on one interface should be forwarded back out of the same interface in a hub-and-spoke topology. The same problem may occur with PIM Sparse Mode, but this type of signaling allows the NBMA interface to be treated as a collection of point-to-point links by virtue of PIM NBMA mode.
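For reference, a minimal sketch of enabling PIM NBMA mode on a hypothetical multipoint hub interface (the interface name and addressing are illustrative); keep in mind that NBMA mode only helps sparse-mode groups:

interface Serial0/0
 ip address 10.0.0.1 255.255.255.0
 encapsulation frame-relay
 ip pim sparse-mode
! Track joins per neighbor on the NBMA interface so traffic arriving
! from one spoke can be replicated back out toward the other spokes
 ip pim nbma-mode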
Continue Reading


Mar
08

Introduction

This blog post provides an example M-VPN configuration using out-of-band mapping of C-multicast groups to core M-LSP tunnels signaled via M-LDP. The reader is assumed to have solid knowledge of M-VPNs and an understanding of M-LDP. For people unfamiliar with M-LDP, the following blog post provides some introductory reading: The long road to M-LSPs. The terminology used in this article follows the common convention of using the “P-” prefix for provider-specific objects (e.g. multicast routes or tunnels) and the “C-” prefix for customer objects (e.g. private IP addresses). Before we begin, I would like to thank our reader Hans Verkerk, who pointed out to me that Cisco IOS does support M-LDP in the latest 12.2SR images.

Interworking M-LDP with PIM: In-Band and Out-of-Band

To recap, M-LDP allows for the construction of P2MP (point-to-multipoint) and MP2MP (multipoint-to-multipoint) LSPs. Every LSP is identified by a tuple made of the root node IP address X and an opaque value Y, shared by all leaves. An MP2MP LSP is identified by a shared root IP address and an opaque value, and consists of downstream and upstream sections. The downstream part is a P2MP LSP rooted at the shared node, and the upstream part is an MP2P (multipoint-to-point) LSP built starting from the root node toward the leaves, allowing the latter to send traffic upstream to the root. It is important to note that the components of the MP2P LSP are classic unicast LSPs connecting each and every leaf to the root of the MP2MP LSP. Look at the diagram below for an illustration of MP2MP LSP signaling.

[Diagram: MP2MP LSP signaling for M-LDP-signaled mVPNs]

One of the most prominent applications of M-LSPs is efficient multicast traffic encapsulation in MPLS networks. Since M-LSPs are signaled using M-LDP while multicast trees are built using PIM, the problem is mapping multicast trees to M-LSPs. Intuitively, shortest-path trees map to P2MP LSPs and shared trees correspond to MP2MP LSPs. There are two approaches to implementing such mapping: the first uses in-band signaling, where PIM messages are directly translated into M-LDP FEC bindings, and the second uses out-of-band mapping, e.g. based on manual configuration.
Continue Reading


Jan
17

Abstract:

The Inter-AS Multicast VPN solution introduces some challenges in cases where the peering autonomous systems implement a BGP-free core. This post illustrates a known solution to this problem, implemented in Cisco IOS software. The solution involves the use of special MP-BGP and PIM extensions. The reader is assumed to have an understanding of Cisco’s basic mVPN implementation, the PIM protocol, and Multi-Protocol BGP extensions.

Abbreviations used

mVPN – Multicast VPN
MSDP – Multicast Source Discovery Protocol
PE – Provider Edge
CE – Customer Edge
RPF – Reverse Path Forwarding
MP-BGP – Multi-Protocol BGP
PIM – Protocol Independent Multicast
PIM SM – PIM Sparse Mode
PIM SSM – PIM Source Specific Multicast
LDP – Label Distribution Protocol
MDT – Multicast Distribution Tree
P-PIM – Provider Facing PIM Instance
C-PIM – Customer Facing PIM Instance
NLRI – Network Layer Reachability Information

Inter-AS mVPN Overview

Continue Reading


Oct
31

In this post we will quickly discuss the most commonly needed IGMP timers. First, as we know, multicast routers periodically query hosts on a segment. If two or more routers share the same segment, the one with the lowest IP address becomes the IGMP querier (per the IGMPv2 election procedure; as you remember, IGMPv1 let the multicast routing protocol define the querier).
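As an illustration of where these knobs live, the most commonly adjusted IGMP timers are configured per interface. The interface name and the values below are examples only, not recommendations:

interface FastEthernet0/0
! Interval between IGMP general queries sent by the querier (default 60 seconds)
 ip igmp query-interval 30
! Maximum response time advertised in IGMPv2 queries (default 10 seconds)
 ip igmp query-max-response-time 5
! How long non-querier routers wait before taking over the querier role
 ip igmp querier-timeout 120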

Continue Reading


May
06

Hi Brian,

I have a problem with the multicast helper topic, the case when a broadcast network is separated by a multicast network and then continues again on the other side. Can you discuss this topic?

Thanks,

Nizami

Hi Nizami,

The multicast helper-map command is similar in theory to how the unicast “ip helper-address” works. With the IP helper address feature, IP broadcast packets, such as UDP-based DHCP requests, have their destination addresses translated to a unicast address, such as the DHCP server’s. With the IP multicast helper map feature, IP broadcast packets have their destination addresses translated to a multicast address.
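For comparison, the unicast analog is configured on the receiving interface roughly as follows; the server address and the DHCP example are purely illustrative:

! Unicast case: relay UDP broadcasts (e.g. DHCP) as unicast to a server
interface FastEthernet0/0
 ip helper-address 10.1.1.100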

The common design application of this feature is in financial trading networks where a legacy stock ticker application sends packets out as broadcast UDP. The router on the attached segment can then convert the broadcast destination to multicast, send the packet into the multicast transit network, and then on the last hop router attached to the receiver translate the multicast packet back to a broadcast. This allows the network to scale above a flat layer 2 design where all application senders and receivers are in the same IP subnet, to a hierarchical layer 3 routed multicast network, without the application itself being modified.

Configuration-wise the feature is implemented on two devices, the first hop router attached to the broadcast sender, and the last hop router attached to the broadcast receiver. The first hop router listens for broadcast packets to be received on the incoming interface attached to the sender. Based on an access-list match (usually the UDP port of the application), the router translates the destination address to a user defined multicast address, and forwards the packet out interfaces running PIM according to the multicast routing table. This design therefore assumes that the underlying PIM topology is built end-to-end. Once the last hop router receives the traffic on the incoming interface facing the multicast network, the traffic is again categorized by an access-list, and additionally by the multicast group used on the first hop. Based on the directed broadcast address defined on the last hop router the traffic is then dropped off on the LAN segment facing the receiver.

In our particular design the network looks like this:

SW1 — R4 — R3 — R2 — R1 — SW2

SW1 is the broadcast sender (i.e. the source application), SW2 is the receiver (i.e. the destination application), R4 is the first hop router, and R1 is the last hop router. IGP and PIM adjacencies exist between R4 – R3, R3 – R2, and R2 – R1.

R4’s configuration, the first hop router, looks as follows:

R4#
interface FastEthernet0/0
 description TO SENDER APPLICATION – SW1
 ip address 173.20.47.4 255.255.255.0
 ip multicast helper-map broadcast 224.1.2.3 100
!
ip forward-protocol udp 31337
access-list 100 permit udp any any eq 31337

This configuration means that if R4 receives a UDP broadcast going to port 31337 inbound on Fa0/0, it will be translated to the multicast address 224.1.2.3. Note that the “ip forward-protocol” command is necessary in order to process switch UDP traffic going to the port in question. Without process switching, the helper-map feature cannot correctly categorize and translate the traffic.

R1’s configuration, the last hop router, looks as follows:

R1#
interface Serial0/0.102 point-to-point
 description TO R2
 ip address 173.20.12.1 255.255.255.0
 ip pim dense-mode
 ip multicast helper-map 224.1.2.3 173.20.18.255 100
 frame-relay interface-dlci 102
!
interface FastEthernet0/0
 description TO RECEIVER – SW2
 ip address 173.20.18.1 255.255.255.0
 ip directed-broadcast
!
ip forward-protocol udp 31337
access-list 100 permit udp any any eq 31337

This configuration means that if R1 receives a UDP multicast going to the group address 224.1.2.3 at port 31337 inbound on S0/0.102 it will be translated to the directed broadcast address 173.20.18.255. Since the link 173.20.18.0/24 is directly connected and has the directed broadcast address of 173.20.18.255 by default, the configuration implies that traffic matching the helper map on S0/0.102 will be sent as a broadcast out Fa0/0.

Note the use of the “ip forward-protocol” command as before in order to process switch the UDP traffic. Additionally the “ip directed-broadcast” command is enabled on the last hop outgoing interface since in current IOS versions this is disabled by default for security purposes.

To verify the functionality of this feature we can use the IP SLA feature in the IOS to generate broadcast UDP traffic on the sender. This configuration on SW1 is as follows:

rtr 1
 type udpEcho dest-ipaddr 255.255.255.255 dest-port 31337 source-ipaddr 173.20.47.7 source-port 12345 control disable
 timeout 0
 frequency 5
rtr schedule 1 life forever start-time now

This config means that SW1 will generate a UDP packet sourced from the address 173.20.47.7 at port 12345 going to the address 255.255.255.255 at port 31337 every 5 seconds, and will not wait for a response back. The following debug on R4, the first hop router, verifies that the packet is received and is translated into multicast.

Rack20R4#debug ip packet detail
IP packet debugging is on (detailed)
IP: s=173.20.47.7 (FastEthernet0/0), d=255.255.255.255, len 44, rcvd 2
    UDP src=12345, dst=31337
Rack20R4#undebug all
All possible debugging has been turned off

Rack20R4#debug ip mpacket
IP multicast packets debugging is on
IP(0): s=173.20.47.7 (FastEthernet0/0) d=224.1.2.3 (Serial0/0) id=0, ttl=254, prot=17, len=44(44), mforward
Rack20R4#undebug all
All possible debugging has been turned off

From the unicast “debug ip packet detail” we can see the packet is received in Fa0/0 from SW1 with the proper destination and port information. Next, the multicast “debug ip mpacket” shows us that the packet has been translated to the multicast address 224.1.2.3 and is forwarded out Serial0/0 towards R3.

As R4, R3, R2, and R1 receive the multicast packet the multicast routing table is populated as follows.

Rack20R4#show ip mroute 224.1.2.3
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.2.3), 01:24:42/stopped, RP 0.0.0.0, flags: D
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Serial0/0, Forward/Dense, 01:24:42/00:00:00

(173.20.47.7, 224.1.2.3), 00:01:27/00:02:58, flags: T
  Incoming interface: FastEthernet0/0, RPF nbr 0.0.0.0
  Outgoing interface list:
    Serial0/0, Forward/Dense, 00:01:27/00:00:00

Rack20R3#show ip mroute 224.1.2.3
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.2.3), 01:25:36/stopped, RP 0.0.0.0, flags: D
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Serial1/1.312, Forward/Dense, 01:25:36/00:00:00
    Serial1/0, Forward/Dense, 01:25:36/00:00:00

(173.20.47.7, 224.1.2.3), 00:02:22/00:02:54, flags: T
  Incoming interface: Serial1/0, RPF nbr 173.20.0.4
  Outgoing interface list:
    Serial1/1.312, Forward/Dense, 00:02:23/00:00:00

Rack20R2#show ip mroute 224.1.2.3
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.2.3), 01:25:27/stopped, RP 0.0.0.0, flags: D
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Serial0/0.213, Forward/Dense, 01:25:27/00:00:00
    Serial0/0.201, Forward/Dense, 01:25:27/00:00:00

(173.20.47.7, 224.1.2.3), 00:02:12/00:02:54, flags: T
  Incoming interface: Serial0/0.213, RPF nbr 173.20.23.3
  Outgoing interface list:
    Serial0/0.201, Forward/Dense, 00:02:13/00:00:00

Rack20R1#show ip mroute 224.1.2.3
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.2.3), 01:25:42/stopped, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Serial0/0.102, Forward/Dense, 01:25:42/00:00:00

(173.20.47.7, 224.1.2.3), 00:02:27/00:02:57, flags: PLTX
  Incoming interface: Serial0/0.102, RPF nbr 173.20.12.2
  Outgoing interface list: Null

Once the packet is received on R1, the last hop router, “debug ip mpacket” shows the packet coming in as multicast, while “debug ip packet detail” shows the packet being converted back into a broadcast. This is also verified by the “debug ip packet” output on SW2, the receiver of the packet.

Rack20R1#debug ip mpacket
IP multicast packets debugging is on
IP(0): s=173.20.47.7 (Serial0/0.102) d=224.1.2.3 id=0, ttl=251, prot=17, len=48(44), mroute olist null
Rack20R1#undebug all
All possible debugging has been turned off

Rack20R1#debug ip packet detail
IP packet debugging is on (detailed)
IP: tableid=0, s=173.20.47.7 (Serial0/0.102), d=173.20.18.255 (FastEthernet0/0), routed via RIB
Rack20R1#undebug all
All possible debugging has been turned off

Rack20SW2#debug ip packet
IP packet debugging is on
IP: s=173.20.47.7 (Vlan18), d=255.255.255.255, len 44, rcvd 2
IP: s=173.20.47.7 (Vlan18), d=255.255.255.255, len 44, stop process pak for forus packet
Rack20SW2#undebug all
All possible debugging has been turned off

This feature can also be used in the opposite manner, where a multicast packet is received, converted to broadcast, and then converted back to multicast. In either case the configuration depends on the design and functionality of the source and destination application.
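Sketched briefly, the reverse direction reuses the same two commands with the roles swapped. The interfaces below are illustrative, while the group, broadcast address, and port carry over from this example:

! Router converting the incoming multicast to a directed broadcast
interface Serial0/0
 ip pim dense-mode
 ip multicast helper-map 224.1.2.3 173.20.18.255 100
!
interface FastEthernet0/0
 ip directed-broadcast
!
! Downstream router on that segment converting the broadcast back to multicast
interface FastEthernet0/0
 ip multicast helper-map broadcast 224.1.2.3 100
!
! Both routers still need process switching for the port in question
ip forward-protocol udp 31337
access-list 100 permit udp any any eq 31337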


Jan
02

Hi Brian,

I enjoy the new blog feature. Lots of valuable information condensed in a small space. Could you explain in a nutshell how to troubleshoot multicast RPF failures? I understand the concept, just figuring out what shows and/or debugs to use always seems to take me 30 minutes rather than 5 minutes.

First let’s talk briefly about what the RPF check, or Reverse Path Forwarding check, is for multicast. PIM is known as Protocol Independent Multicast routing because it does not exchange its own information about the network topology. Instead, it relies on the accuracy of an underlying unicast routing protocol like OSPF or EIGRP to maintain a loop-free topology. When a multicast packet is received by a router running PIM, the device first looks at the source IP of the packet. Next, the router does a unicast lookup on the source, as in “show ip route w.x.y.z”, where “w.x.y.z” is the source. The outgoing interface for that route in the unicast routing table is then compared with the interface on which the multicast packet was received. If the incoming interface of the packet and the outgoing interface for the route are the same, it is assumed that the packet is not looping, and the packet is a candidate for forwarding. If the incoming interface and the outgoing interface are *not* the same, a loop-free path cannot be guaranteed, and the RPF check fails. All packets for which the RPF check fails are dropped.
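One command that summarizes exactly this lookup on any PIM router is “show ip rpf”; it reports the RPF interface, the RPF neighbor, and which table (unicast routing, static mroute, or MBGP) supplied the answer. Using the source address from this example:

! Shows the RPF interface, RPF neighbor, and the table the route came from
show ip rpf 150.1.124.4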

Now, as far as troubleshooting the RPF check goes, there are a couple of useful show and debug commands available to you in IOS. Suppose the following topology:

[Diagram: multicast RPF failure topology]

We will be running EIGRP on all interfaces, and PIM dense mode on R2, R3, R4, and SW1. Note that PIM will not be enabled on R1. R4 is a multicast source that is attempting to send a feed to the client, SW1. On SW1 we will be generating an IGMP join message to R3 by issuing the “ip igmp join” command on SW1’s interface Fa0/3. On R4 we will be generating multicast traffic with an extended ping. First let’s look at the topology with a successful transmission:

SW1#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
SW1(config)#int fa0/3
SW1(config-if)#ip igmp join 224.1.1.1
SW1(config-if)#end
SW1#

R4#ping
Protocol [ip]:
Target IP address: 224.1.1.1
Repeat count [1]: 5
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]: y
Interface [All]: Ethernet0/0
Time to live [255]:
Source address: 150.1.124.4
Type of service [0]:
Set DF bit in IP header? [no]:
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:
Packet sent with a source address of 150.1.124.4

Reply to request 0 from 150.1.37.7, 32 ms
Reply to request 1 from 150.1.37.7, 28 ms
Reply to request 2 from 150.1.37.7, 28 ms
Reply to request 3 from 150.1.37.7, 28 ms
Reply to request 4 from 150.1.37.7, 28 ms

Now let’s trace the traffic flow starting at the destination and working our way back up the reverse path.

SW1#show ip mroute 224.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report, Z - Multicast Tunnel
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.1), 00:03:05/stopped, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/3, Forward/Dense, 00:03:05/00:00:00

(150.1.124.4, 224.1.1.1), 00:02:26/00:02:02, flags: PLTX
Incoming interface: FastEthernet0/3, RPF nbr 150.1.37.3
Outgoing interface list: Null

SW1’s RPF neighbor for (150.1.124.4,224.1.1.1) is 150.1.37.3, which means that SW1 received the packet from R3.

R3#show ip mroute 224.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.1), 00:03:12/stopped, RP 0.0.0.0, flags: DC
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Serial1/3, Forward/Dense, 00:03:12/00:00:00
Serial1/2, Forward/Dense, 00:03:12/00:00:00
Ethernet0/0, Forward/Dense, 00:03:12/00:00:00

(150.1.124.4, 224.1.1.1), 00:02:33/00:01:46, flags: T
Incoming interface: Serial1/3, RPF nbr 150.1.23.2
Outgoing interface list:
Ethernet0/0, Forward/Dense, 00:02:34/00:00:00
Serial1/2, Prune/Dense, 00:02:34/00:00:28

R3’s RPF neighbor is 150.1.23.2, which means the packet came from R2.

R2#show ip mroute 224.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.1), 00:02:44/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/0, Forward/Dense, 00:02:44/00:00:00
Serial0/1, Forward/Dense, 00:02:44/00:00:00

(150.1.124.4, 224.1.1.1), 00:02:44/00:01:35, flags: T
Incoming interface: FastEthernet0/0, RPF nbr 0.0.0.0
Outgoing interface list:
Serial0/1, Forward/Dense, 00:02:45/00:00:00

R2 has no RPF neighbor, meaning the source is directly connected. Now let’s compare the unicast routing table from the client back to the source.

SW1#show ip route 150.1.124.4
Routing entry for 150.1.124.0/24
Known via "eigrp 1", distance 90, metric 20540160, type internal
Redistributing via eigrp 1
Last update from 150.1.37.3 on FastEthernet0/3, 00:11:23 ago
Routing Descriptor Blocks:
* 150.1.37.3, from 150.1.37.3, 00:11:23 ago, via FastEthernet0/3
Route metric is 20540160, traffic share count is 1
Total delay is 21100 microseconds, minimum bandwidth is 128 Kbit
Reliability 255/255, minimum MTU 1500 bytes
Loading 1/255, Hops 2

R3#show ip route 150.1.124.4
Routing entry for 150.1.124.0/24
Known via "eigrp 1", distance 90, metric 20514560, type internal
Redistributing via eigrp 1
Last update from 150.1.13.1 on Serial1/2, 00:11:47 ago
Routing Descriptor Blocks:
* 150.1.23.2, from 150.1.23.2, 00:11:47 ago, via Serial1/3
Route metric is 20514560, traffic share count is 1
Total delay is 20100 microseconds, minimum bandwidth is 128 Kbit
Reliability 255/255, minimum MTU 1500 bytes
Loading 1/255, Hops 1
150.1.13.1, from 150.1.13.1, 00:11:47 ago, via Serial1/2
Route metric is 20514560, traffic share count is 1
Total delay is 20100 microseconds, minimum bandwidth is 128 Kbit
Reliability 255/255, minimum MTU 1500 bytes
Loading 1/255, Hops 1

R2#show ip route 150.1.124.4
Routing entry for 150.1.124.0/24
Known via "connected", distance 0, metric 0 (connected, via interface)
Redistributing via eigrp 1
Routing Descriptor Blocks:
* directly connected, via FastEthernet0/0
Route metric is 0, traffic share count is 1

Based on this output we can see that SW1 sees the source reachable via R3, which was the neighbor the multicast packet came from. R3 sees the source reachable via R1 and R2 due to equal cost load-balancing, with R2 as the neighbor that the multicast packet came from. Finally R2 sees the source as directly connected, which is where the multicast packet came from. This means that the RPF check is successful as traffic is transiting the network, hence we had a successful transmission.

Now let’s modify the routing table on R3 so that the route to R4 points to R1. Since the multicast packet on R3 comes from R2 while the unicast route will point back towards R1, there will be an RPF failure, and the packet transmission will not be successful. Again, note that R1 is not routing multicast in this topology.

R3#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R3(config)#ip route 150.1.124.4 255.255.255.255 serial1/2
R3(config)#end
R3#

R4#ping
Protocol [ip]:
Target IP address: 224.1.1.1
Repeat count [1]: 5
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]: y
Interface [All]: Ethernet0/0
Time to live [255]:
Source address: 150.1.124.4
Type of service [0]:
Set DF bit in IP header? [no]:
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:
Packet sent with a source address of 150.1.124.4
.....
R4#

We can now see that on R4 we do not receive a response back from the final destination… so where do we start troubleshooting? First we want to look at the first hop away from the source, which in this case is R2. On R2 we want to look in the multicast routing table to see if the incoming interface and the outgoing interface list are correctly populated. Ideally we will see the incoming interface as FastEthernet0/0, which is directly connected to the source, and the outgoing interface as Serial0/1, which is the downstream interface facing towards R3.

R2#show ip mroute 224.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report, Z - Multicast Tunnel
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.1), 00:07:27/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Serial0/1, Forward/Dense, 00:07:27/00:00:00
FastEthernet0/0, Forward/Dense, 00:07:27/00:00:00

(150.1.124.4, 224.1.1.1), 00:07:27/00:01:51, flags: T
Incoming interface: FastEthernet0/0, RPF nbr 0.0.0.0
Outgoing interface list:
Serial0/1, Forward/Dense, 00:04:46/00:00:00

This is the correct output we should see on R2. Two more verifications we can do are with the “show ip mroute count” command and the “debug ip mpacket” command. “show ip mroute count” will show all currently active multicast feeds, and whether packets are getting dropped:

R2#show ip mroute count
IP Multicast Statistics
3 routes using 1864 bytes of memory
2 groups, 0.50 average sources per group
Forwarding Counts: Pkt Count/Pkts per second/Avg Pkt Size/Kilobits per second
Other counts: Total/RPF failed/Other drops(OIF-null, rate-limit etc)

Group: 224.1.1.1, Source count: 1, Packets forwarded: 4, Packets received: 4
Source: 150.1.124.4/32, Forwarding: 4/1/100/0, Other: 4/0/0

Group: 224.0.1.40, Source count: 0, Packets forwarded: 0, Packets received: 0

“debug ip mpacket” will show the packet trace in real time, similar to the “debug ip packet” command for unicast packets. One caveat of using this verification is that only process switched traffic can be debugged. This means that we need to disable fast or CEF switching of multicast traffic by issuing the “no ip mroute-cache” command on the interfaces running PIM. Once this debug is enabled we’ll generate traffic from R4 again and we should see the packets correctly routed through R2.

R4#ping 224.1.1.1 repeat 100

Type escape sequence to abort.
Sending 100, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:
…

R2(config)#int f0/0
R2(config-if)#no ip mroute-cache
R2(config-if)#int s0/1
R2(config-if)#no ip mroute-cache
R2(config-if)#end
R2#debug ip mpacket
IP multicast packets debugging is on
R2#
IP(0): s=150.1.124.4 (FastEthernet0/0) d=224.1.1.1 (Serial0/1) id=231, prot=1, len=100(100), mforward
IP(0): s=150.1.124.4 (FastEthernet0/0) d=224.1.1.1 (Serial0/1) id=232, prot=1, len=100(100), mforward
IP(0): s=150.1.124.4 (FastEthernet0/0) d=224.1.1.1 (Serial0/1) id=233, prot=1, len=100(100), mforward
IP(0): s=150.1.124.4 (FastEthernet0/0) d=224.1.1.1 (Serial0/1) id=234, prot=1, len=100(100), mforward
IP(0): s=150.1.124.4 (FastEthernet0/0) d=224.1.1.1 (Serial0/1) id=235, prot=1, len=100(100), mforward
R2#undebug all

Now that we see that R2 is correctly routing the packets let’s look at all three of these verifications on R3.

R3#show ip mroute 224.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.1), 00:00:01/stopped, RP 0.0.0.0, flags: DC
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Serial1/3, Forward/Dense, 00:00:01/00:00:00
Ethernet0/0, Forward/Dense, 00:00:01/00:00:00

(150.1.124.4, 224.1.1.1), 00:00:01/00:02:58, flags:
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Ethernet0/0, Forward/Dense, 00:00:02/00:00:00
Serial1/3, Forward/Dense, 00:00:02/00:00:00

From R3’s show ip mroute output we can see that the incoming interface is listed as Null. This is an indication that for some reason R3 is not correctly routing the packets, and is instead dropping them as they are received. For more information let’s look at the “show ip mroute count” output.

R3#show ip mroute count
IP Multicast Statistics
3 routes using 2174 bytes of memory
2 groups, 0.50 average sources per group
Forwarding Counts: Pkt Count/Pkts(neg(-) = Drops) per second/Avg Pkt Size/Kilobits per second
Other counts: Total/RPF failed/Other drops(OIF-null, rate-limit etc)

Group: 224.1.1.1, Source count: 1, Packets forwarded: 0, Packets received: 15
Source: 150.1.124.4/32, Forwarding: 0/0/0/0, Other: 15/15/0

Group: 224.0.1.40, Source count: 0, Packets forwarded: 0, Packets received: 0

From this output we can see that packets for (150.1.124.4,224.1.1.1) are getting dropped, and specifically the reason they are getting dropped is because of RPF failure. This is seen from the “Other: 15/15/0” output, where the second field is RPF failed drops. For more detail let’s look at the packet trace.

R3#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R3(config)#int e0/0
R3(config-if)#no ip mroute-cache
R3(config-if)#int s1/3
R3(config-if)#no ip mroute-cache
R3(config-if)#end
R3#debug ip mpacket
IP multicast packets debugging is on
IP(0): s=150.1.124.4 (Serial1/3) d=224.1.1.1 id=309, ttl=253, prot=1, len=104(100), not RPF interface
IP(0): s=150.1.124.4 (Serial1/3) d=224.1.1.1 id=310, ttl=253, prot=1, len=104(100), not RPF interface
IP(0): s=150.1.124.4 (Serial1/3) d=224.1.1.1 id=311, ttl=253, prot=1, len=104(100), not RPF interface
IP(0): s=150.1.124.4 (Serial1/3) d=224.1.1.1 id=312, ttl=253, prot=1, len=104(100), not RPF interface
R3#undebug all
All possible debugging has been turned off

From this output we can clearly see that an RPF failure is occurring on R3. The reason why is that multicast packets are being received in Serial1/3, while the unicast route is pointing out Serial1/2. Now that we see the problem occurring there are a few different ways we can solve it.

First we can modify the unicast routing domain so that it conforms to the multicast routing domain. In our particular case this would be accomplished by removing the static route we configured on R3, or configuring a new more-preferred static route on R3 that points to R2 for 150.1.124.4.
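For completeness, that unicast-side fix would look roughly like this on R3; 150.1.23.2 is R2’s address on the Serial1/3 link in this topology, and the second line is only needed if a static route toward the source is still desired:

! Remove the static route that pulled the RPF path toward R1
no ip route 150.1.124.4 255.255.255.255 serial1/2
! Optionally, point a static route for the source toward R2 instead
ip route 150.1.124.4 255.255.255.255 150.1.23.2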

Secondly we could modify the multicast domain in order to override the RPF check. The simplest way to do this is with a static multicast route using the “ip mroute” command. Another way of doing this dynamically would be to configure Multicast BGP, which for the purposes of this example we will exclude due to its greater complexity.

The “ip mroute” statement is unlike the regular “ip route” statement in that it does not affect the actual traffic flow through the network. Instead, it affects which interfaces the router will accept multicast packets on. Configuring the statement “ip mroute 150.1.124.4 255.255.255.255 150.1.23.2” on R3 tells the router that a multicast packet received from the source 150.1.124.4 may be accepted on the interface where the neighbor 150.1.23.2 resides. In our particular case this means that even though the unicast route points out Serial1/2, it’s okay for the multicast packet to be received in Serial1/3. Let’s look at the effect of this:

R3#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R3(config)#ip mroute 150.1.124.4 255.255.255.255 150.1.23.2
R3(config)#end
R3#

R4#ping
Protocol [ip]:
Target IP address: 224.1.1.1
Repeat count [1]: 5
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]: y
Interface [All]: Ethernet0/0
Time to live [255]:
Source address: 150.1.124.4
Type of service [0]:
Set DF bit in IP header? [no]:
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:
Packet sent with a source address of 150.1.124.4

Reply to request 0 from 150.1.37.7, 32 ms
Reply to request 1 from 150.1.37.7, 32 ms
Reply to request 2 from 150.1.37.7, 32 ms
Reply to request 3 from 150.1.37.7, 32 ms
Reply to request 4 from 150.1.37.7, 32 ms
R4#

Note that the above solution of using the multicast static route does not affect the return path of unicast traffic from SW1 to R4. Therefore this solution allows for different traffic patterns for the unicast traffic and the multicast traffic.

