Task 6.3 R7 failed to ping 224.8.8.8

Hi,

Has anybody got this working?

I noticed that with

"neighbor 131.1.2.2 route-map SET_NEXT_HOP out

neighbor 131.1.4.4 route-map SET_NEXT_HOP out"

configured on R3 [Task 5.3], R2 and R4 no longer show Tunnel0 (the MDT tunnel) as the incoming interface, as shown below:

Rack1R4#sh ip mroute vrf 65001
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.8.8.8), 00:20:55/00:02:35, RP 10.1.48.4, flags: SJC
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/1, Forward/Sparse, 00:20:55/00:02:35

(10.1.27.2, 224.8.8.8), 00:02:39/00:03:27, flags:
  Incoming interface: Null, RPF nbr 131.1.3.3
  Outgoing interface list:
    Ethernet0/1, Forward/Sparse, 00:02:39/00:02:50

(*, 224.0.1.40), 00:29:50/00:03:01, RP 10.1.48.4, flags: SJCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/1, Forward/Sparse, 00:29:50/00:03:01

Rack1R4#

 

Without the NEXT_HOP route-map, the incoming interface shows as Tunnel0. Is this the root cause, or am I off track?

Rack1R4#sh ip mroute vrf 65001 224.8.8.8
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.8.8.8), 00:04:27/00:03:02, RP 10.1.48.4, flags: SJC
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/1, Forward/Sparse, 00:04:25/00:03:02

(10.1.7.7, 224.8.8.8), 00:00:54/00:03:25, flags: MT
  Incoming interface: Tunnel0, RPF nbr 131.1.2.2
  Outgoing interface list:
    Ethernet0/1, Forward/Sparse, 00:00:54/00:03:09

(10.1.27.2, 224.8.8.8), 00:03:22/00:02:34, flags: M
  Incoming interface: Tunnel0, RPF nbr 131.1.2.2
  Outgoing interface list:
    Ethernet0/1, Forward/Sparse, 00:03:22/00:03:09

(10.1.27.7, 224.8.8.8), 00:00:54/00:03:25, flags: MT
  Incoming interface: Tunnel0, RPF nbr 131.1.2.2
  Outgoing interface list:
    Ethernet0/1, Forward/Sparse, 00:00:54/00:03:09

Rack1R4#
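
For context, the SET_NEXT_HOP route-map from Task 5.3 presumably rewrites the eBGP next-hop to R3's own address. A minimal sketch, assuming the address 131.1.3.3 (which matches the RPF neighbor in the first output above - the actual value in the task may differ):

route-map SET_NEXT_HOP permit 10
 set ip next-hop 131.1.3.3

Since the RPF lookup for a remote VPN source follows the BGP next-hop of the VPNv4 route, rewriting that next-hop is what moves the incoming interface away from Tunnel0.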

Comments

  • Yes, you can remove the SET_NEXT_HOP route-map on R3 and instead redistribute a static /32 route to 204.12.X.6 into the IGP, effectively generating an LDP label for the original eBGP next-hop.


    ip route 204.12.1.6 255.255.255.255 Ethernet0/1
    !
    router isis
     redistribute connected route-map CONNECTED_TO_ISIS level-1-2
    !
    ip prefix-list HOST_ROUTE_TO_R6 seq 5 permit 204.12.1.6/32
    !
    route-map CONNECTED_TO_ISIS permit 10
     match ip address prefix-list HOST_ROUTE_TO_R6

    Still, for 6.3 to work I believe you would need a static mroute fixup for the next-hop address 131.X.2.2, something like:

    ip mroute 131.1.2.2 255.255.255.255 Serial1/3
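
    After adding the static mroute, a quick sanity check with the standard IOS RPF command (run on the router where the RPF lookup was failing) should now resolve via Serial1/3, assuming the static mroute above is the intended path:

    Rack1R4#show ip rpf 131.1.2.2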

  • Hi Petr, I am having trouble with this task. VRF 65001 on R2 sees a PIM neighborship with R7 and R4 via Tunnel0. VRF 65001 on R4 sees a PIM neighborship with R8 and R2 via Tunnel0. MSDP is up between R2 and R4.

    Most of the time I get a reply from the first ping, but every ping after that fails. Any thoughts?

  • I could not get this to work either. MSDP is up in VRF 65001. R4 sees R7 as an active source in its MSDP sa-cache. PIM neighbors are up on the MDT tunnel interfaces between R2 and R4. For some reason, when R8 joins the 224.8.8.8 group, R2 never gets notified - there is no mroute entry for it in R2's VRF mroute table. I tried shutting down the link between R3 and R1 to see if it was an RPF failure (debugs on all devices showed none after my static mroutes, but I thought I would try anyway), with no luck.

    I know this wasn't part of the lab exercise, but I did get it to work when I took down MSDP and had R8 and R4 point to R2's VRF interface as the RP.

     

    Again, I think the problem was that R4 was never notifying R2 about the new IGMP request for 224.8.8.8 from R8. Perhaps a bug in my IOS... unfortunately, that would not surprise me.
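
    For anyone who wants to try the RP workaround mentioned above, a sketch, assuming R2's VRF interface address is 10.1.27.2 (the RP address that appears in the outputs elsewhere in this thread; adjust for your rack):

    On R4:
     ip pim vrf 65001 rp-address 10.1.27.2
    On R8:
     ip pim rp-address 10.1.27.2

    With R8 and R4 both pointing at the same RP inside the customer network, the source and receiver share one RP domain, so MSDP between the PE VRFs is no longer needed.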

  • Hi,

     

    Exact same situation for me.

    The MSDP sa-cache on R4 looks fine:

    Rack1R4#sh ip msdp vrf 65001 sa-cache
    MSDP Source-Active Cache - 1 entries
    (10.1.27.7, 224.8.8.8), RP 10.1.27.2, BGP/AS 65001, 00:15:04/00:02:57, Peer 10.1.27.2

     

    However, R2 has a Null OIL for group 224.8.8.8:

     


    Rack1R2#sh ip mroute vrf 65001
    IP Multicast Routing Table
    Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
           L - Local, P - Pruned, R - RP-bit set, F - Register flag,
           T - SPT-bit set, J - Join SPT, M - MSDP created entry,
           X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
           U - URD, I - Received Source Specific Host Report, Z - Multicast Tunnel
           Y - Joined MDT-data group, y - Sending to MDT-data group
    Outgoing interface flags: H - Hardware switched, A - Assert winner
     Timers: Uptime/Expires
     Interface state: Interface, Next-Hop or VCD, State/Mode

    (*, 224.8.8.8), 00:00:08/stopped, RP 10.1.27.2, flags: SP
      Incoming interface: Null, RPF nbr 0.0.0.0
      Outgoing interface list: Null

    (10.1.27.7, 224.8.8.8), 00:00:08/00:02:59, flags: PTA
      Incoming interface: FastEthernet0/0, RPF nbr 0.0.0.0
      Outgoing interface list: Null

    (*, 224.0.1.40), 00:22:45/00:02:08, RP 10.1.27.2, flags: SJCL
      Incoming interface: Null, RPF nbr 0.0.0.0
      Outgoing interface list:
        FastEthernet0/0, Forward/Sparse, 00:22:44/00:02:08

     

    If I wait a while and try the ping from R7, the first one succeeds; every subsequent one fails until I wait again. As you say, R2 never gets notified about R8's join, even though R4 and R2 are both the DR on their respective segments.

    This takes me back to the bad old days of trying to get multicast working under weird scenarios for the R&S CCIE - I think IOS multicast is just flaky/broken in these cases.

  • I got it working between R7 and R8, but R2 is only able to ping 224.8.8.8 once. For some reason, even if I issue

    "ping vrf 65001 ip 224.8.8.8 repeat 10 source fa1/0", it still seems to use its loopback interface: when R4 receives the packet it is sourced from 131.1.2.2 and gets dropped by the RPF check (the next hop is not Tunnel0). I've run into problems between 3600s and 7200s in MVPN before as well.
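
    One way to confirm the RPF drop described above is to check, on R4, how the packet's source address resolves inside the VRF (standard IOS command; the addresses are the ones from this post):

    Rack1R4#show ip rpf vrf 65001 131.1.2.2

    If the RPF interface shown is anything other than Tunnel0, a multicast packet sourced from 131.1.2.2 will fail the RPF check, matching the behavior seen here.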

  • I searched some old forum archives on the site for this and got it working with the extended ping solution (first modify the route-map on R3 so it does not set the next-hop for 10.1.27.0/24):

    Rack1R2#ping vrf 65001
    Protocol [ip]:
    Target IP address: 224.8.8.8
    Repeat count [1]: 1000
    Datagram size [100]:
    Timeout in seconds [2]: 1
    Extended commands [n]: y
    Interface [All]: tunnel0
    Time to live [255]:
    Source address: 10.1.27.2
    Type of service [0]:
    Set DF bit in IP header? [no]:
    Validate reply data? [no]:
    Data pattern [0xABCD]:
    Loose, Strict, Record, Timestamp, Verbose[none]: v
    Loose, Strict, Record, Timestamp, Verbose[V]:
    Sweep range of sizes [n]:
    Type escape sequence to abort.
    Sending 1000, 100-byte ICMP Echos to 224.8.8.8, timeout is 1 seconds:
    Packet sent with a source address of 10.1.27.2
    Reply to request 0 from 10.1.48.8, 96 ms
    Reply to request 1 from 10.1.48.8, 68 ms
    Reply to request 2 from 10.1.48.8, 16 ms
    Reply to request 3 from 10.1.48.8, 56 ms
    Reply to request 4 from 10.1.48.8, 60 ms
    Reply to request 5 from 10.1.48.8, 8 ms
    Reply to request 6 from 10.1.48.8, 16 ms
    Reply to request 7 from 10.1.48.8, 56 ms
    Reply to request 8 from 10.1.48.8, 52 ms
    Reply to request 9 from 10.1.48.8, 20 ms
    Reply to request 10 from 10.1.48.8, 12 ms

     

    Rack1R8#
    *Mar  1 00:46:21.491: ICMP: echo reply sent, src 10.1.48.8, dst 10.1.27.2
    *Mar  1 00:46:22.443: ICMP: echo reply sent, src 10.1.48.8, dst 10.1.27.2
    Rack1R8#
    *Mar  1 00:46:23.463: ICMP: echo reply sent, src 10.1.48.8, dst 10.1.27.2
    Rack1R8#
    *Mar  1 00:46:24.471: ICMP: echo reply sent, src 10.1.48.8, dst 10.1.27.2
    *Mar  1 00:46:25.467: ICMP: echo reply sent, src 10.1.48.8, dst 10.1.27.2
    Rack1R8#
    *Mar  1 00:46:26.487: ICMP: echo reply sent, src 10.1.48.8, dst 10.1.27.2
    *Mar  1 00:46:27.463: ICMP: echo reply sent, src 10.1.48.8, dst 10.1.27.2

    BR

    Orestis
