6.2. Multicast Testing

The SG shows a successful ping from R6 to the group. It didn't work for me, and historically it hasn't worked for others either. One gentleman gives a nice solution ("ip multicast multipath") in the old-forum thread linked below.

I would be nervous to use that solution in the lab, but it's a great idea for the real world.

http://forum.internetworkexpert.com/ubbthreads.php/ubb/showflat/Number/22909/page/2#Post22909

My fix to get successful pings was to make R3 prefer the paths toward VLAN 26 and the loopback over the OSPF-learned ones (with the distance command). Weird that R3's routing decision breaks R6's ability to ping...
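For reference, a rough sketch of that distance-based fix. This is a hypothetical reconstruction, not the exact config from my rack: the OSPF process ID is assumed to be 1, and the exact AD value is illustrative.

```
! On R3: raise the administrative distance of OSPF external routes
! above EIGRP external (170), so the EIGRP-learned paths toward
! VLAN 26 and R2's loopback win the routing table.
router ospf 1
 distance ospf external 171
```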

 

Comments

  • I am running into the same issue.

     

    R1#

    Routing entry for 132.1.6.0/24
      Known via "ospf 1", distance 110, metric 20, type extern 2, forward metric 64
      Last update from 132.1.0.2 on Serial0/0, 00:00:34 ago
      Routing Descriptor Blocks:
      * 132.1.0.2, from 150.1.2.2, 00:00:34 ago, via Serial0/0
          Route metric is 20, traffic share count is 1

     

  • It didn't work for me either..

     

    On R2, it doesn't know of any interested receivers; the outgoing interface list is Null:

    (132.2.6.6, 228.28.28.28), 00:03:30/00:02:29, flags: P
      Incoming interface: FastEthernet0/1, RPF nbr 132.2.26.6
      Outgoing interface list: Null

     

    I tried 'ip multicast multipath', but it didn't work.

    I also tried the interface command "ip pim nbma-mode", but that didn't work either..

     

     

  • I faced the same issue. I turned on 'debug ip pim' on R3 and was able to see that R1 sends its joins for the RP address 150.1.2.2 toward R3, not toward R2 as it should. Then I tweaked the EIGRP->OSPF redistribution on R2 in such a way that R1 began to prefer R2 to reach 150.1.2.2. Now I have:

    Rack1R2(config)#do sh ip mro 228.28.28.28

    (*, 228.28.28.28), 04:33:06/00:02:56, RP 150.1.2.2, flags: S
      Incoming interface: Null, RPF nbr 0.0.0.0
      Outgoing interface list:
        Serial0/0/0, 132.1.0.1, Forward/Sparse, 04:28:27/00:02:56

    Rack1R6(config-ext-nacl)#do p 228.28.28.28 so f0/0.6

    Type escape sequence to abort.
    Sending 1, 100-byte ICMP Echos to 228.28.28.28, timeout is 2 seconds:
    Packet sent with a source address of 132.1.6.6

    Reply to request 0 from 132.1.17.7, 52 ms
    Reply to request 0 from 132.1.17.7, 64 ms

    Regards.
  • Maaaan.. you are my hero!! [Y]

     

    I had already noticed this behavior (sending to R3 instead of R2), but didn't investigate much.

     

    So, one way is fixing the EIGRP->OSPF redistribution;

    another way is enabling PIM on the HDLC link between R2 and R3.

     

    The problem seems to be caused by the fact that, according to the unicast routing table, 150.1.2.2 is reachable via the HDLC link, BUT PIM is not enabled on that link. Never thought about this one :-) Thanks to you, NTllect.


    Can you verify that we are in sync?

     

    You solved one of my multicast mysteries!
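The "enable PIM on the HDLC link" option mentioned above is, as a sketch, just one command per side. The interface name here is an assumption, and as discussed in this thread, enabling PIM on an interface the task didn't specify can cost points in the lab:

```
! Assumed: Serial0/1 is the HDLC link between R2 and R3.
! Apply on both routers.
interface Serial0/1
 ip pim sparse-mode
```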

  • Mahmoud,



    I think enabling PIM on any interface other than those specified will cost you some points; such an approach should be considered a last resort, not a solution. Note too that I solved the redistribution part in a different way. We are in sync in that R3 knows 150.1.2.0 via EIGRP (AD 90) over the HDLC link, since R2 naturally advertised it via the network statement under EIGRP.
  • Riddle me this....

    After you've confirmed that R1 is learning 150.1.2.2 and 132.1.6.0 via R2, and that R3 is learning 150.1.2.2 via 132.1.23.2... set up a ping with a really long repeat count on R6 to 228.28.28.28. Then wait about 3 minutes, 16 seconds. The pings from R6 go bye-bye; the group gets pruned.

     

    How about just adding ip mroute 132.1.26.6 255.255.255.255 132.1.0.2 to R1?

  • Hi All,

    That's exactly what I did, and I chose 'ip mroute' on R1 over manipulating the redistribution on R2/R3, as that was complex enough already.

    Any objections? I didn't find any.

    Thanks,
    M

     

  • Bam and miroslaw: are you sure this worked for you? I'm waiting to hear from you.

     

    As far as I remember, in this scenario R6 is the multicast source and SW1 is listening to that group. So basically SW1 sends an IGMP join to R1, and R1 should tell R2's loopback IP (the RP), via PIM join messages, that it has a host wanting to receive that group's traffic.

    But the problem is that R1 is not sending the PIM Join toward R2's loopback address directly via R2, but via R3 instead. And what makes this complicated is that R3 reaches R2's loopback via its HDLC link (as preferred by the routing table), which doesn't have PIM enabled on it in the first place. So how can it continue the propagation of the PIM Join message to R2?

    If you go to R2 in this case and execute "sh ip mroute", the output will not show any outgoing interfaces for the desired multicast group. Why? R3 didn't propagate R1's PIM Join to R2, because R3's routing table has no PIM-enabled path toward R2's loopback. The only path in R3's routing table is the HDLC link, which doesn't have PIM. The FR link has PIM enabled, but R3 doesn't know that it can reach R2's loopback over it.

    So the solution is either enabling PIM on the HDLC link (and losing points in the CCIE lab), or (as NTllect said) tweaking the redistribution process so that R1 reaches R2's loopback directly via R2 instead of via R3. Another solution, which I didn't try, might be tweaking R3's routing table to reach R2's loopback via the FR link, where PIM is enabled, instead of the HDLC link (no PIM).

    "ip mroute" is a static RPF entry that solves issues related to received mpackets on an interface that failed their RPF test. So as far as I know, "ip mroute" won't solve this issue. But if it did work for you, then kindly tell me more details about it, I'm in disperate need of your wisdom!!

     


  • You are right. I was using a different redistribution scheme, where I redistributed 132.1.23.0/24 into EIGRP, making it an external route. This solves a lot of problems, specifically the reachability problem when the backup link is activated. But it changes the nature of the PIM problem.

    If you use the one from the solution guide then you do need to tweak the IGP to make certain that R1 prefers R2 as the next hop to 132.1.6.6.

    R1#q 132.1.6.6
    Routing entry for 132.1.6.0/24
      Known via "ospf 1", distance 110, metric 20, type extern 2, forward metric 64
      Last update from 132.1.0.3 on Serial0/0, 00:30:50 ago
      Routing Descriptor Blocks:
      * 132.1.0.3, from 150.1.3.3, 00:30:50 ago, via Serial0/0
          Route metric is 20, traffic share count is 1
        132.1.0.2, from 150.1.2.2, 00:30:50 ago, via Serial0/0
          Route metric is 20, traffic share count is 1

    The simplest approach is to increase the metric for redistributed routes on R3. This sort of 'indirect tuning' is a common theme throughout IE's guides. By indirect tuning, I mean changing parameters on the device where you don't want something to occur: in this case, tuning R3 as opposed to tuning the metric for 132.1.6.0 on R2.

    redistribute eigrp 10 metric 30 subnets

    The result....

    R1#q 132.1.6.6
    Routing entry for 132.1.6.0/24
      Known via "ospf 1", distance 110, metric 20, type extern 2, forward metric 64
      Last update from 132.1.0.2 on Serial0/0, 00:00:08 ago
      Routing Descriptor Blocks:
      * 132.1.0.2, from 150.1.2.2, 00:00:08 ago, via Serial0/0
          Route metric is 20, traffic share count is 1

    HTH!
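For completeness, the one-liner quoted above lives under the OSPF process on R3; in context it would look something like this (process and AS numbers taken from the thread):

```
! On R3: seed redistributed EIGRP routes with metric 30, worse than
! R2's default seed metric of 20, so R1 prefers R2's advertisement
! of 132.1.6.0/24.
router ospf 1
 redistribute eigrp 10 metric 30 subnets
```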

  • Mahmoud,

     

    I must warn you that it wasn't my Multicast SuperHero skills that led me to this solution. I had a choice of either changing the way the 132.1.6.0/24 network was redistributed into OSPF or messing around with Multicast, and I just started playing with Multicast.

     

    I started sending pings from R6's VLAN 6 interface to 228.28.28.28

     

    'debug ip mpacket' and 'show ip mroute' on R2 showed:

     

     

    *Mar  1 00:29:46.927: IP(0): s=132.1.6.6 (FastEthernet0/0) d=228.28.28.28 id=86, ttl=254, prot=1, len=114(100), mroute olist null

     

     

    (*, 228.28.28.28), 00:00:31/stopped, RP 150.1.2.2, flags: SP
      Incoming interface: Null, RPF nbr 0.0.0.0
      Outgoing interface list: Null

    (132.1.6.6, 228.28.28.28), 00:00:31/00:02:28, flags: PT
      Incoming interface: FastEthernet0/0, RPF nbr 132.1.26.6
      Outgoing interface list: Null

     

     

     

    While on R1, 'show ip mroute 228.28.28.28' followed by 'show ip rpf 132.1.6.0' gave the following:

     

    (*, 228.28.28.28), 00:01:19/00:03:28, RP 150.1.2.2, flags: S
      Incoming interface: Serial1/0, RPF nbr 132.1.0.3
      Outgoing interface list:
        FastEthernet0/0, Forward/Sparse, 00:01:00/00:03:28

    (132.1.6.6, 228.28.28.28), 00:00:59/00:02:30, flags:
      Incoming interface: Serial1/0, RPF nbr 132.1.0.3
      Outgoing interface list:
        FastEthernet0/0, Forward/Sparse, 00:00:59/00:03:29

    (132.1.26.6, 228.28.28.28), 00:01:19/00:02:10, flags:
      Incoming interface: Serial1/0, RPF nbr 132.1.0.3
      Outgoing interface list:
        FastEthernet0/0, Forward/Sparse, 00:01:19/00:03:28

     

    Rack1R1#sh ip rpf 132.1.6.0
    RPF information for ? (132.1.6.0)
      RPF interface: Serial1/0
      RPF neighbor: ? (132.1.0.3)
      RPF route/mask: 132.1.6.0/24
      RPF type: unicast (ospf 1)
      RPF recursion count: 0
      Doing distance-preferred lookups across tables

     

     

    I then did the following:

     

    Rack1R1#conf t
    Enter configuration commands, one per line.  End with CNTL/Z.
    Rack1R1(config)#ip mroute 132.1.6.6 255.255.255.255 132.1.0.2
    Rack1R1(config)#end
    *Mar  1 00:31:41.199: %SYS-5-CONFIG_I: Configured from console by console
    Rack1R1#sh ip rpf 132.1.6.0
    RPF information for ? (132.1.6.0)
      RPF interface: Serial1/0
      RPF neighbor: ? (132.1.0.2)
      RPF route/mask: 0.0.0.0/0
      RPF type: static
      RPF recursion count: 0
      Doing distance-preferred lookups across tables

     

     

    I agree with the previous posts about what 'ip mroute' does. I only did the task this way because, at the moment, I feel more confident with redistribution (which was another option for this task) than with Multicast and just wanted to experiment; since it worked, I wanted to share it as just another possible way of meeting the requirement.

    HTH

    M

     
