multicast ping reliability

Hi All,

I have now run into a strange issue with multicast ping several times, and I am wondering if anyone has witnessed it too and may have an explanation.

When doing a simple multicast ping "ping 224.6.6.6 re 10" (with "ip igmp join-group 224.6.6.6" configured on a receiver), an answer is received for the first ping, then all the subsequent ones time out. However, simply entering "no ip mroute-cache" on the interfaces of the source's direct neighbor (the first downstream router) fixes the issue.
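
For reference, the setup is roughly the following (interface names are placeholders, not from a specific lab):

! On the receiver, join the group on an interface so it answers multicast pings
interface FastEthernet0/0
 ip igmp join-group 224.6.6.6

Then from the source, "re 10" being short for "repeat 10":

ping 224.6.6.6 repeat 10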

Why would disabling fast switching/CEF for multicast traffic solve the issue? And what is the issue in the first place, since everything looks like it should work anyway?

 

Thanks

Comments

  • Are you using GNS3 / Dynamips or real hardware?



  • See the 5th post here:

    http://ieoc.com/forums/p/21563/170771.aspx

     

     

    If you are not using Dynamips, go through the registration process explained in the 5th post by Petr here:

    http://ieoc.com/forums/p/14965/129714.aspx

  • Hi ajCCIE2b, Hi David,

    Thank you very much for your replies.

    In this case, I am referring to real hardware, be it INE rack rentals or the equipment in the labs at work, with all sorts of hardware and IOS versions.

    And I have also encountered the problem with dense mode, where there is no registration process, so it can't be that either...

     

    Cheers

  • Take the case where you are using sparse mode with an RP. The fact that the first ping got through indicates that the mroute via the RP was successful. Try getting the router nearest to the receiver not to switch to the shortest-path tree, using the command ip pim spt-threshold infinity.





    In any case, look at the mroute from the receiver to the source before and after you enter the command no ip mroute-cache, as in the sketch below. Do this for both dense and sparse modes.
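
    A rough sketch of what that could look like on the router nearest the receiver (the group address is from the original post; the interface name is a placeholder):

    ! Global config: stay on the RP shared tree instead of joining the source tree
    ip pim spt-threshold infinity
    !
    ! Exec mode: check the forwarding state before the change
    show ip mroute 224.6.6.6
    !
    ! Disable multicast fast switching on the suspect interface, then re-check the mroute
    interface Serial0/1
     no ip mroute-cache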

  • Same issue on real hardware. The topology is as follows:

    R6 - multicast source

    SW3 - multicast receiver

     

    R6-----R1------R2 and R4 via hub-and-spoke frame relay

    R2---SW3

    R4---SW3   (R2, R4, and SW3 are all on the same Ethernet subnet)

     

    The RP is R1's frame relay interface (a rough config sketch is at the end of this post).

    The first ping goes through fine, then nothing, unless you disable mroute-cache on either R1's serial or Ethernet interface; then they all go through until it is re-enabled on both.

    Any thoughts?
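
    In case it helps anyone reproduce this, the RP setup on each router would be along these lines (the address is invented; substitute the IP of R1's frame relay interface):

    ! Static RP assignment pointing at R1's frame relay interface address
    ip pim rp-address 150.1.1.1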

     

  • If you put ip pim spt-threshold infinity on SW3, do you get the same behavior?

  • I went to try your suggestion today, but upon booting my equipment back up, everything worked, without setting the threshold to infinity and with mroute-cache on. Guess it was just one of those problems that's solved by a reboot. Since when did Cisco and Microsoft troubleshooting start following the same logic train? lol
