Multicast over Frame Relay

 

Topology:

              R5----Vlan 58----SW2 (226.1.1.1)
               |
               |
        ---------------
       |               |
       |               |
      R3               R2

R5 is the hub; R3 and R2 are spokes. The group 226.1.1.1 is joined on VLAN 58 (SW2).

I am using Auto-RP in this topology, and every router is configured with "ip pim autorp listener" for the 224.0.1.39 and 224.0.1.40 groups. Loopback0 is configured as the Auto-RP announce source on R5:

ip pim send-rp-announce Loopback0 scope 255
ip pim send-rp-discovery Loopback0 scope 255

The "ip pim nbma-mode" command is configured on R5's S1/0 interface.

Also, all interfaces are configured with "ip pim sparse-mode"; there is no dense mode in this topology.
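
Putting the description above together, the intended setup looks like this (a sketch; all commands are taken from the description, interface names per the topology):

On R5 (hub):

ip pim send-rp-announce Loopback0 scope 255
ip pim send-rp-discovery Loopback0 scope 255
!
interface Serial1/0
 ip pim sparse-mode
 ip pim nbma-mode

On every router:

ip pim autorp listener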

I can ping 226.1.1.1 from any spoke; however, there seems to be an issue when I debug the traffic on router 5.

I have configured an access list to restrict the debug ("access-list 100 permit 2 any any", matching IP protocol 2) and debug only that traffic. Below is the debug output from R5:

Rack1R5#
IP: s=150.1.5.5 (local), d=224.0.1.40 (Loopback0), len 28, sending broad/multicast, proto=2
IP: s=150.1.5.5 (Loopback0), d=224.0.1.40, len 28, rcvd 0, proto=2
IP: s=155.1.58.8 (Ethernet0/0), d=224.0.1.40, len 28, rcvd 0, proto=2
Rack1R5#
IP: s=155.1.0.3 (Serial1/0), d=224.0.0.1, len 28, rcvd 0, proto=2
Rack1R5#
IP: s=155.1.58.8 (Ethernet0/0), d=226.1.1.1, len 28, rcvd 0, proto=2
Rack1R5#
IP: s=155.1.0.5 (local), d=224.0.1.39 (Serial1/0), len 28, sending broad/multicast, proto=2
IP: s=155.1.0.5 (local), d=224.0.1.39 (Serial1/0), len 28, encapsulation failed, proto=2

Rack1R5#un all

The encapsulation fails for 224.0.1.39. Now, if I check the RP mapping on R3 or R2 (spoke), I see:

 Rack1R3#show ip pim rp map
Rack1R3#show ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4
  RP 150.1.5.5 (?), v2v1
    Info source: 150.1.5.5 (?), elected via Auto-RP
         Uptime: 00:52:28, expires: 00:02:10
Rack1R3#
Rack1R3#show ip pim rp

Note: there is no group (226.1.1.1) shown on R3.

On SW2 the same command shows the following output:

Rack1SW2#show ip pim rp ma
Rack1SW2#show ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4
  RP 150.1.5.5 (?), v2v1
    Info source: 150.1.5.5 (?), elected via Auto-RP
         Uptime: 00:54:59, expires: 00:02:40
Rack1SW2#
Rack1SW2#show ip pim rp
Group: 226.1.1.1, RP: 150.1.5.5, v2, v1, uptime 00:55:03, expires 00:02:36
Rack1SW2#

Now, if you look at the debug on R5, you will find no encapsulation failure for the E0/0 interface, which is connected to SW2. So this is the main reason why SW2 has a group mapping for 226.1.1.1 but R3 and R2 do not. Why is that?

Rack1R5#
IP: s=155.1.58.5 (local), d=224.0.0.1 (Ethernet0/0), len 28, sending broad/multicast, proto=2
IP: s=150.1.5.5 (local), d=224.0.0.1 (Loopback0), len 28, sending broad/multicast, proto=2
IP: s=150.1.5.5 (Loopback0), d=224.0.0.1, len 28, rcvd 0, proto=2
Rack1R5#
IP: s=155.1.58.5 (local), d=224.0.1.39 (Ethernet0/0), len 28, sending broad/multicast, proto=2
IP: s=150.1.5.5 (local), d=224.0.1.39 (Loopback0), len 28, sending broad/multicast, proto=2

Can any of you please tell me why the encapsulation failure happens on the Frame Relay interface?

Regards

Amit Chopra

 

                                                          

Comments

  • have you used the "ip pim nbma-mode" command on your multipoint serial interface on R5?

  • on the Frame Relay interface this problem occurs because there is no frame-relay map configured on the interface for 224.0.1.39, and it is not a problem unless you are using more than one RP in your topology...

    the same address is forwarded on the Ethernet interface because you don't need any mapping there, as it is a broadcast media type...

    Peace

  • on the Frame Relay interface this problem occurs because there is no frame-relay map configured on the interface for 224.0.1.39, and it is not a problem unless you are using more than one RP in your topology...

    One way to fix this is to statically map groups 224.0.1.39 and 224.0.1.40 (not 224.0.0.x) on the spokes with "ip pim rp-address x.x.x.x ACL-number" so that they are able to receive discovery messages.
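
    A sketch of that static mapping on a spoke, assuming the RP address 150.1.5.5 used in this topology and access-list number 10 (both hypothetical choices):

    access-list 10 permit 224.0.1.39
    access-list 10 permit 224.0.1.40
    !
    ip pim rp-address 150.1.5.5 10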

  • Hi Jent

    I have configured the manual RP on R3 as per your suggestion, but it is not working. Below is the configuration:

    Rack1R3#show access-lists
    Standard IP access list 1
        10 permit 224.0.1.39
        20 permit 224.0.1.40
        30 permit 226.1.1.1

    ip pim rp-address 150.1.5.5 1

    Rack1R3#
    Rack1R3#show ip pim rp

    Rack1R3#

  • Rack1R3#show ip pim rp

    Rack1R3#

    Are you sure you have enabled PIM at the interface level? Is 150.1.5.5 reachable through unicast routing?

    Also make sure that you have the frame-relay map configured, as it is not a point-to-point interface. Remember to include the "broadcast" keyword at the end. The spokes don't need the mapping since they are probably running their circuits as point-to-point subinterfaces?

  • Hi Jent - Did that but same issue.

    On R5 config:

    interface Serial1/0
     ip address 155.1.0.5 255.255.255.0
     ip pim nbma-mode
     ip pim sparse-dense-mode
     encapsulation frame-relay
     no ip mroute-cache
     serial restart-delay 0
     no dce-terminal-timing-enable
     frame-relay map ip 224.0.1.39 503 broadcast
     frame-relay map ip 224.0.1.40 503 broadcast
     frame-relay map ip 155.1.0.1 501 broadcast
     frame-relay map ip 155.1.0.2 502 broadcast
     frame-relay map ip 155.1.0.3 503 broadcast
     frame-relay map ip 155.1.0.4 504 broadcast
     frame-relay map ip 155.1.0.5 503 broadcast
     no frame-relay inverse-arp

    interface Loopback0
     ip address 150.1.5.5 255.255.255.0
     ip pim sparse-mode
     ip ospf network point-to-point
    end

    The "ip pim autorp listener" command is configured on both R3 and R5.

    On R3

    Rack1R3#show ip route 150.1.5.5
    Routing entry for 150.1.5.0/24
      Known via "ospf 1", distance 110, metric 65, type intra area
      Last update from 155.1.0.5 on Serial1/0, 16:54:25 ago
      Routing Descriptor Blocks:
      * 155.1.0.5, from 150.1.5.5, 16:54:25 ago, via Serial1/0
          Route metric is 65, traffic share count is 1

    So R3 can reach R5's Loopback0 over the Frame Relay interface.

    Rack1R3#show ip rpf 150.1.5.5
    RPF information for ? (150.1.5.5)
      RPF interface: Serial1/0
      RPF neighbor: ? (155.1.0.5)
      RPF route/mask: 150.1.5.0/24
      RPF type: unicast (ospf 1)
      RPF recursion count: 0
      Doing distance-preferred lookups across tables

    Rack1R3#show ip mroute
    IP Multicast Routing Table
    Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
           L - Local, P - Pruned, R - RP-bit set, F - Register flag,
           T - SPT-bit set, J - Join SPT, M - MSDP created entry,
           X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
           U - URD, I - Received Source Specific Host Report,
           Z - Multicast Tunnel, z - MDT-data group sender,
           Y - Joined MDT-data group, y - Sending to MDT-data group
    Outgoing interface flags: H - Hardware switched, A - Assert winner
     Timers: Uptime/Expires
     Interface state: Interface, Next-Hop or VCD, State/Mode

    (*, 224.0.1.39), 21:48:51/stopped, RP 0.0.0.0, flags: DC
      Incoming interface: Null, RPF nbr 0.0.0.0
      Outgoing interface list:
        Serial1/0, Forward/Sparse-Dense, 21:48:51/00:00:00

    (150.1.5.5, 224.0.1.39), 00:02:51/00:00:09, flags: PT
      Incoming interface: Serial1/0, RPF nbr 155.1.0.5
      Outgoing interface list: Null

    (*, 224.0.1.40), 21:49:00/stopped, RP 0.0.0.0, flags: DCL
      Incoming interface: Null, RPF nbr 0.0.0.0
      Outgoing interface list:
        Serial1/0, Forward/Sparse-Dense, 21:49:00/00:00:00

    (150.1.5.5, 224.0.1.40), 21:48:54/00:02:43, flags: PLT
      Incoming interface: Serial1/0, RPF nbr 155.1.0.5
     Outgoing interface list: Null

    Rack1R3#show ip pim rp

    Rack1R3#show ip pim rp

    Regards

    Amit

     

     

  •  frame-relay map ip 224.0.1.39 503 broadcast
     frame-relay map ip 224.0.1.40 503 broadcast

    I don't know if those are causing the problem, but at the very least they are unnecessary; "ip pim nbma-mode" does the trick for that part. Please remove them and try again. As you can see, R3 is not receiving the discovery messages sent by R5:

    (*, 224.0.1.40), 21:49:00/stopped, RP 0.0.0.0, flags: DCL
      Incoming interface: Null, RPF nbr 0.0.0.0
      Outgoing interface list:
        Serial1/0, Forward/Sparse-Dense, 21:49:00/00:00:00

    Actually the OIL should be Null and the incoming interface should be Serial1/0. I think the problem is caused by your extra frame-relay map statements.

     

  • I have removed the additional config, but again no luck so far:

    Rack1R5#sh run int s1/0
    Building configuration...

    Current configuration : 457 bytes
    !
    interface Serial1/0
     ip address 155.1.0.5 255.255.255.0
     ip pim nbma-mode
     ip pim sparse-dense-mode
     encapsulation frame-relay
     no ip mroute-cache
     serial restart-delay 0
     no dce-terminal-timing-enable
     frame-relay map ip 155.1.0.1 501 broadcast
     frame-relay map ip 155.1.0.2 502 broadcast
     frame-relay map ip 155.1.0.3 503 broadcast
     frame-relay map ip 155.1.0.4 504 broadcast
     frame-relay map ip 155.1.0.5 503 broadcast
     no frame-relay inverse-arp
    end

    Rack1R3#sh run int s1/0
    Building configuration...

    Current configuration : 413 bytes
    !
    interface Serial1/0
     ip address 155.1.0.3 255.255.255.0
     ip pim sparse-mode
     encapsulation frame-relay
     ip ospf priority 0
     no ip mroute-cache
     serial restart-delay 0
     no dce-terminal-timing-enable
     frame-relay map ip 155.1.0.1 305
     frame-relay map ip 155.1.0.2 305
     frame-relay map ip 155.1.0.3 305
     frame-relay map ip 155.1.0.4 305
     frame-relay map ip 155.1.0.5 305 broadcast
     no frame-relay inverse-arp
    end

    Rack1R3#show ip pim ne
    Rack1R3#show ip pim neighbor
    PIM Neighbor Table
    Neighbor          Interface                Uptime/Expires    Ver   DR
    Address                                                            Prio/Mode
    155.1.0.5         Serial1/0                00:00:16/00:01:28 v2    1 / DR S
    Rack1R3#

    Rack1R3#show ip mro
    Rack1R3#show ip mroute
    IP Multicast Routing Table
    Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
           L - Local, P - Pruned, R - RP-bit set, F - Register flag,
           T - SPT-bit set, J - Join SPT, M - MSDP created entry,
           X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
           U - URD, I - Received Source Specific Host Report,
           Z - Multicast Tunnel, z - MDT-data group sender,
           Y - Joined MDT-data group, y - Sending to MDT-data group
    Outgoing interface flags: H - Hardware switched, A - Assert winner
     Timers: Uptime/Expires
     Interface state: Interface, Next-Hop or VCD, State/Mode

    (*, 224.0.1.39), 00:00:52/stopped, RP 0.0.0.0, flags: DP
      Incoming interface: Null, RPF nbr 0.0.0.0
      Outgoing interface list: Null

    (150.1.5.5, 224.0.1.39), 00:00:00/00:02:59, flags: PT
      Incoming interface: Serial1/0, RPF nbr 155.1.0.5
      Outgoing interface list: Null

    (*, 224.0.1.40), 00:00:58/stopped, RP 0.0.0.0, flags: DPL
      Incoming interface: Null, RPF nbr 0.0.0.0
      Outgoing interface list: Null

    (150.1.5.5, 224.0.1.40), 00:00:32/00:02:28, flags: PLT
      Incoming interface: Serial1/0, RPF nbr 155.1.0.5
      Outgoing interface list: Null

    Now it is learning both groups; however, no group shows up for 226.1.1.1.

    Rack1R3#ping 226.1.1.1 re 4

    Type escape sequence to abort.
    Sending 4, 100-byte ICMP Echos to 226.1.1.1, timeout is 2 seconds:

    Reply to request 0 from 155.1.58.8, 152 ms
    Reply to request 1 from 155.1.58.8, 148 ms
    Reply to request 2 from 155.1.58.8, 176 ms
    Reply to request 3 from 155.1.58.8, 204 ms

    Rack1R3#show ip pim rp

    Rack1R3#show ip pim rp

    Rack1R3#show ip ro
    Rack1R3#show ip route 155.1.58.8
    Routing entry for 155.1.58.0/24
      Known via "ospf 1", distance 110, metric 74, type inter area
      Last update from 155.1.0.5 on Serial1/0, 00:02:19 ago
      Routing Descriptor Blocks:
      * 155.1.0.5, from 150.1.5.5, 00:02:19 ago, via Serial1/0
          Route metric is 74, traffic share count is 1

    Rack1R3#show ip route 150.1.5.5
    Routing entry for 150.1.5.0/24
      Known via "ospf 1", distance 110, metric 65, type intra area
      Last update from 155.1.0.5 on Serial1/0, 00:02:29 ago
      Routing Descriptor Blocks:
      * 155.1.0.5, from 150.1.5.5, 00:02:29 ago, via Serial1/0
          Route metric is 65, traffic share count is 1

    Regards

    Amit Chopra

     

     

     

  • (*, 224.0.1.40), 00:00:58/stopped, RP 0.0.0.0, flags: DPL
      Incoming interface: Null, RPF nbr 0.0.0.0
      Outgoing interface list: Null

    (*, 224.0.1.39), 00:00:52/stopped, RP 0.0.0.0, flags: DP
      Incoming interface: Null, RPF nbr 0.0.0.0
      Outgoing interface list: Null

    You said you configured a static RP for groups 224.0.1.39 and 224.0.1.40, but you still don't have an RP for them and both groups are pruned. Check your "ip pim rp-address <RP> <access-list>" configuration. They also show dense mode ("D" flag), which means they have no RP. A group will go into sparse mode once the RP is properly configured/found.

  • Actually, I have removed the static configuration because it was not solving the issue. R3 learns the RP via Auto-RP. If you want, I can configure the static RP again?

    Rack1R3#show ip pim rp mapping
    PIM Group-to-RP Mappings

    Group(s) 224.0.0.0/4
      RP 150.1.5.5 (?), v2v1
        Info source: 150.1.5.5 (?), elected via Auto-RP
             Uptime: 03:28:09, expires: 00:02:29

    Rack1R3#show ip igmp groups

    IGMP Connected Group Membership
    Group Address    Interface                Uptime    Expires   Last Reporter
    224.0.1.39       Serial1/0                03:35:38  00:01:57  155.1.0.5
    224.0.1.40       Serial1/0                03:35:04  00:02:05  155.1.0.3

     

  • You will have to have groups 224.0.1.39 and 224.0.1.40 in sparse mode in order to receive the discovery information. Define R5 as an RP statically with an access list (permit 224.0.1.39 and 224.0.1.40) and they should go sparse.

     

  • Done, see the config:

    On R5

    Rack1R5#show access-lists
    Standard IP access list 1
        10 permit 224.0.1.39
        20 permit 224.0.1.40

    Rack1R5#show ip pim rp mapping
    PIM Group-to-RP Mappings
    This system is an RP (Auto-RP)
    This system is an RP-mapping agent (Loopback0)

    Group(s) 224.0.0.0/4
      RP 150.1.5.5 (?), v2v1
        Info source: 150.1.5.5 (?), elected via Auto-RP
             Uptime: 1d02h, expires: 00:02:37
    Acl: 1, Static-Override
        RP: 150.1.5.5 (?)

    Rack1R5#show ip pim rp
    Group: 226.1.1.1, RP: 150.1.5.5, v2, v1, next RP-reachable in 00:00:47
    Rack1R5#

    On R3

    Rack1R3#show ip pim rp mapping
    PIM Group-to-RP Mappings

    Group(s) 224.0.0.0/4
      RP 150.1.5.5 (?), v2v1
        Info source: 150.1.5.5 (?), elected via Auto-RP
             Uptime: 04:02:28, expires: 00:01:59
    Acl: 1, Static-Override
        RP: 150.1.5.5 (?)

    However, the result is the same:

    Rack1R3#show ip pim rp

    Rack1R3#show ip pim rp

    Rack1R3#

     

     

  • Guys,

    well, I think I have found the reason, if I am right:

    The IGMP register/join information is encapsulated and registered to the RP; the middle/transit routers do not know the group mapping. In this case SW2 has joined 226.1.1.1 and registered itself with R5 (the RP). Now only SW2 and R5 know the group mapping for 226.1.1.1, but not R3 or any other router.

    IGMP information is registered only with the first-hop router it is directly connected to; the RP also knows this information because the PIM router (the one the client's LAN segment connects to) sends the (*,G) entry to the RP for registration.

    For a broadcast domain / same LAN segment:

    The other thing is that IGMP communication happens on the multicast addresses 224.0.0.1 and 224.0.0.2 (leave information). 224.0.0.1 is a link-local address, so R5 and SW2 know that someone has joined group 226.1.1.1. Both R5 and SW2 are connected to VLAN 58, so if I place another router, say SW4, in VLAN 58, it will also hear the join information.

    Do you agree with this?

     

  • that's certainly not the problem here.

    I just noticed you are missing "ip ospf network point-to-multipoint" under your R5 Serial1/0 interface. Add that and see what happens.

     

     

  • Nope, Jent. The default OSPF network type on a Frame Relay network is non-broadcast; the connection between R5 (hub) and R3 and R2 is not point-to-point or point-to-multipoint.

    R5 is the DR for this segment.

    Here is the OSPF adj :

    On R5

    Rack1R5#show ip ospf ne

    Neighbor ID     Pri   State           Dead Time   Address         Interface
    150.1.3.3         0   FULL/DROTHER    00:01:32  155.1.0.3       Serial1/0
    150.1.8.8         1   FULL/DR         00:00:34    155.1.58.8      Ethernet0/0
    Rack1R5#

    On R3

    Rack1R3#show ip ospf ne

    Neighbor ID     Pri   State           Dead Time   Address         Interface
    150.1.5.5         1   FULL/DR         00:01:58    155.1.0.5       Serial1/0
    150.1.7.7         1   FULL/DR         00:00:33    155.1.37.7      Ethernet0/0

    Also, I think what I said is correct, because I configured these things to validate the point. You can also test the same topology on your own rack or an IE rack.

    Regards

    Amit Chopra

  • Nope, Jent. The default OSPF network type on a Frame Relay network is non-broadcast; the connection between R5 (hub) and R3 and R2 is not point-to-point or point-to-multipoint.

    In my opinion it is exactly point-to-multipoint: from one point (R5's physical serial interface) to multiple endpoints (the spoke subinterfaces on R2 and R3). Recall that you have used "frame-relay map ip <ip-address> <dlci> BROADCAST", which actually makes the connection to that neighbour broadcast-enabled! It is not a non-broadcast network anymore once you commit that command. Also remember that you have to use "ip ospf network point-to-multipoint" on BOTH ends, so R2 and R3 need that command under their subinterfaces as well. If you do everything correctly, I don't think a DR election should occur at all.

  • Hi guys, I have also encountered something similar.

    The broadcast keyword provides two functions: it forwards broadcasts when multicasting is not enabled, and it simplifies the configuration of OSPF for nonbroadcast networks that will use Frame Relay.

    "The OSPF broadcast mechanism assumes that IP class D addresses are never used for regular traffic over Frame Relay."

     

    All of this is from the command reference, so the broadcast keyword only interferes with multicast.

    Change the OSPF network type to point-to-multipoint and remove the broadcast keyword from the frame-relay mapping.
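
    A sketch of that change on the hub, assuming the addresses and DLCIs shown earlier in this thread; note that OSPF point-to-multipoint non-broadcast does not replicate hellos, so the neighbor has to be defined statically:

    interface Serial1/0
     ip ospf network point-to-multipoint non-broadcast
     no frame-relay map ip 155.1.0.3 503 broadcast
     frame-relay map ip 155.1.0.3 503
    !
    router ospf 1
     neighbor 155.1.0.3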

  •  

    Jent and dovev: due to a busy day I have not yet configured the OSPF network type you mentioned, but it seems you are right. One more important thing I have figured out is that 224.0.1.39 has been pruned on R5, which is the hub and the Auto-RP:

    Here is the output:

    Rack1R5#show ip mroute 224.0.1.39
    IP Multicast Routing Table
    Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
           L - Local, P - Pruned, R - RP-bit set, F - Register flag,
           T - SPT-bit set, J - Join SPT, M - MSDP created entry,
           X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
           U - URD, I - Received Source Specific Host Report,
           Z - Multicast Tunnel, z - MDT-data group sender,
           Y - Joined MDT-data group, y - Sending to MDT-data group
    Outgoing interface flags: H - Hardware switched, A - Assert winner
     Timers: Uptime/Expires
     Interface state: Interface, Next-Hop or VCD, State/Mode

    (*, 224.0.1.39), 02:46:21/stopped, RP 0.0.0.0, flags: DCL
      Incoming interface: Null, RPF nbr 0.0.0.0
      Outgoing interface list:
        Serial1/1, Forward/Sparse, 02:46:14/00:00:00
        Loopback0, Forward/Sparse, 02:46:21/00:00:00
        Ethernet0/0, Forward/Sparse, 02:46:21/00:00:00
        Serial1/0, Forward/Sparse-Dense, 02:46:21/00:00:00

    (150.1.5.5, 224.0.1.39), 02:46:20/00:02:46, flags: LT
      Incoming interface: Loopback0, RPF nbr 0.0.0.0
      Outgoing interface list:
        Serial1/1, Forward/Sparse, 02:46:14/00:00:00
        Ethernet0/0, Forward/Sparse, 02:46:20/00:00:00
        Serial1/0, Prune/Sparse-Dense, 00:01:21/00:02:09  <--- the outgoing interface towards R3 (S1/0) has been pruned

    Cheers

    Amit

     

  • Your 224.0.1.39 is still missing an RP. Have you configured the static RP for 224.0.1.39 and 224.0.1.40 on R5 as well?

    You should also change the configuration on R5's multipoint interface to sparse-mode (instead of sparse-dense-mode), because NBMA-mode PIM only works on sparse-mode interfaces.

    Actually it doesn't matter that 224.0.1.39 is pruned in the OIL. It is used only to RECEIVE RP announcements from RP candidates; 224.0.1.40 is used to send out discovery messages. Make sure that 224.0.1.40 is not pruned on any outgoing PIM-enabled interface (it shouldn't be). Both groups operate in dense mode under normal conditions, which is why you need "ip pim autorp listener" with NBMA interfaces. You also need that command if you run your network in pure sparse mode.
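
    Putting those points together, the R5-side changes being suggested would look like this (a sketch; the access-list number is a hypothetical choice):

    ! NBMA-mode PIM requires sparse-mode, not sparse-dense
    interface Serial1/0
     ip pim sparse-mode
     ip pim nbma-mode
    !
    ! static RP mapping for the Auto-RP groups on R5 itself
    access-list 1 permit 224.0.1.39
    access-list 1 permit 224.0.1.40
    ip pim rp-address 150.1.5.5 1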

     

  • Jent, I did everything you mentioned, but I am still facing the same problem. Please check my config on R5 and R3.

    R5 Config :-

    Multipoint network

    interface Serial1/0.1 multipoint
     ip address 155.1.0.5 255.255.255.0
     ip pim sparse-mode
     ip ospf network point-to-multipoint
     frame-relay map ip 155.1.0.3 503 broadcast
     no frame-relay inverse-arp

    Rack1R5#sh ip os
    Rack1R5#sh ip ospf int s1/0.1
    Serial1/0.1 is up, line protocol is up
      Internet Address 155.1.0.5/24, Area 0
      Process ID 1, Router ID 150.1.5.5, Network Type POINT_TO_MULTIPOINT, Cost: 64
      Transmit Delay is 1 sec, State POINT_TO_MULTIPOINT,
      Timer intervals configured, Hello 30, Dead 120, Wait 120, Retransmit 5
        oob-resync timeout 120
        Hello due in 00:00:24
      Supports Link-local Signaling (LLS)
      Index 3/3, flood queue length 0
      Next 0x0(0)/0x0(0)
      Last flood scan length is 1, maximum is 1
      Last flood scan time is 4 msec, maximum is 4 msec
      Neighbor Count is 1, Adjacent neighbor count is 1
        Adjacent with neighbor 150.1.3.3
      Suppress hello for 0 neighbor(s)

    Rack1R5#show ip ospf ne

    Neighbor ID     Pri   State           Dead Time   Address         Interface
    150.1.3.3         0   FULL/  -        00:01:48    155.1.0.3       Serial1/0.1

    Rack1R5#show access-lists
    Standard IP access list 1
        10 permit 224.0.1.39
        20 permit 224.0.1.40
        30 permit 226.1.1.1 (24 matches)
    Rack1R5#show ip pim rp mapping
    PIM Group-to-RP Mappings
    This system is an RP (Auto-RP)
    This system is an RP-mapping agent (Loopback0)

    Group(s) 224.0.0.0/4
      RP 150.1.5.5 (?), v2v1
        Info source: 150.1.5.5 (?), elected via Auto-RP
             Uptime: 04:16:27, expires: 00:02:29
    Acl: 1, Static-Override
        RP: 150.1.5.5 (?)

    Rack1R5#show ip pim rp           <----------------Note
    Group: 232.1.1.1, RP: 150.1.5.5, v2, v1, next RP-reachable in 00:00:42
    Group: 224.1.1.1, RP: 150.1.5.5, v2, v1, next RP-reachable in 00:00:42
    Group: 226.1.1.1, RP: 150.1.5.5, next RP-reachable in 00:00:44
    Group: 224.2.2.2, RP: 150.1.5.5, v2, v1, next RP-reachable in 00:00:42
    Group: 224.3.3.3, RP: 150.1.5.5, v2, v1, next RP-reachable in 00:00:42
    Group: 224.4.4.4, RP: 150.1.5.5, v2, v1, next RP-reachable in 00:00:42
    Group: 224.5.5.5, RP: 150.1.5.5, v2, v1, next RP-reachable in 00:00:42
    Group: 224.6.6.6, RP: 150.1.5.5, v2, v1, next RP-reachable in 00:00:42
    Group: 238.8.8.8, RP: 150.1.5.5, v2, v1, next RP-reachable in 00:00:42
    Group: 239.9.9.9, RP: 150.1.5.5, v2, v1, next RP-reachable in 00:00:42
    Group: 224.9.9.9, RP: 150.1.5.5, v2, v1, next RP-reachable in 00:00:42
    Group: 224.10.10.10, RP: 150.1.5.5, v2, v1, next RP-reachable in 00:00:42

    On R3 Config

    interface Serial1/0.1 multipoint
     ip address 155.1.0.3 255.255.255.0
     ip pim sparse-mode
     ip ospf network point-to-multipoint
     frame-relay map ip 155.1.0.5 305 broadcast
     no frame-relay inverse-arp
    end

     Rack1R3#show access-lis
    Rack1R3#show access-lists
    Standard IP access list 1
        10 permit 224.0.1.39
        20 permit 224.0.1.40
    Rack1R3#
    Rack1R3#show ip pim rp ma
    PIM Group-to-RP Mappings

    Group(s) 224.0.0.0/4
      RP 150.1.5.5 (?), v2v1
        Info source: 150.1.5.5 (?), elected via Auto-RP
             Uptime: 00:13:11, expires: 00:02:41
    Acl: 1, Static-Override
        RP: 150.1.5.5 (?)
    Rack1R3#

    The "ip pim autorp listener" command is enabled on all routers.

    Rack1R3#show ip mroute
    IP Multicast Routing Table
    Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
           L - Local, P - Pruned, R - RP-bit set, F - Register flag,
           T - SPT-bit set, J - Join SPT, M - MSDP created entry,
           X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
           U - URD, I - Received Source Specific Host Report,
           Z - Multicast Tunnel, z - MDT-data group sender,
           Y - Joined MDT-data group, y - Sending to MDT-data group
    Outgoing interface flags: H - Hardware switched, A - Assert winner
     Timers: Uptime/Expires
     Interface state: Interface, Next-Hop or VCD, State/Mode

    (*, 224.0.1.39), 00:21:45/00:02:21, RP 0.0.0.0, flags: DC
      Incoming interface: Null, RPF nbr 0.0.0.0
      Outgoing interface list:
        Serial1/0.1, Forward/Sparse, 00:21:45/00:00:00

    (*, 224.0.1.40), 00:21:52/stopped, RP 0.0.0.0, flags: DCL
      Incoming interface: Null, RPF nbr 0.0.0.0
      Outgoing interface list:
        Serial1/0.1, Forward/Sparse, 00:21:52/00:00:00

    (150.1.5.5, 224.0.1.40), 00:15:13/00:02:41, flags: PLT
      Incoming interface: Serial1/0.1, RPF nbr 155.1.0.5
      Outgoing interface list: Null

    Rack1R3#show ip pim rp

    Rack1R3#show ip pim rp

    Rack1R3#show ip pim rp

    Rack1R3#

     

     

     

  • have you removed the broadcast keyword from the frame-relay map ?

  • interface Serial1/0.1 multipoint

    When did you change all the configuration from the physical interface to a subinterface on R5? Now your R3 subinterface is also multipoint, while the link is not! R3 only has a connection to R5, right? A proper config would be something like this:

     

    R5:

    interface Serial1/0
     encapsulation frame-relay
     ip address 155.1.0.5 255.255.255.0
     ip pim sparse-mode
     ip pim nbma-mode
     ip ospf network point-to-multipoint
     frame-relay map ip 155.1.0.3 503 broadcast
     frame-relay map ip 155.1.0.2 502 broadcast
     no frame-relay inverse-arp

    R2/R3:

    interface Serial1/0
     encapsulation frame-relay
    !
    interface Serial1/0.1 point-to-point
     ip address 155.1.0.3 255.255.255.0
     ip pim sparse-mode
     ip ospf network point-to-multipoint
     frame-relay interface-dlci 305

    And then the static RP mapping as you have done correctly above. If it still doesn't work - I don't get it.

     

     

  • Hi, sorry for the late response; I will be more detailed. The problem is with the command:

    "frame-relay map ip x.x.x.x <dlci> broadcast" is supposed to be "frame-relay map ip x.x.x.x <dlci>".

    The broadcast keyword is the reason you get "encapsulation failed" for multicast packets.

    Of course, when you remove the broadcast keyword, you will have to make OSPF run over unicast, meaning point-to-multipoint non-broadcast.

    Good luck!

  • "frame-relay map ip x.x.x.x broadcast" is supposed to be "frame-relay map ip x.x.x.x"

    I'll have to disagree on that. I have created a similar setup and it works just fine with the broadcast option enabled. Afaik, that option only means that all broadcast traffic is encapsulated and unicast to the IP address defined in the frame-relay map statement. It shouldn't have any effect on multicast.

  • Jent, you are correct: the broadcast keyword is irrelevant. I brought up the same configuration, and I never saw the encapsulation failure on the RP router, BUT I did see the problem on R3.

    When "ip pim autorp listener" is enabled, it overrides the static RP configuration (I also tried with the override keyword). When I remove it with

    no ip pim autorp listener

    the RP is statically mapped again.

     

    Dovev.
