PIM RPF Vector and Inter-AS Option B

I'm configuring mVPN inter-AS Option B (VPNv4 + MDT). I see that the proxy vector works in the global table without any explicit command, but to make it work under the VRF I had to use the explicit command under the VRF (ip multicast vrf A rpf proxy rd vector). With that in place, everything seems to work correctly. I attached the topology diagram.

 

I'd appreciate any comments on this behavior...

Here is some show output from the PE:

 

R1#sh run | s mult

ip multicast-routing

ip multicast-routing vrf A

ip multicast vrf A rpf proxy rd vector

 

R1#sh ip pim neighbor

12.12.12.2        Serial1/0                00:07:43/00:01:23 v2    1 / S P G  <== !! Proxy capability negotiated with the neighbor

 

R1#show ip mroute

IP Multicast Routing Table

Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,

       L - Local, P - Pruned, R - RP-bit set, F - Register flag,

       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,

       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,

       U - URD, I - Received Source Specific Host Report,

       Z - Multicast Tunnel, z - MDT-data group sender,

       Y - Joined MDT-data group, y - Sending to MDT-data group,

       V - RD & Vector, v - Vector

Outgoing interface flags: H - Hardware switched, A - Assert winner

 Timers: Uptime/Expires

 Interface state: Interface, Next-Hop or VCD, State/Mode

 

(1.1.1.1, 232.0.0.1), 00:06:39/00:02:44, flags: sT

  Incoming interface: Loopback0, RPF nbr 0.0.0.0

  Outgoing interface list:

    Serial1/0, Forward/Sparse, 00:06:39/00:02:44

 

(7.7.7.7, 232.0.0.1), 01:07:43/stopped, flags: sTIZV

  Incoming interface: Serial1/0, RPF nbr 12.12.12.2, vector 3.3.3.3

  Outgoing interface list:

    MVRF A, Forward/Sparse, 01:07:43/00:01:49

 

(*, 224.0.1.40), 01:07:43/00:02:21, RP 0.0.0.0, flags: DCL

  Incoming interface: Null, RPF nbr 0.0.0.0

  Outgoing interface list:

    Loopback0, Forward/Sparse, 01:07:43/00:02:21


 

R1#ping vrf A 239.0.0.1

Type escape sequence to abort.

Sending 1, 100-byte ICMP Echos to 239.0.0.1, timeout is 2 seconds:

Reply to request 0 from 77.77.77.77, 668 ms

R1#

 

 

[attached topology diagram]

Comments

  • Hi,

     

    The rpf proxy command under the VRF is for the PE; there is no need for it in the global table. The P routers, of course, are unaware of the VRFs, but they must still "understand" the vector carried in the PIM messages. The PE must have the command under the VRF because it is responsible for building the MDT for that VRF.

    For more information you can refer to the INE blog: http://blog.ine.com/2010/01/17/inter-as-mvpns-mdt-safi-bgp-connector-rpf-proxy-vector/

    Hope it's clear!
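    To put that as a minimal config sketch (using only the commands already shown in this thread; "A" is the VRF name from the original post):

        ! PE routers only:
        ip multicast-routing
        ip multicast-routing vrf A
        ip multicast vrf A rpf proxy rd vector
        !
        ! P routers: no vector-related configuration; by default they
        ! accept and process the RPF Vector carried in PIM joins.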

  • Hello,

    this is from Cisco TAC:

    ‘ip multicast vrf A rpf proxy rd vector’ – This command is only required on the PE boxes. When it is configured, the router advertises the proxy capability in its PIM hellos and injects the RPF Vector information into the PIM joins it sends.

     

    By default, a router accepts a received RPF Vector: when it receives a PIM join that contains an RPF Vector, it stores the vector so that it can generate its own PIM join toward the exit ASBR router. P routers thus learn the RPF Vector from the PIM joins themselves, and in this way the vector is propagated to all P routers in the core.

    Hence nothing is needed on the P routers. If a router should not accept PIM proxy RPF info, the ‘ip multicast rpf proxy disable’ command can be configured.

     

    Conclusion:

    As far as the proxy vector is concerned, all that is needed is ‘ip multicast vrf A rpf proxy rd vector’ on the PE boxes – nothing on the P boxes and nothing else in the global configuration.
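    The two knobs mentioned above can be summarized as a sketch (both commands are the ones quoted in the TAC note):

        ! On each PE, per VRF:
        ip multicast vrf A rpf proxy rd vector
        !
        ! On any router that should not accept received RPF Vector info:
        ip multicast rpf proxy disable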

  • Maybe there was an IOS behavior change, or a misunderstanding of the Cisco documentation! In your setup it works without the command; can you do a ‘show run all’ to see whether it is part of the default configuration?
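    One way to run that check (standard IOS show commands; output will vary with your setup):

        R1# show running-config all | include rpf proxy
        R1# show ip pim neighbor
        R1# show ip mroute | include vector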
