2 Peer-KeepAlive statements

Hi,



Has anyone tried and tested two peer-keepalive statements under the vpc domain command on 5Ks? Will it work?



vpc domain 10

  peer-keepalive destination x.x.x.x  (MGMT interface)

  peer-keepalive destination x.x.x.x vrf xx source x.x.x.x



The reason being: if the management interface link goes down, there won't be any redundancy in the first place.

Comments

  • I am not sure that would be required. Based on my understanding, there is a heartbeat that crosses the peer-link; if the peer-link goes down, that is when the switches reference the keepalive status via the dedicated keepalive link/config.

    If my understanding is correct, the keepalive link is “the backup”, so unless you are looking for redundancy for the backup, it should not be needed.

  • You could probably do it. I would recommend using an L3 port-channel placed in a separate VRF and running your vPC peer-keepalive over that, rather than using the management interface. This way you have multiple connections providing physical cabling redundancy for the keepalive, and using a separate VRF avoids issues where the peer-keepalive traffic would traverse the peer-link.
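    A rough sketch of what that suggestion could look like. All interface numbers, the VRF name, and the addressing are invented for illustration, and on a plain N5K this approach also needs the L3 daughter card:

    ```
    ! Hypothetical example: dedicated VRF and L3 port-channel for the keepalive
    vrf context KEEPALIVE

    interface port-channel20
      no switchport
      vrf member KEEPALIVE
      ip address 10.99.99.1/30

    ! Two physical members give the keepalive cabling redundancy
    interface Ethernet1/31
      channel-group 20 mode active
    interface Ethernet1/32
      channel-group 20 mode active

    vpc domain 10
      peer-keepalive destination 10.99.99.2 source 10.99.99.1 vrf KEEPALIVE
    ```

    Keeping the keepalive in its own VRF means there is no route for it via the peer-link, which is the point of the separate-VRF recommendation above.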

  • I haven't tried it with different VRFs, but when I've done it in the same VRF, the second statement seems to overwrite the first.


    As mentioned, if the peer keepalive goes down, vPC will use the peer-link to still confirm the peer is alive, so it's not really needed.


  • peetypeety ✭✭✭

    On the 5k, the fundamental need is for the peers to be reachable over the keepalive path at VPC initialization time. Once they're up, the peers communicate over the peer link. If the peer link goes down and the keepalive is up, they can negotiate which side should deactivate its VPCed ports (usually it'd be the operational secondary, unless the operational primary already had those ports in link-down). If the peer link goes down and the keepalive is already down, the operational secondary will deactivate all VPCed ports to prevent a black hole, and you'd need to restore both links to get VPC functional again.
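    The decision table described in the post above can be sketched as a toy function. This is purely illustrative (not Cisco code), it models only this post's description, and the function and state names are invented:

    ```python
    def vpc_port_state(role: str, peer_link_up: bool, keepalive_up: bool) -> str:
        """What happens to one switch's vPC member ports, per the scenario above.

        role: operational "primary" or "secondary" for that switch.
        """
        if peer_link_up:
            # Normal operation: peers coordinate over the peer link.
            return "forwarding"
        if role == "primary":
            # Primary keeps its vPC ports up in either failure case.
            return "forwarding"
        if keepalive_up:
            # Peer link down, keepalive up: peers negotiate, and the
            # operational secondary normally suspends its vPC ports.
            return "suspended (negotiated)"
        # Peer link down and keepalive already down: per the post above,
        # the secondary suspends all vPC ports to prevent a black hole.
        return "suspended (unilateral)"
    ```

    Note the post below disagrees on the dual-failure row (it predicts split brain, with both nodes claiming primary), so treat this table as one poster's description rather than definitive NX-OS behavior.
    
    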

  • If the peer keepalive link is down and then the peer-link goes down (assuming both peers are healthy and it's only a connectivity problem between the two), you will have split brain, and both vPC nodes will claim the primary role.

    This will more than likely cause network disruption as both nodes could appear with the same bridge ID STP-wise and flood the same MAC addresses in the network.

    My understanding is that the VPC peer-keepalive link is used to prevent split brain scenarios like this:

    http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/operations/n5k_vpc_ops.html#wp425312

    HTH


  • Thanks guys. My question is: can I have two peer-keepalive statements under the vpc domain?


  • One peer-keepalive statement with the MGMT interface as source and destination,

    and another using a VRF. Can I have both peer-keepalive statements under the vpc domain?

    Right now we have the MGMT interface as the keepalive. Considering the 5K doesn't have two MGMT interfaces for redundancy, I plan to use a VRF along with the MGMT interface as keepalive. Can I have both statements

    under the vpc domain? Has anyone tried this? Please let me know.

  • You can't have two peer-keepalive statements for your vpc domain:

    N5K-p5-1(config-vpc-domain)# peer-keepalive destination 10.0.8.210
    Note:
     --------:: Management VRF will be used as the default VRF ::--------
    N5K-p5-1(config-vpc-domain)# sh run vpc
    [...]
    feature vpc

    vpc domain 55
      peer-keepalive destination 10.0.8.210

    N5K-p5-1(config-vpc-domain)# peer-keepalive destination 55.55.55.2 vrf KEEPA source 55.55.55.1
    N5K-p5-1(config-vpc-domain)# 2015 Mar  9 06:54:37 N5K-p5-1 %VPC-2-PEER_KEEP_ALIVE_SEND_FAIL: In domain 55, VPC peer keep-alive send has failed

    N5K-p5-1(config-vpc-domain)# sh run vpc
    [...]
    feature vpc

    vpc domain 55
      peer-keepalive destination 55.55.55.2 source 55.55.55.1 vrf KEEPA

    This seems to be platform-independent, so it applies to the N7K as well.

    If you really want to add redundancy to your peer-keepalive connection, you can create a dedicated VLAN with SVIs (using the "management" option) and a dedicated VRF (or an L3 port-channel if you have an L3 daughter card).

    In case you have a plain N5K (no L3 card), you will need to trunk this VLAN over a dedicated port-channel link (therefore doubling your back-to-back links) and manually prune it on all other L2 links (including your peer-link) to avoid STP TCNs disrupting your vPC domain.
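    A hedged sketch of that SVI variant for a plain N5K. The VLAN number, VRF name, port-channels, and addresses are all made up; the "management" knob under the SVI is the option mentioned above:

    ```
    ! Hypothetical example: keepalive over an SVI in its own VRF, no L3 card
    feature interface-vlan

    vlan 900

    vrf context KEEPALIVE

    interface Vlan900
      vrf member KEEPALIVE
      management
      ip address 10.99.99.1/30
      no shutdown

    ! Dedicated back-to-back trunk carrying only the keepalive VLAN
    interface port-channel30
      switchport mode trunk
      switchport trunk allowed vlan 900

    ! Prune the keepalive VLAN from the peer-link (and all other trunks)
    interface port-channel1
      switchport trunk allowed vlan except 900

    vpc domain 55
      peer-keepalive destination 10.99.99.2 source 10.99.99.1 vrf KEEPALIVE
    ```

    The pruning is what keeps the keepalive path independent of the peer-link and avoids the STP TCN concern described above.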

    Anyway, as someone stated a few posts ago, I think it's way too much effort to add redundancy for the benefit you get, since the peer-keepalive is only used to avoid split brain if the vPC peer-link goes down.

    If you lose the management interface, you won't be able to SSH into your device out of band, and an SNMP event should be triggered on your management systems so someone goes and checks immediately.

    HTH


  • peetypeety ✭✭✭

    Anyway, as someone stated a few posts ago, I think it's way too much effort to add redundancy for the benefit you get, since the peer-keepalive is only used to avoid split brain if the vPC peer-link goes down.

    This. OP, you're overthinking this. The peer-keepalive prevents "brains on drugs" if the peer link goes down. Guess what? If the peer link goes down, even if your brains are still coordinated, you're going to have a bad day: half of your vPC ports are going to go dark, because the vPC pair knows it cannot safely support the possible traffic patterns.

    So...stack the deck so your peer link doesn't go down. Run several links. Mix copper and fiber so one "bump" isn't as likely to disturb all paths. Add the expansion card(s) with XE ports, and put some of your links on the cards.

    If you have a 5k pair where you're that paranoid about the peer-keepalive going down, a dead peer-keepalive is NOT your problem. You've got the wrong box for the job: Upgrade to a 7K and stripe the peer link across disparate cards. Or you've got a flawed design that puts too much focus on that particular switch pair: explore FabricPath, or go back to STP and build a pair of pairs (two independent 5k pairs, each one VPCed within its pairing but each switch pair acting as a unique node within the overall STP design).
