
5K vPC peer-link
Hi guys,
Newbie here. I'm not sure which Nexus video I watched where Brian mentions the point of not running data traffic over the vPC peer-link. If I've taken this out of context, I apologize. Are there any URLs that reference this?
I've been tasked with configuring a "single-homed" vPC design at the server level, with two 5Ks and a 40-gig vPC peer-link between the two. The edge ASA firewall has been proposed as active/standby.
As I look at this design, my question is: won't the LACP active/active traffic on the B side need to cross the vPC peer-link in order to pass through the A side, where the active ASA is connected? Should this be an issue, or is it not a concern because of the 40-gig vPC peer-link?
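For reference, here's roughly what I've been asked to build on each 5K (the domain ID, interfaces, and VLANs below are just placeholders, not the actual config):

feature vpc
feature lacp
vpc domain 10
  peer-keepalive destination 10.255.255.2 source 10.255.255.1 vrf management
! 4 x 10G bundled for the 40-gig peer-link
interface Ethernet1/1-4
  channel-group 1 mode active
interface port-channel1
  switchport mode trunk
  vpc peer-link
! single-homed server port, lives on one 5K only (no "vpc" statement)
interface Ethernet1/10
  switchport mode access
  switchport access vlan 100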
I'm aware of the new "eVPC" that's available, but that design concept has been shot down by management.
Thoughts, suggestions, and comments are welcome. Thank you in advance.
Comments
The reason you usually don't want traffic over the peer-link is that the aggregate bandwidth to the servers may be significantly higher than the peer-link.
If the ASA is connected to only one Nexus, the traffic will use the peer-link, but if it's connected to both Nexus switches using a vPC, the traffic will be forwarded directly...
You have 40Gb as the vPC peer-link; it really depends on the environment whether it becomes the bottleneck or not... but using the peer-link for data traffic probably won't scale that well.
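For example, if each ASA is dual-homed with a port-channel split across both 5Ks, you would terminate it as a regular vPC member port-channel on the Nexus side. Something like this, where the port-channel/vPC numbers and VLAN are only examples:

interface Ethernet1/20
  description link to ASA-A
  channel-group 20 mode active
interface port-channel20
  switchport mode access
  switchport access vlan 100
  vpc 20

That way the B-side switch can deliver traffic on its local vPC member port instead of sending it over the peer-link.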
I agree that this is not very well explained. I browsed a few documents and there's nothing clear about what happens when traffic is forwarded over the vPC peer-link. We understand that the main reason is that the CFSoE traffic cannot be dropped under any circumstances, but more information should be available about that. I have a scenario with a 20G peer-link in a hybrid setup with vPC and non-vPC VLANs over the peer-link. The traffic over the peer-link is above 1G and nothing strange has happened. Now, if the traffic goes to the limit, problems will occur for sure. There isn't much information about CFS. CFS messages are encapsulated in standard Ethernet frames (with CoS 6); that's all I found.
Antonio Soares, CCIE #18473 (RS/SP/DC)
[email protected]
http://www.ccie18473.net
Thank you to those who have responded. Much appreciated.
Management is stuck on the single-homed design, which to me doesn't make sense.
I've made several attempts at finding docs backing up Brian's statements, but I haven't found any as of yet.
cordially,
~ej
Richard Buckminster Fuller: "There is no such thing as a failed experiment, only experiments with unexpected outcomes."
As SChan noted, I think what Brian referred to in the Nexus video was the potential bottleneck of the peer link as the biggest gotcha. There is a blog post by Brad Hedlund that expands on the adage "No L3 routing over the Peer Link" that may be of interest if you have not already come across it in your travels. He has an example or two with firewalls and mentions that as long as there isn't a routing adjacency between the two, the design should work. Not certain if this, or what he describes in his post, aligns with the path management has you headed down or not.
I think it's more of a design objective: don't build a design where endless amounts of data have to cross the peer link. In other words, don't create a peer link, then put 44 single-homed servers on switch 1 and 44 single-homed servers on switch 2, with 2 ports across and 2 ports up per side.
It's slightly less of a risk on the N5K than the N7K, as the N5K has the out-of-band peer-keepalive link. If the peer link gets full with data traffic, the keepalive link can keep the vPC domain stable.
What do you mean by "the N5K has the OOB peer-keepalive link"? If I understood correctly, you are talking about doing the keepalive over the management interface. You can do the same on the 7K. But a basic rule is to not do the keepalive over the peer-link. In that case, and imagine you only have one M1-10GE module in each 7K, the worst-case scenario can happen where the peer-link and the peer-keepalive link go down at the same time. The dual-brain situation happens. The original question was more related to the drops that can happen to the CFS traffic (peer-link) and not to the UDP traffic used by the peer-keepalives.
Antonio Soares, CCIE #18473 (RS/SP/DC)
[email protected]
http://www.ccie18473.net
Hi Eric
The general rule for the vPC peer link carrying data-plane traffic is that when a downstream vPC member port goes down, the peer link is used to forward the data-plane traffic for that particular flow.
In your particular case, I think the traffic has to cross the vPC peer link to reach the active ASA, the way it is designed. If you have your setup ready, you can resolve the MAC address of the active ASA firewall interface from the server and the Nexus and track the flow.
If it were me, I would have done exactly that and checked for myself whether it's crossing the vPC peer link.
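For example, something like this on the B-side 5K (the MAC address below is just a placeholder for the active ASA interface MAC):

N5K-B# show mac address-table address 0011.2233.4455
N5K-B# show vpc

If the MAC is learned on Po1 (the peer-link), that flow is crossing the peer-link.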
Thanks
Kiranb
You actually *can’t* do the keepalive over the peer link, because the peer link doesn’t come up until the keepalive is up. You basically run into a chicken/egg problem where the keepalive can’t come up because the peer link isn’t up, but the peer link can’t come up because the keepalive isn’t up. They need to take two separate physical paths, like MGMT0 for keepalive and then inband links for peer link.
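In config terms it ends up looking something like this (the addresses are just placeholders):

vpc domain 10
  ! keepalive over the out-of-band mgmt0 path
  peer-keepalive destination 192.168.0.2 source 192.168.0.1 vrf management
interface port-channel1
  switchport mode trunk
  ! peer link over the inband links
  vpc peer-link

You can check which path each one is using with "show vpc peer-keepalive" and "show vpc brief".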
Brian McGahan, CCIE #8593 (R&S/SP/Security), CCDE #2013::13
[email protected]
Internetwork Expert, Inc.
http://www.INE.com
Hi Eric
That's a screenshot from Cisco's presentation on N7K vPC best practices.
HTH
Again, I appreciate all who have taken the time to weigh in on the subject matter.
After further review, my coworker and I have decided on creating a vPC to the two upstream ASAs in our design, which in our view would eliminate traffic crossing the peer link in the event of a failure.
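On the ASA side we're looking at bundling the uplinks into an EtherChannel, roughly like this (interface names, addresses, and the nameif are placeholders, and I'm assuming our code is 8.4 or later, which supports EtherChannel):

interface GigabitEthernet0/0
 channel-group 1 mode active
 no shutdown
interface GigabitEthernet0/1
 channel-group 1 mode active
 no shutdown
interface Port-channel1
 nameif inside
 security-level 100
 ip address 10.10.10.1 255.255.255.0 standby 10.10.10.2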
Management hasn't submitted their final approval of our (their) final design, but I will keep you posted.
If anyone is of a different opinion, we would like to hear from you.
cordially,
~ej
Will the ASAs be in routed or transparent mode? Do you plan to use dynamic routing?
Antonio Soares, CCIE #18473 (RS/SP/DC)
[email protected]
http://www.ccie18473.net
Hi Eric
We have a 3750 dual-connected to both Nexus switches, and the FW is connected to the 3750.
HTH
Regards
Kiran
Update
Again, another pushback today.
Our original design called for the ASAs to be OSPF stubs. (Denied)
The mission:
Peer with our client using two 3560Xs running BGP, dual-homed for redundancy, using a private AS number as the demarc.
ASAs using OSPF as stubs (shot down).
OSPF area 0 for the 5Ks (a rough sketch of that piece is below).
We intended to use eVPC (rejected) as our topology between the two 5Ks, with the L3 module and the six 2248TPs.
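Just to make the OSPF piece concrete, area 0 on the 5K L3 modules would have looked roughly like this (the process tag, router-id, and SVI are made up):

feature ospf
feature interface-vlan
router ospf 1
  router-id 10.0.0.1
interface Vlan100
  ip router ospf 1 area 0.0.0.0
  no shutdown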
Although everyone has been very helpful responding, I still haven't found the smoking gun (doc) to refute the single-homed design.
There is no doubt this network will grow out of control, and we will need to revisit this design.
Oh well.
Thanks, all.
I'm probably not being helpful here, but why do you think enhanced vPC is better? I'd prefer a vPC + dual-attached servers design for simplicity. Read this blog, which may give you another perspective: http://rednectar.net/2012/08/30/why-i-wouldnt-bother-with-enhanced-vpc/
Thanks for sharing
It is possible to do it. First configure it like in the Nexus Workbook:
- Nexus 7K Virtual Port Channels (vPC) with SVI Keepalive
Then add VLAN 999 to the Port-channel 1 (vPC) peer-link. After this we can shut down Port-channel 2.
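The relevant pieces look roughly like this (the VLAN, VRF name, and addressing are just what I used in the lab):

feature interface-vlan
vlan 999
vrf context KEEPALIVE
interface Vlan999
  vrf member KEEPALIVE
  ip address 192.168.99.1/24
  no shutdown
vpc domain 1
  peer-keepalive destination 192.168.99.2 source 192.168.99.1 vrf KEEPALIVE
interface port-channel1
  switchport mode trunk
  switchport trunk allowed vlan 1,999
  vpc peer-link

The end result is this: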
N7K5# sh vpc
Legend:
(*) - local vPC is down, forwarding via vPC peer-link
vPC domain id : 1
Peer status : peer adjacency formed ok
vPC keep-alive status : peer is alive
Configuration consistency status : success
Per-vlan consistency status : success
Type-2 consistency status : success
vPC role : secondary
Number of vPCs configured : 0
Peer Gateway : Disabled
Dual-active excluded VLANs : -
Graceful Consistency Check : Enabled
Auto-recovery status : Disabled
vPC Peer-link status
---------------------------------------------------------------------
id Port Status Active vlans
-- ---- ------ --------------------------------------------------
1 Po1 up 1,999
N7K5#
N7K5# sh spanning-tree vlan 999
VLAN0999
Spanning tree enabled protocol rstp
Root ID Priority 33767
Address 0026.980c.2143
Cost 1
Port 4096 (port-channel1)
Hello Time 2 sec Max Age 20 sec Forward Delay 15 sec
Bridge ID Priority 33767 (priority 32768 sys-id-ext 999)
Address 68bd.abd7.6043
Hello Time 2 sec Max Age 20 sec Forward Delay 15 sec
Interface Role Sts Cost Prio.Nbr Type
---------------- ---- --- --------- -------- --------------------------------
Po1 Root FWD 1 128.4096 (vPC peer-link) Network P2p
N7K5#
Not recommended at all in a production environment, but it makes a good lab exercise.
Antonio Soares, CCIE #18473 (RS/SP/DC)
[email protected]
http://www.ccie18473.net