Nexus BGP routing of host prefixes for Docker infrastructure
We run a data center built on an OpenStack cloud platform. The team responsible for the cloud service needs more network performance, and one option would be to deploy Project Calico (https://docs.projectcalico.org/v2.6/reference/private-cloud/l3-interconnect-fabric). Calico requires BGP peering with the network equipment: each Nexus device would carry roughly 200 BGP sessions and learn about 7,000 host (/32) routes via BGP. We have a wide range of device models, from the Nexus 5548 up to the Nexus 9000, and each model has its own hardware limits.
For example, the Nexus 5548 supports only 7,200 dynamic routes, and there is a limit on the number of BGP sessions as well. In some cases it would be possible to work around this by inserting a software router (a Quagga-type box) between the hosts and the switch, but that doesn't feel like sound network design.
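To illustrate the workaround idea: the software box could act as a route reflector for the Calico hosts and hand the switch a single aggregate instead of thousands of /32s, keeping the Nexus well under its route limit. Below is a minimal sketch in FRR/Quagga `bgpd` syntax; the ASN, prefixes, and addresses are hypothetical placeholders, not values from our setup.

```
! Hypothetical FRR bgpd.conf sketch -- ASN 64512 and 10.0.0.0/20 are examples only
router bgp 64512
 bgp router-id 10.0.0.1
 ! Accept sessions from the ~200 Calico hosts without 200 neighbor stanzas
 bgp listen range 10.0.0.0/20 peer-group CALICO-HOSTS
 neighbor CALICO-HOSTS peer-group
 neighbor CALICO-HOSTS remote-as 64512
 neighbor CALICO-HOSTS route-reflector-client
 ! One session toward the Nexus
 neighbor 10.255.0.1 remote-as 64512
 address-family ipv4 unicast
  ! Advertise only the covering aggregate upstream, suppressing the /32s
  aggregate-address 10.0.0.0/20 summary-only
 exit-address-family
```

The `summary-only` keyword is what shields the switch: the 7k host routes stay inside the software router, and the Nexus only ever learns the aggregate.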
Also from a network design standpoint, it seems odd to run BGP with this many routes on pure DC equipment (NX-OS devices). Do you have any observations about this kind of setup? It's strange that there is so little information out there about interconnecting pure networking devices with virtual containers.
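For reference, the Calico side of the peering is configured per host or globally via `calicoctl`. Something roughly like the following (ASN and peer IP are placeholder values) would point every host at a ToR switch in Calico v2.x:

```
apiVersion: v1
kind: bgpPeer
metadata:
  scope: global
  peerIP: 192.0.2.1
spec:
  asNumber: 64512
```

With `scope: global`, every Calico node opens a session to that peer IP, which is exactly where the ~200-sessions-per-switch figure comes from; per-node peers (`scope: node`) can narrow that if the design changes.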
P.S. Also, Cisco's verified scalability pages list two numbers: a verified topology limit and a maximum limit. How can you tell whether the maximum limits will actually be achievable in your setup?