CCIE RSv5 Equipment Build

Edit: This thread is getting too long, and it is now closed.  Please post in a more detailed thread below instead:

Use this thread for Q&A on how to build INE's new CCIE RSv5 topology, either on physical hardware or in a virtualized environment.  This thread will later be compiled into the new "How To Build A CCIE Rack" page.


Comments

  • A full build will need to consist of the following:

    QTY 20 (est.) x IOS routers running IOS Version 15.4T with IP Base, Data, & Security Feature Sets

    OR

    QTY 20 (est.) x IOS XE routers running IOS XE Version 3.11S (15.4S) with Premium Feature Set 

    AND

    QTY 4 x Catalyst IOS switches running Catalyst IOS Version 15.0SE with IP Services Feature Set 

     

    A final count of the number of routers will be available shortly.

  • I know you guys have a CSR1000V video coming up, but here are some questions I have about building a v5 lab:

     

    1. Can this be done with one large server running ESXi and 20+ CSR instances?

    I'm thinking the minimum spec is a dual quad-core Nehalem Xeon setup (sixteen logical cores with Hyper-Threading) with 96GB of RAM or more. You can buy these used servers off of eBay for a reasonable $1500-2500. I know renting is cheaper, but having your own lab has its benefits as well.

    2. If #1 is a yes, then how does the server connect to the four switches?

    3. How many NIC ports are needed for such a topology?

    Most servers have four copper gigabit ports, and you can buy additional quad-port cards if needed. However, I assume the CSR/VMware system allows the virtual routers to communicate with each other within the box.

     

     

  • 1 - It depends on the server hardware, but yes, it's possible to run multiple CSRs on one ESXi server.  I have 6 CSR1000vs and 2 XRvs running at the moment, using about 21GB of memory.  I'm using the default settings; you can lower the amount of memory each instance requires, but the default is 4GB.  If I recall correctly, you can lower it to 3GB.

    2 - A breakout switch works for me.  I have mine going through a 3560 out to my external 3550s (which I need to upgrade) using q-in-q tunnels; a sample tunnel config is sketched at the end of this post.  My 3560 is only a 24-port, but I would probably need a 48-port switch to build the 20+ node topology (I'm interested to see what INE has designed).

    3 - I have 1 NIC trunked to my breakout switch and 1 going to my home network.

    In order to establish console connections to the virtual routers, you need an Enterprise license on the ESXi server.  You get a 60-day eval if you don't install the free licenses that are available.  I just went through rebuilding my ESXi server because I went past the 60-day mark.  Rebuilding took all of about 30-45 minutes to reinstall ESXi, run my script to rebuild the networking, and add my VMs back in.  I have ESXi running on an external USB stick, and my storage is local to the server.
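
    For reference, here's a minimal sketch of what the q-in-q breakout config on the 3560 might look like (interface and VLAN numbers are hypothetical; one tunnel port per external switch link, with the outer VLANs carried on the trunk toward the ESXi host):

        ! Global: tag the native VLAN and raise the MTU for the extra 802.1Q header
        ! (the system mtu change takes effect after a reload)
        vlan dot1q tag native
        system mtu 1504
        !
        ! Trunk toward the ESXi vSwitch uplink, carrying the outer VLANs
        interface GigabitEthernet0/1
         description Trunk to ESXi host
         switchport trunk encapsulation dot1q
         switchport mode trunk
        !
        ! One tunnel port per link to an external 3550; outer VLAN 101 shown
        interface FastEthernet0/1
         description Link to SW1 (3550)
         switchport access vlan 101
         switchport mode dot1q-tunnel
         l2protocol-tunnel cdp
         l2protocol-tunnel stp
         l2protocol-tunnel vtp
         no cdp enable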

  • So here are the details on the CSR1000v memory requirements.  Looks like you can lower it to 2.5GB with some limitations, and those limitations wouldn't really matter for a lab environment.

    The following limitations have been observed on the Cisco CSR 1000V with the 1 vCPU configuration with 2.5 GB of RAM allocation on VMware ESXi:

    - If the memory Hot-Add option is enabled, and the Cisco CSR 1000V is powered on with 2.5GB initial memory, then the RAM allocation can only increase to a maximum of 3GB. The system does not allow upgrading to more than 3GB of RAM allocation. The Virtual Machine Properties window shows "Maximum Hot-Add Memory for this Power is 3 GB".

    - If the Cisco CSR 1000V is powered on with 3GB initial RAM allocation, then the Hot-Add memory option doesn't work, and the option to select memory remains greyed out with the same message on the Properties window, "Maximum Hot-Add Memory for this Power is 3 GB".

    - If the Cisco CSR 1000V is powered up with 4GB initial RAM allocation, then the Hot-Add option works and you are able to add up to 64GB of memory.

    So, with that, 20 instances would need about 50GB of memory.
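
    For what it's worth, if you'd rather set this in the .vmx file directly instead of through the vSphere client, the relevant entries look like this (a sketch; edit with the VM powered off).  memSize is in MB, so 2560 = 2.5GB, and mem.hotadd controls the Hot-Add behavior described above:

        memSize = "2560"
        mem.hotadd = "TRUE"
        numvcpus = "1"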

  • One more note - I just changed one of my CSR1000vs to allocate only 2GB and it started up just fine.  I'm wondering if that would cause any issues while labbing, though.  I'll play around with it and see, while leaving the others with 2.5GB allocated.

  • Yes you can run them all on one server.  Here's a screenshot of one of my dev boxes: http://i.imgur.com/hLQcMjV.png

    It's a dual 8-core Xeon E5 with 384GB of RAM, but with 20 x CSR1000v instances plus 2 x Win2k8 Server instances it's barely at 20% CPU across all cores and just under 64GB of RAM.

    For NICs you realistically only need 2 if you want to go to external switches: one for management of ESXi itself (the VMkernel) and one to plug into one of the switches.  You could go to a breakout switch, but it's not really needed.  (A sketch of the trunk port group setup is at the end of this post.)

    You'll see shortly that with our content, the switching portion is essentially isolated from the routing part.  When you get to full-scale labs and full-scale troubleshooting, the routers and switches will talk to each other, but for learning the technologies you could keep your routers separate from your switches and it wouldn't really matter.
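
    If you're working from the ESXi shell rather than the vSphere client, a trunk port group for that lab uplink looks roughly like this (a sketch assuming vmnic1 is the NIC cabled to the switch; VLAN 4095 on a standard vSwitch passes all VLAN tags through to the guests):

        esxcli network vswitch standard add --vswitch-name=vSwitch1
        esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1
        esxcli network vswitch standard portgroup add --portgroup-name=LAB-TRUNK --vswitch-name=vSwitch1
        esxcli network vswitch standard portgroup set --portgroup-name=LAB-TRUNK --vlan-id=4095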

  • In ESXi you can oversubscribe RAM, which will cause it to swap to disk.  If you're running off an SSD then it really doesn't matter.  I would bet that you could get away with about 16GB of physical RAM and still boot 20 instances; if the routers need more, ESXi will just swap to disk.

    The specs only really matter if you're actually forwarding production traffic through the CSR, which for lab purposes we're not.

    CPU is different though: if all cores of the physical host are at 100%, the CSR instances will start to crash.

  • OK, I was curious about the oversubscription of memory.  In that case I'm not going to mess with lowering the memory per instance, just to avoid any possible issues while labbing.

     

  • If you already have the RAM there's no reason not to assign it to them.  It's only an issue if you have a smaller server that you want to squeeze more instances out of.

  • Also, you don't need fast RAM & CPU, you just need bulk RAM & CPU.  This can greatly affect the price of a server, obviously.  E.g. you're better off with CPUs that have more, slower cores than CPUs with fewer, faster cores.

  • I notice that the new lab book still only requires 6 routers and 4 switches (http://www.amazon.com/Routing-Switching-Configuration-Practice-Practical-ebook/dp/B00IMG97L2/ref=sr_1_6?s=books&ie=UTF8&qid=1397856542&sr=1-6&keywords=Ccie+v5)

    Right now I'm busy going through WB2.  I've done homemade labs for DMVPN and IPsec.

    Will I be able to use the new v5 WB utilising my existing rack (6R+4SW), or are you now moving to virtual only (I can't see many people buying 20+ routers etc.)?  If you are, would it be OK to simply continue using WB2, taking care to skip sections that are no longer in the exam, and add in extra labs for what needs to be added?

    cheers

  • Also, you don't need fast RAM & CPU, you just need bulk RAM & CPU.  This can greatly affect the price of a server, obviously.  E.g. you're better off with CPUs that have more, slower cores than CPUs with fewer, faster cores.

    I made a test as well this morning, but on a different configuration: my server is running Windows 2008R2 and VMware, but in the end resource consumption is roughly equal to Brian's.  The pictures below were taken after completely initializing 20 CSR1000vs.

    I tend to agree with Brian: you need a lot of memory.  CPU is highly utilized at startup, but this is normal when you start 20 VMs at once.  Disk I/O will go through the roof and keep the CPU busy as well.  But after the machines boot up, everything is quiet.  I observed something else as well: a few of the VMs had a kernel panic, but they rebooted by themselves and in the end everything went just fine.

     

     

    Gabriel

    [screenshots: host resource usage after the 20 CSR1000vs booted]

  • The kernel panic was probably from booting them all at once and the disk I/O.  The fix I found is to stagger booting them - reboot only 5 at a time instead of all 20.  (A scripted sketch of this is below.)
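
    Something like this from the ESXi shell would do it (a sketch; it assumes the VMs have "CSR1000v" in their display names - adjust the name pattern and the batch timing to taste):

        # Power on all CSR VMs five at a time, pausing between batches
        # so the boot-time disk I/O doesn't stack up
        i=0
        for vmid in $(vim-cmd vmsvc/getallvms | awk '/CSR1000v/ {print $1}'); do
          vim-cmd vmsvc/power.on "$vmid"
          i=$((i + 1))
          [ $((i % 5)) -eq 0 ] && sleep 120
        done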

  • Will I be able to use the new v5 WB utilising my existing rack (6R+4SW), or are you now moving to virtual only (I can't see many people buying 20+ routers etc.)?  If you are, would it be OK to simply continue using WB2, taking care to skip sections that are no longer in the exam, and add in extra labs for what needs to be added?

    No, you won't be able to do the full-scale labs with 6+4.  For the full-scale labs and troubleshooting labs the topology will be large.  For the technology labs (Volume 1) there's a lot you can do with just a handful of routers; only larger topics like MPLS L3VPN need more.

  • Any more info on how and where to get such a powerful server?  I currently have an i7 + 16GB, but I'm willing to get a server like that in order to be able to emulate larger topologies.

  • I'm planning to buy a 3- to 5-year-old server off of eBay.  Search for an old Nehalem processor like the X5550.  An X5550 is quad-core with Hyper-Threading, so 8 logical cores.  Yeah, it's a few years old, but processor performance hasn't been improving as much as you might think.

    Anyway, you can buy an older system with two quad-core processors, giving 16 logical cores.  That should be enough for 20 CSR routers in a lab scenario.

    So go on eBay, search for "dell precision x5550 96", and find a dual-processor workstation with 96GB of RAM.

     

     

  • Any more info on how and where to get such a powerful server?  I currently have an i7 + 16GB, but I'm willing to get a server like that in order to be able to emulate larger topologies.

    I would start with checking to see how many CSR1000v instances you can boot with your current server.

  • The kernel panic was probably from booting them all at once and the disk I/O.  The fix I found is to stagger booting them - reboot only 5 at a time instead of all 20.

    Yes Brian, I guess that's the way to do it.  Even starting/rebooting 5 at a time, it won't take more than 15 minutes to have all of them up and ready.

    Another question arises now: how many physical NICs should a server have for these 20 routers, and how will they be connected to the switches?  Or is that still not decided?

    Gabriel

  • I would start with checking to see how many CSR1000v instances you can boot with your current server.

    Thanks Brian, which hardware are you currently using for that setup?

  • Just one to the switches.  There's no real need for multiple uplinks; it adds nothing, and it complicates STP between the vSwitch and any breakout switches.

    Brian McGahan, 4 x CCIE #8593 (R&S/SP/SC/DC), CCDE #2013::13
    bmcgahan@INE.com
     
    Internetwork Expert, Inc.
    http://www.INE.com


  • For which setup?

    Brian McGahan, 4 x CCIE #8593 (R&S/SP/SC/DC), CCDE #2013::13
    bmcgahan@INE.com
     
    Internetwork Expert, Inc.
    http://www.INE.com




  • Hi Brian,

    For the dual 8-core Xeon E5 with 384GB of RAM - is it UCS? IBM? Dell?

    Just curious.

     

  • Supermicro. You can see the part numbers in the screenshot. 

    Brian McGahan, 4 x CCIE #8593 (R&S/SP/SC/DC), CCDE #2013::13
    bmcgahan@INE.com
     
    Internetwork Expert, Inc.
    http://www.INE.com




  • Brian - this sounds ideal.  After all, a Supermicro with a lot of RAM etc. isn't cheap (especially when you already have a rack).

    Could you do a blog post about setting up the virtual lab with a physical switch, to be used with your new WB?  I'm not an ESX guy, but if there were decent instructions and I could do it with, say, a 64GB system, then I think it would be worthwhile - as I'm sure would many others.

    cheers

  • This is the plan.  It'll be after Cisco Live, which is coming up in a few weeks.

  • Could you do a blog post about setting up the virtual lab with a physical switch, to be used with your new WB?

    Also, if you need more routers you could integrate GNS3 into your physical setup and run the 15.x image of the 7200.

  • Hmm, that's also a good idea.  I haven't really touched virtualisation, but I have previously set up dynamips a few times.  What I don't really want to do is get sidetracked too much trying to get up to speed with the virtual stuff, when ultimately it's a means to an end.  I guess I need to see how I can get GNS3 to co-exist with my physical lab.  I will check to see if you guys have any previous blog posts.

    cheers

  • What I don't really want to do is get sidetracked too much trying to get up to speed with the virtual stuff, when ultimately it's a means to an end.

    Then just rent rack time for anything you can't do on your topology.  For now I would just keep working on the relevant v4 content, and then when our v5 content is released along with the new rack rental system you can decide whether you want to build it on your own or use our system.  At least in the meantime, don't get sidetracked in your studies.

  • I purchased a pre-built "workstation" on eBay for less than $900. Here are the specs:

    2x quad-core Xeons @ 3.14 GHz (8 cores total)

    64GB DDR2 RAM (expandable up to 128GB)

    2x SATA drives with a RAID controller

    2x Gigabit NICs

    nVidia Quadro FX 4600 graphics card (not that I really care about the video card anyway)

     

    http://www.ebay.com/itm/380797319099?ssPageName=STRK:MEWNX:IT&_trksid=p3984.m1439.l2649

    I don't need a rack as this is a workstation. This has turned out to be very convenient for me.

  • Hi Brian, could you provide some info on INE's real-gear approach for the full config and TS labs?


    30+ C1921s connected to 3560X 48-port switches?  Maybe throw in some serial interfaces to connect a few routers to a central ISP?

    Did you decide on 15.4T as opposed to 15.3T to future-proof the lab setup?

    Concerning the switches, do you consider the 3850/3650 platforms running IOS-XE 15.0(1)EZ2 an equivalent alternative to 15.0SE?  In terms of QoS, that would also replace the legacy mls qos config with MQC (a quick illustration below).
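
    To illustrate (interface numbers are hypothetical) - trusting DSCP on ingress, legacy style on a 3560/3750 versus the MQC form on a 3850:

        ! 3560/3750, legacy mls qos:
        mls qos
        interface GigabitEthernet0/1
         mls qos trust dscp

        ! 3850, IOS-XE / MQC - classification via class-map/policy-map
        ! (the 3850 trusts DSCP by default; this just shows the MQC form):
        class-map match-all CM-EF
         match dscp ef
        policy-map PM-INGRESS
         class CM-EF
          set dscp ef
        interface GigabitEthernet1/0/1
         service-policy input PM-INGRESS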
