Use a break-out switch to connect CSR1000v to physical switches

I have been trying to connect my physical switches (4x Catalyst 3560) through a breakout switch (Catalyst 4948) to CSR1000v instances on VMware, without any success.

On previous occasions I had successfully connected GNS3 running on CentOS to physical switches via a breakout switch.

 

Following is my setup:

On the breakout switch, the port connecting to the VM host:

interface GigabitEthernet1/24

 switchport trunk encapsulation dot1q

 switchport trunk allowed vlan 101-104

 switchport mode trunk

 l2protocol-tunnel cdp

 l2protocol-tunnel stp

 l2protocol-tunnel vtp

 no cdp enable

 

break_sw (g1/1) connected to physical SW_1 (fa0/1)

interface GigabitEthernet1/1

 description R1 > SW1

 switchport access vlan 101

 switchport mode dot1q-tunnel

 l2protocol-tunnel cdp

 l2protocol-tunnel lldp

 l2protocol-tunnel stp

 l2protocol-tunnel vtp

 no cdp enable

 spanning-tree portfast

end

 

Port g1/24 of break_SW is connected to eth0 of the VM host. I created a virtual port group on the VM host, set the VLAN ID to 101, and assigned this VM network to g3 of the CSR1000v (R1).
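For reference, that port-group step can also be done from the ESXi shell (a sketch; the port group name PG_VLAN101 and vSwitch name vSwitch1 are assumptions):

esxcli network vswitch standard portgroup add --portgroup-name=PG_VLAN101 --vswitch-name=vSwitch1

esxcli network vswitch standard portgroup set --portgroup-name=PG_VLAN101 --vlan-id=101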

Then I created a dot1q subinterface on the router:

interface GigabitEthernet3.300

 encapsulation dot1q 300

 ip address 100.0.0.1 255.255.255.0

 

Then on physical switch SW1, create VLAN 300 and enable trunking on fa0/1:

vlan 300

interface FastEthernet0/1

 switchport trunk encapsulation dot1q

 switchport mode trunk

 switchport trunk allowed vlan 300

 

interface Vlan300

 ip address 100.0.0.11 255.255.255.0

 

After all these steps, I went to the CSR1000v (R1) and tried to ping the VLAN 300 interface of the physical switch (SW1), with no luck. I see no ARP entry on the router and no MAC address in VLAN 300 on the physical switch.

But I can see CDP entries on both devices (R1 and SW1).

Am I missing a step? I suspect the issue lies on the VM host. Is it even possible to use a VM host this way to connect to a breakout switch and tunnel VLANs?

I appreciate any help on this.

 

thanks

Ahmed.

Comments

  • For the v5 topology you don't need a breakout switch. Run a trunk port from SW1 to the ESXi server and set that vNIC up to allow all VLANs. Add that vNIC to the vSwitch all your CSR1000vs are connected to and you are set. 
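    On the switch side, the trunk toward the ESXi server would look something like this (the interface name is an assumption):

    interface GigabitEthernet0/10

     description trunk to ESXi host

     switchport trunk encapsulation dot1q

     switchport mode trunk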

  •  

    How can I set the vnic to allow all vlans? 

     

    I set the VLAN ID to All (4095) in the port group properties. I am not sure why my packets are not getting tagged.

     

    I can ping from the switch to the CSR1000v on the native VLAN, but not from any other VLAN.

     

    Please help!

  • I am having this exact same issue; I've been trying to solve it for several days now.

  • 1. Make sure under your Port Group Properties it has VLAN ID: All (4095)

    2. Enable promiscuous mode under vSwitch properties > Security tab

    3. Make sure on your switch side you are allowing all VLANs (allowed by default)
         3a. Do your due diligence on the interface: make sure it's a trunk, encapsulation is dot1q, etc.

     

    [screenshot]
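    Steps 1 and 2 can also be set from the ESXi command line (a sketch; the port group name Lab_PG and vSwitch name vSwitch1 are assumptions):

    esxcli network vswitch standard portgroup set --portgroup-name=Lab_PG --vlan-id=4095

    esxcli network vswitch standard policy security set --vswitch-name=vSwitch1 --allow-promiscuous=true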

  • Thanks for the info. It turns out that I did configure everything correctly, but I was trying it with vIOS, which for some reason doesn't seem to work properly with the VLANs. From vIOS I can ping from the VM to the physical switch, but only directly and not over a trunk. However, I tried a CSR1000V with the exact same configuration and was able to ping from the VM to the physical switch over different VLANs.

  • I'm having the same problem with CSR1000v (not vIOS, as another poster mentioned). Were you able to fix it? I found that the symptoms are intermittent and can change if I reboot the host. I've worked through both QinQ and direct trunk configs.

    I have 10 csr1000v routers split between 2 hosts. When I reboot, it seems that I can get the routers to connect between the hosts but it stops working at some point (typically after I begin studying, bleh).

  • Poor attention to detail on my part...I didn't correctly migrate everything from the QinQ setup. I have the full environment working now.

  • 1. Make sure under your Port Group Properties it has VLAN ID: All (4095)

    2. Enable promiscuous mode under vSwitch properties > Security tab

    3. Make sure on your switch side you are allowing all VLANs (allowed by default)
         3a. Do your due diligence on the interface: make sure it's a trunk, encapsulation is dot1q, etc.

     

    [screenshot]

     

    I just wanted to take a minute to say thanks!!  I was building an SP topology using GNS3 with a breakout switch (a 3560 running IOS 15) and I've been messing with it for about 4 hours.  Changing the security setting to allow promiscuous mode did the trick!  You saved me from going out and buying a physical box.

    I was trying something with an IBM x3850 M2 where I'm using one of the NICs through a vSwitch tied to an Ubuntu VM with GNS3, then doing the cloud connection through a dot1q trunk.  It's a pretty robust type of setup, but I want to run CSR1000Vs, XRvs, and of course 7206s for transit.  Most likely I'll configure the XRvs and CSR1000Vs as individual vSwitches back to the 3560s/ME3400, but in this particular case, coming off of GNS3, it was a requirement.  Thanks again!

     

    Ethan M.
    CCIE #44000 (R/S)

  • Hi Guys,

    I'm having the same problem. I've used vmnic0 for my telnet access to CSRs through my home network and it works perfectly fine.

    Now I've connected SW1 to vmnic1 and added it to the existing VM Network, and I can't get connectivity. Here is a screenshot:

    [screenshot]

    On the switch I used a standard trunk config, and I'm using a bare-metal server running ESXi like most of you guys here.

    Any suggestions what I'm doing wrong here?

  • I'm having a similar issue. 

    vmnic0 is connected to my computer using 192.168.1.0/24

     host is .1

     computer is .10

    I can access all my CSR1000v routers without any issues.

    I added vmnic1 to the vSwitch that has all the devices and set promiscuous mode for that VM network.

    Then, when I attach a cable from NIC 2 on the server to SW1 fa0/1, which has a standard dot1q trunk config, I lose connectivity to my VMware host box from my directly connected workstation. 

    Anyone have any idea why this would happen? Any help is much appreciated!

     

    [screenshot]

    [screenshot]

     

     

  • Hi,

     

    I have successfully connected some CSRvs to real switches. I agree that this is not needed, but I want to run QoS tests with MPLS connected via real Catalyst switches.

     

    My setup is virtual switches connected to virtual NICs in an Ubuntu Server running GNS3, tunneled via a breakout switch as in the CCIEv4 configuration.

     

    The only thing that I wasn't able to do is tunnel CDP (and other L2 protocols like LLDP or LACP) from the CSRv.

     

    Evidence:

     

    Switch Side

    SW3#sh ip int bri | e unass

    Interface              IP-Address      OK? Method Status                Protocol

    Vlan78                 192.168.78.3    YES manual up                    up      

    Vlan100                192.168.100.3   YES manual up                    up      

    Vlan123                192.168.123.3   YES manual up                    up      

    Loopback0              150.1.9.9       YES NVRAM  up                    up      

    SW3#

    SW3#sh inv

    NAME: "SW3", DESCR: "Cisco Catalyst 3550 24 10/100 baseT ports + 2 Gig uplinks fixed configuration Layer 2/3 Ethernet Switch"

    PID: WS-C3550-24-EMI   , VID: B0 , SN: CHK0611W1AG

     

     

    SW3#

    SW3#sh ip arp vl

    SW3#sh ip arp vlan 123

    Protocol  Address          Age (min)  Hardware Addr   Type   Interface

    Internet  192.168.123.3           -   0009.4376.4980  ARPA   Vlan123

    Internet  192.168.123.7          41   000c.29b7.f971  ARPA   Vlan123

    SW3#

    SW3#ping 192.168.123.7

     

    Type escape sequence to abort.

    Sending 5, 100-byte ICMP Echos to 192.168.123.7, timeout is 2 seconds:

    !!!!!

    Success rate is 100 percent (5/5), round-trip min/avg/max = 4/16/68 ms

    SW3#

    SW3#

    SW3#sh mac add | i 000c.29b7.f971

       1    000c.29b7.f971    DYNAMIC     Fa0/3

     123    000c.29b7.f971    DYNAMIC     Fa0/3

      78    000c.29b7.f971    DYNAMIC     Fa0/3

    SW3#

    SW3#sh cdp n f0/3

    Capability Codes: R - Router, T - Trans Bridge, B - Source Route Bridge

                      S - Switch, H - Host, I - IGMP, r - Repeater, P - Phone

     

    Device ID        Local Intrfce     Holdtme    Capability  Platform  Port ID

    SW3#

     

    CSRv Side

     

    R7#sh ip arp g1.123

    Protocol  Address          Age (min)  Hardware Addr   Type   Interface

    Internet  192.168.123.7           -   000c.29b7.f971  ARPA   GigabitEthernet1.123

    Internet  192.168.123.3         215   0009.4376.4980  ARPA   GigabitEthernet1.123

    Internet  192.168.123.1          42   c000.0bc2.0000  ARPA   GigabitEthernet1.123

    R7#

    R7#

    R7#sh cdp n 

    Capability Codes: R - Router, T - Trans Bridge, B - Source Route Bridge

                      S - Switch, H - Host, I - IGMP, r - Repeater, P - Phone, 

                      D - Remote, C - CVTA, M - Two-port Mac Relay 

     

    Device ID        Local Intrfce     Holdtme    Capability  Platform  Port ID

     

    Total cdp entries displayed : 0

    R7#

    R7#sh inv

    NAME: "Chassis", DESCR: "Cisco CSR1000V Chassis"

    PID: CSR1000V          , VID: V00, SN: 99U716ZAV98

     

    NAME: "module R0", DESCR: "Cisco CSR1000V Route Processor"

    PID: CSR1000V          , VID: V00, SN: JAB1303001C

     

    NAME: "module F0", DESCR: "Cisco CSR1000V Embedded Services Processor"

    PID: CSR1000V          , VID:    , SN:            

     

     

    R7#

     

    I have an MTU of 1496 across this topology.

     

    R8 is another CSRv

    R8#ping 192.168.78.7 df-bit size 1496

    Type escape sequence to abort.

    Sending 5, 1496-byte ICMP Echos to 192.168.78.7, timeout is 2 seconds:

    Packet sent with the DF bit set

    !!!!!

    Success rate is 100 percent (5/5), round-trip min/avg/max = 7/91/126 ms

    R8#

    R8#ping 192.168.78.7 df-bit size 1497

    Type escape sequence to abort.

    Sending 5, 1497-byte ICMP Echos to 192.168.78.7, timeout is 2 seconds:

    Packet sent with the DF bit set

    .

    Success rate is 0 percent (0/1)

    R8#

     

    Regards

  • So I just got my switches set up today - I had a similar dilemma and thought about what to do.

    The G2 interfaces on my CSRs are hardcoded to my home LAN subnet - they plug into a wireless bridge, and therefore I can TFTP the configs to them, save them on bootflash, and then just reload blank or manually add the IP again - no biggie.

    I also hardcoded IPs for VLAN 1 on my switches - but here was my issue - my switches plug into the CSR1000v on their G0/1 interface, but the management subnet is on G0/2.

    I thought and stewed about this a while because I really wanted to telnet to my switches and I wasn't getting that reachability - I thought about maybe configuring a VLAN 1 on the router images... but I think the problem was that the 192.168.2.x subnet basically wanted to be on one interface for the CSRs and another for the switch configs.

    I stewed on this a while and then had a genius solution - I just plugged gig 0/2 into the same wireless bridge and hardcoded it for VLAN 1 access. Now both the management port of my CSR and gig 0/2 of SW1 are plugged into the wireless bridge and easily reachable, and I can configure any VLAN for g0/1, which goes into the server, and ping the 155.1.x.x VLANs that I configure on the switch across the trunk. This keeps management of the switches on a network separate from the lab.
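    The management arrangement described above would look roughly like this (all addresses here are made up for illustration):

    ! SW1: management over VLAN 1, g0/2 plugged into the wireless bridge

    interface Vlan1

     ip address 192.168.2.21 255.255.255.0

    ! CSR: dedicated management interface on the same bridge

    interface GigabitEthernet2

     ip address 192.168.2.11 255.255.255.0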
