UCS iSCSI Boot From SAN

Hi folks,

Although it's not on the CCIE blueprint, I've recently done a couple of UCS implementations with iSCSI SAN boot.  Both went fine, but you do have to have your iSCSI VLAN set as native, otherwise it does not work.
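
If it helps to see that setting concretely, here is a minimal sketch using Cisco's ucsmsdk Python SDK.  The UCSM address, credentials, service profile, vNIC, and VLAN names are all hypothetical, and the VLAN is assumed to already exist in the LAN cloud.

```python
# Minimal sketch (ucsmsdk): mark the iSCSI VLAN as native on the overlay vNIC.
# All names below are hypothetical placeholders.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.vnic.VnicEtherIf import VnicEtherIf

handle = UcsHandle("10.0.0.10", "admin", "password")  # hypothetical UCSM VIP/creds
handle.login()

# Attach the iSCSI VLAN to the overlay vNIC of the service profile and mark it
# native (default_net="yes"), i.e. untagged towards the adapter.
iscsi_vlan = VnicEtherIf(
    parent_mo_or_dn="org-root/ls-esx-host-01/ether-eth-iscsi-a",  # hypothetical vNIC DN
    name="iscsi-boot",   # hypothetical VLAN, already defined in the LAN cloud
    default_net="yes",   # "yes" = native/untagged on this vNIC
)
handle.add_mo(iscsi_vlan, modify_present=True)
handle.commit()
handle.logout()
```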

In the two deployments I've done this hasn't been a problem, although I can foresee it being an issue for some clients (especially where the network team's standards differ from the server team's).

One way round this would be to use LAN pin groups: associate the pin group with an 'iSCSI Boot' vNIC template, and ensure the native VLAN configured on the northbound switch ports marries up with the native VLAN set on the vNIC template (see the sketch below).
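
As a rough sketch of what the UCS side could look like with the same ucsmsdk SDK (the pin group name, uplink port, and template names are hypothetical, and the uplink interfaces are assumed to already exist):

```python
# Rough sketch (ucsmsdk): LAN pin group + 'iSCSI Boot' vNIC template.
# All names and port IDs below are hypothetical placeholders.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.fabric.FabricLanPinGroup import FabricLanPinGroup
from ucsmsdk.mometa.fabric.FabricLanPinTarget import FabricLanPinTarget
from ucsmsdk.mometa.vnic.VnicLanConnTempl import VnicLanConnTempl
from ucsmsdk.mometa.vnic.VnicEtherIf import VnicEtherIf

handle = UcsHandle("10.0.0.10", "admin", "password")
handle.login()

# 1. LAN pin group steering traffic out of a specific uplink on fabric A.
pin_group = FabricLanPinGroup(parent_mo_or_dn="fabric/lan", name="iscsi-pin")
FabricLanPinTarget(parent_mo_or_dn=pin_group, fabric_id="A",
                   ep_dn="fabric/lan/A/phys-slot-1-port-31")  # hypothetical uplink
handle.add_mo(pin_group, modify_present=True)

# 2. 'iSCSI Boot' vNIC template tied to the pin group, with the iSCSI VLAN
#    set as native; the pinned uplink's switch port must use the same native VLAN.
templ = VnicLanConnTempl(parent_mo_or_dn="org-root", name="iscsi-boot",
                         switch_id="A", templ_type="updating-template",
                         pin_to_group_name="iscsi-pin")
VnicEtherIf(parent_mo_or_dn=templ, name="iscsi-boot", default_net="yes")
handle.add_mo(templ, modify_present=True)

handle.commit()
handle.logout()
```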

One thing I have not yet tested is whether you can split the native VLAN at the global level in UCS when using LAN pin groups.

Rgds

Dominic

Comments

  • recently I've done a couple of UCS implementations with iSCSI SAN Boot.

    Hi Dominic, I am not answering your question here, but rather asking one of my own [:P]

    I am wondering what the design decision was for choosing iSCSI over FCoE, considering FCoE performance is much better than iSCSI.

    http://blogs.cisco.com/datacenter/fcoe-versus-iscsi-the-mystery-is-solved/

    Thanks for sharing [Y]

  • Hi Alexander,

    You make a good point, but in the two instances I have been involved with we were not able to use FC/FCoE.

    1)  The customer was very unfamiliar with FC so decided on iSCSI

    2)  Because we needed to attach a backup device for NDMP directly to the storage controllers, we didn't have enough FC ports on the controllers to make some of them 'targets' (for SAN boot) and others 'initiators' (for backup purposes).  We could have split the FC ports on the controllers, but then we would have lost redundancy, as on each controller we would only have had a single initiator and a single target.  Hence iSCSI SAN boot + full backup & redundancy.  Also, as these were ESXi hosts, and iSCSI was just for SAN boot purposes, performance was not really an issue.

    Thanks

    Dominic

  • Hi Dominic,

    A few things:

    Firstly, iSCSI is on both the written and the lab blueprints: first under Storage Networking, and then under UCS as 'boot from remote storage'.

    Secondly, while it's correct that you need to mark the overlay vNIC as native for the VLAN being used by iSCSI, that's the only place it needs to be native. It does not need to be native northbound of the FI. It's the same as a classic server sending regular Ethernet frames into a switch port with an access VLAN defined: the switch adds a dot1q header and ships the frame north to the next switch. In this case the FI is the switch, so it adds the dot1q header and ships the traffic northbound tagged; the sketch below illustrates this. See this workbook lab as an example.

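    As a rough illustration (the ucsmsdk Python SDK again, with hypothetical names), you can read the VLANs back off the overlay vNIC and confirm that only the iSCSI VLAN is native; everything else rides the same uplinks tagged, and the FI tags all of them northbound:

    ```python
    # Sketch: list each VLAN on the overlay vNIC and whether it is native.
    from ucsmsdk.ucshandle import UcsHandle

    handle = UcsHandle("10.0.0.10", "admin", "password")  # hypothetical
    handle.login()

    # default_net == "yes" marks the single VLAN that is native (untagged)
    # towards the adapter; all others are tagged on the wire.
    vlans = handle.query_children(
        in_dn="org-root/ls-esx-host-01/ether-eth-iscsi-a",  # hypothetical vNIC DN
        class_id="VnicEtherIf",
    )
    for v in vlans:
        print(v.name, "native" if v.default_net == "yes" else "tagged")

    handle.logout()
    ```
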
    HTH.

  • Hi Mark,

    Many thanks.  I knew iSCSI was on the blueprint, I just didn't realise remote boot using iSCSI was.

    I've looked at the workbook lab and it's very useful, many thanks.  I shall test this on the next install, as last time the only way I could get it to work was to make the iSCSI VLAN the native one across the board, so I must have had something incorrect.

    Rgds

    Dominic

  • Let me know when you have a chance to lab it up again and I'd be happy to assist with any questions or TS.

  • 1)  The customer was very unfamiliar with FC so decided on iSCSI

    2)  Because we needed to attach a backup device for NDMP directly to the storage controllers, we didn't have enough FC ports on the controllers to make some of them 'targets' (for SAN boot) and others 'initiators' (for backup purposes).  We could have split the FC ports on the controllers, but then we would have lost redundancy, as on each controller we would only have had a single initiator and a single target.  Hence iSCSI SAN boot + full backup & redundancy.  Also, as these were ESXi hosts, and iSCSI was just for SAN boot purposes, performance was not really an issue.

    Thanks Dominic for sharing your experience. [B]
