vFC on Nexus 7K
I have an issue with a setup combining SR-IOV, Microsoft Hyper-V + SCVMM, UCS B-Series, and FCoE traffic.
Topology: I have a UCS blade running Hyper-V as the hypervisor, hosting Windows Server 2012 and Linux guest OSes.
I am trying to transport FCoE traffic between the guest OS and a JBOD target using a dedicated storage VLAN.
Networking: I am using a VLAN/VSAN pair (ID 10) in UCS Manager and have trunked VSAN 10 to the virtual FC (vFC) interfaces created on the Fabric Interconnect. This VLAN is trunked to the MDS switch through a couple of Nexus 5Ks and Nexus 7Ks. The 7Ks are passive except for VLAN 10 switching.
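For reference, the FCoE VLAN-to-VSAN mapping on the N5K looks roughly like this (a sanitized sketch; the VLAN/VSAN ID matches my setup, nothing else is literal):

```
feature fcoe
vsan database
  vsan 10
! map the dedicated storage VLAN to the VSAN
vlan 10
  fcoe vsan 10
```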
In short, my path is:
Guest OS on blade ==> FI-A (vHBA) ==> N5K (NPV mode) ==> N7K ==> MDS1 ==> JBOD#1
VM-FEX is in play, but without any specific configuration, and Cisco claims it is supported with Hyper-V.
I have configured NPV mode on the N5K: vFC interfaces are used to attach to the MDS and multihop FCoE is defined. The physical interface modes are correct (NP on the N5K, F on the MDS); this has been checked.
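Sanitized, the vFC side of the N5K config is along these lines (interface numbers are placeholders; the NP-side vFC faces the MDS direction, the F-side vFC faces the FI):

```
feature npv
feature fcoe

! uplink vFC toward the fabric (NP mode)
interface vfc101
  bind interface Ethernet1/1
  switchport mode NP
  switchport trunk allowed vsan 10
  no shutdown

! downlink vFC toward the Fabric Interconnect (F mode)
interface vfc201
  bind interface Ethernet1/10
  switchport mode F
  switchport trunk allowed vsan 10
  no shutdown
```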
My UCS B-Series chassis is populated with M4 blades and VIC 1340 cards.
Note that I am using dynamic vNIC configuration, but I have checked vNIC creation on my FI; there is no issue at this level.
All FLOGIs are correctly sent and received (a show flogi database shows the FCID and pWWN/WWN correctly).
I have one zone with all WWNs (initiator and target) defined; this zone is configured on both the N5K and the MDS.
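The zoning itself is nothing exotic; sanitized, it is along these lines (zone/zoneset names and the WWNs below are placeholders, not my real ones):

```
zone name Z_HV_JBOD vsan 10
  member pwwn 20:00:00:25:b5:aa:bb:01   ! initiator (blade vHBA)
  member pwwn 21:00:00:04:cf:aa:bb:01   ! target (JBOD)
zoneset name ZS_VSAN10 vsan 10
  member Z_HV_JBOD
zoneset activate name ZS_VSAN10 vsan 10
```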
I have tried FCoE before with ESXi and DirectPath I/O with no problem.
In short, the issue is: once SR-IOV is activated, no FCoE traffic passes anymore between the blade and my disk target.
There is very basic FCoE QoS defined, with DCBX/PFC and a no-drop queue configured.
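By "basic FCoE QoS" I mean essentially the N5K defaults; as I understand the stock config, the relevant part of the default network-qos policy is:

```
policy-map type network-qos fcoe-default-nq-policy
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158
```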
I have checked the dynamic vNIC policy and the adapter policy attached in UCS Manager; nothing special there (54 max).
I am not an expert, but this seems strange; SR-IOV should be compatible with FCoE, shouldn't it?
Has somebody ever tried this setup? I suspect an MTU issue, but I cannot prove it.
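In case the MTU suspicion is worth chasing, these are the show commands I would use to compare the no-drop class MTU and the vFC state along the path (interface IDs are placeholders from my sketch above):

```
show queuing interface ethernet 1/1
show policy-map type network-qos
show interface vfc101
show flogi database
```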
Thanks for any help.