VSP2247: 10Gb and FCoE Real World Design

This was an excellent fire hose of a session… far too much data to write it all down. Highlights of this session include:

  • Ten 1Gb links do not equal a single 10Gb link. With bursting you can use a single 10Gb link far more effectively.
  • With vSphere 5.0, vMotion can saturate two 10Gbps links with 8 concurrent vMotions
  • vSphere 5.0 supports multi-NIC vMotion, but this requires multiple vmkernel ports, not just bonding two 10Gb NICs in a vSwitch.
  • In vSphere 5, NetIOC can use real QoS (802.1p) tags
  • vSphere 5.0 supports LLDP, which helps you map vNICs to HP FlexFabric ports
  • The speaker’s firm analyzed 15,000 workloads and found that a single VM averages only 5Mbps of traffic, including IP storage. Some traffic will burst, of course, but VM network usage is on average very low.
  • Manufacturers use two primary means to manage bandwidth: throttling (HP FlexFabric) and traffic prioritization (Cisco UCS and N1K). Traffic prioritization is more elastic and flexible.
  • Use tools like NetPerf to do network load testing (see the example after this list)
  • Enabling link failure detection is very important. HP uses SmartLink and Cisco uses link state tracking (sketched below).
  • Major design recommendations
    • Always check the HCL. Extremely important, as not all hardware and drivers are on the HCL. A must!
    • Use the vmxnet3 NIC
    • Use Twinax cables, but watch out for some interoperability issues between Cisco and HP.
    • Jumbo frames should only be used for special use cases, as they make traffic shaping harder. Jumbo frames can increase vMotion throughput from 15Gbps to 20Gbps, but do you really need to?
    • Turn on spanning tree PortFast for ESXi-facing ports (config sketch below)
    • Leverage VLAN tagging within vSphere (example below)
    • Configure multi-NIC vMotion in vSphere 5.0 (esxcli sketch below)
    • NEVER trust written documentation. There is a lot of wrong information out there. Do your own testing and analysis.
  • HP FlexFabric recommendations
    • Use LACP etherchannel on northbound switches (example below)
    • If using the N1Kv, leverage HP FlexNIC throttling, not N1K traffic prioritization, as the FlexFabric hardware DOES NOT honor QoS tags. Shame on HP!
    • Example bandwidth allocation: management – 500Mbps, VMs – 2.5Gbps, vMotion – 3Gbps, storage – 4Gbps (adding up to the full 10Gb link)
    • FlexNICs only do outbound traffic throttling, not inbound. Shame on HP, again.
    • In FlexFabric, make sure to enable virtualization of the UUID (serial number)
    • Install Insight Control for vCenter for good network and storage discovery
  • Cisco UCS
    • Configure HW QoS on the NIC adapters
    • Fully configure QoS (end to end)
    • Enable CDP
    • In UCS Manager v1.4, template management is greatly improved and works very well (a single template for all ESXi servers)
    • If using jumbo frames, set an MTU of 9216, not 9000, to allow for VLAN tag overhead (sketch below)
  • Nexus 1000v
    • Use multiple VSMs, or better yet, the Nexus 1010. Some package deals really bring down the N1010 price.
    • Use MAC pinning
    • Be sure to configure system VLANs for critical VLANs that need to be present when ESXi boots (both are shown in the port-profile sketch below)
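
A few configuration sketches to go with the notes above. First, NetPerf load testing. A minimal run looks like this (the target address is a placeholder), with netserver listening on the target and netperf driving a 60-second TCP stream from the source:

    # on the target host
    netserver

    # on the source host: 60-second TCP throughput test
    netperf -H 192.168.10.22 -t TCP_STREAM -l 60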
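
For link failure detection on the Cisco side, link state tracking shuts down the server-facing ports when the upstream links fail, so the ESXi host fails over to its surviving NIC. A minimal Catalyst IOS sketch (interface names are placeholders):

    link state track 1
    !
    interface TenGigabitEthernet1/0/1
     description uplink to core
     link state group 1 upstream
    !
    interface GigabitEthernet1/0/10
     description ESXi host port
     link state group 1 downstream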
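
Spanning tree PortFast on ESXi-facing ports skips the listening/learning delay, so host traffic isn’t blackholed while the switch port converges. On a Cisco IOS switch, a trunked host port would look something like:

    interface GigabitEthernet1/0/10
     description ESXi host uplink
     switchport mode trunk
     spanning-tree portfast trunk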
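
VLAN tagging within vSphere (Virtual Switch Tagging) means trunking the VLANs down to the host and tagging at the portgroup. One way to set it from the ESXi shell (portgroup name and VLAN ID are placeholders):

    esxcli network vswitch standard portgroup set -p VM-Network-100 --vlan-id 100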
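
For multi-NIC vMotion, the idea is two vmkernel ports on the same vSwitch, each pinned to a different active uplink with the other as standby, and vMotion enabled on both. A rough ESXi 5.0 shell sketch (portgroup names, addresses, and vmnic numbers are placeholders):

    # two vMotion portgroups on the same vSwitch
    esxcli network vswitch standard portgroup add -p vMotion-1 -v vSwitch1
    esxcli network vswitch standard portgroup add -p vMotion-2 -v vSwitch1

    # pin each portgroup to a different active uplink (the other is standby)
    esxcli network vswitch standard portgroup policy failover set -p vMotion-1 -a vmnic2 -s vmnic3
    esxcli network vswitch standard portgroup policy failover set -p vMotion-2 -a vmnic3 -s vmnic2

    # one vmkernel port per portgroup
    esxcli network ip interface add -i vmk1 -p vMotion-1
    esxcli network ip interface ipv4 set -i vmk1 -I 192.168.10.11 -N 255.255.255.0 -t static
    esxcli network ip interface add -i vmk2 -p vMotion-2
    esxcli network ip interface ipv4 set -i vmk2 -I 192.168.10.12 -N 255.255.255.0 -t static

    # enable vMotion on both vmkernel ports
    vim-cmd hostsvc/vmotion/vnic_set vmk1
    vim-cmd hostsvc/vmotion/vnic_set vmk2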
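
For the northbound switches, LACP etherchannel bundles the uplinks; "mode active" is what makes it LACP rather than static etherchannel. A Cisco IOS sketch (interface and channel numbers are placeholders):

    interface range TenGigabitEthernet1/0/1 - 2
     channel-group 10 mode active
    !
    interface Port-channel10
     switchport mode trunk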
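
The 9216 MTU would be set end to end: in the UCS QoS system classes and on the upstream Nexus switches. On a Nexus 5000, for example, jumbo frames are enabled through a network-qos policy rather than per interface:

    policy-map type network-qos jumbo
      class type network-qos class-default
        mtu 9216
    system qos
      service-policy type network-qos jumbo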
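
Finally, on the Nexus 1000v, MAC pinning and system VLANs both live in port-profiles. A minimal sketch (VLAN numbers and profile names are placeholders); the system vlan statements keep critical traffic forwarding even before the VEM can reach the VSM:

    port-profile type ethernet system-uplink
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 10-20
      channel-group auto mode on mac-pinning
      no shutdown
      system vlan 10,11
      state enabled

    port-profile type vethernet vmk-mgmt
      vmware port-group
      switchport mode access
      switchport access vlan 10
      no shutdown
      system vlan 10
      state enabled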

Whew! The speaker talked at 100 MPH, and said his slide deck has a lot of backup slides that walk through detailed HP FlexFabric and Cisco UCS configuration.
