TechEd 2014: Converged Networking for Hyper-V

Breakout session: DCIM-B378

This was a great session which covered a multitude of NIC features (VMQ, RSS, SR-IOV, etc.), when to use them (or not), which features are compatible with each other, and other Hyper-V networking topics.

The historical topology for Hyper-V was discrete NICs for different traffic types (management, storage, migration, cluster, VMs, etc.). This resulted in a lot of physical NICs, and it got out of control. Now, the assumption is two 10Gb NICs with the virtual networks all converged over the same physical interfaces. You can also have a converged topology using RDMA for high-speed, low-latency requirements, and SR-IOV can be used to give specific VMs fast, low-latency guest networking.

Demands on the network: throughput, latency, inbound, outbound, north/south, east/west, availability

NIC Teaming (In the host OS)

  • Grouping of 1 or more NICs to aggregate traffic and enable failover
  • Why use it? Better bandwidth utilization
  • Doesn’t get along with SR-IOV or RDMA
  • Recommended mode: switch independent teaming with dynamic load distribution
  • Managed via PowerShell (the NetLbfo cmdlets and *-NetAdapter); a minimal sketch is below
  • You can also create a NIC team in VMM, which provides a wizard for it
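
A minimal PowerShell sketch of creating a team in the recommended mode; the team and adapter names ("ConvergedTeam", "NIC1", "NIC2") are placeholders for your own.

    # Create a team in the recommended mode: switch independent with dynamic load distribution
    New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

    # Verify the team and its member NICs
    Get-NetLbfoTeam
    Get-NetLbfoTeamMember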

NIC Teaming (In the guest OS)

  • Why use it? Better bandwidth utilization
  • Loss of a NIC or NIC cable doesn’t cut off communications in the guest
  • Provides failure protection in a guest for SR-IOV NICs
  • Enabled per vmNIC with Set-VMNetworkAdapter; a sketch follows this list
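
A minimal sketch of the host-side step that permits teaming inside a guest; "SQLVM" is a placeholder VM name, and the commented lines show how the team would then be built inside the guest.

    # Run on the Hyper-V host: allow teaming on the VM's network adapters
    Set-VMNetworkAdapter -VMName "SQLVM" -AllowTeaming On

    # Then, inside the guest, team the adapters as usual, e.g.:
    # New-NetLbfoTeam -Name "GuestTeam" -TeamMembers "Ethernet","Ethernet 2" `
    #     -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic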

VMQ, RSS and vRSS

  • What is it? Different ways to spread traffic processing across multiple processors
  • RSS for host NICs (vNICs) and SR-IOV Virtual Functions (VFs)
  • VMQ for guest NICs (vmNICs)
  • Why use it? Multiple processors are better than one processor. vRSS provides near line rate to a VM on existing hardware.
  • RSS and VMQ work with all other NIC features but are mutually exclusive
  • Get-NetAdapterVmq shows how many hardware queues your NIC has
  • VMQ should always be left on (it is by default)
  • Get-NetAdapterRss from the guest shows the vRSS settings; both checks are sketched below
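
A couple of read-only checks, assuming you just want to confirm the current VMQ and RSS state (no names to substitute):

    # On the host: is VMQ enabled, and how many hardware queues does each NIC expose?
    Get-NetAdapterVmq
    Get-NetAdapterVmqQueue   # which queues are assigned to which vmNICs

    # On the host (for vNIC traffic) or inside a guest (for vRSS): current RSS settings
    Get-NetAdapterRss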

Large Send Offload

  • Allows the NIC to segment large packets for you, saving host CPU
  • LSO gets along with all Windows features and is enabled by default; a quick check is sketched below
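
A quick sketch for confirming LSO is on and re-enabling it if it isn't; "NIC1" is a placeholder adapter name.

    # Confirm Large Send Offload is enabled (it is by default)
    Get-NetAdapterLso

    # Re-enable it if it has been turned off
    Enable-NetAdapterLso -Name "NIC1"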

Jumbo Frames

  • A way to send larger packets on the wire
  • Must be aware of the end-to-end MTU
  • Reduces packet processing overhead
  • Gets along with all other Windows networking features
  • Use it for SMB, live migration, and iSCSI traffic; they will all benefit
  • Test with ping -l 9014 and see if it succeeds or fails (use the do-not-fragment flag too); a sketch is below
  • Must set the size both at the Hyper-V host level and within the guest
  • The virtual switch will detect jumbo frames and doesn’t need manual configuration
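
A sketch of setting and testing jumbo frames; "NIC1" and 192.168.1.10 are placeholders, and the advanced-property strings vary by NIC vendor, so check Get-NetAdapterAdvancedProperty on your hardware first.

    # Set jumbo frames on the physical NIC (display name/value strings are vendor-specific)
    Set-NetAdapterAdvancedProperty -Name "NIC1" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"

    # Verify end to end with the do-not-fragment flag (-f); the ping payload excludes roughly
    # 28 bytes of IP/ICMP headers, so ~8972 bytes of payload corresponds to a 9000-byte IP MTU
    ping -f -l 8972 192.168.1.10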

SR-IOV

  • Highly efficient, low-latency networking; enabling it is sketched after this list
  • Can see 39 Gbps performance over a single 40 Gbps NIC
  • Doesn’t play with NIC teaming (host), but does work with guest NIC teaming
  • ACLs and VM QoS will prevent SR-IOV from being used
  • Should only be used in trusted VMs
  • Can’t have more VMs than NIC VFs (virtual function)/vPorts
  • The NIC can only support a single VLAN and MAC address
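
A minimal sketch of enabling SR-IOV; "SRIOVSwitch", "NIC1" and "SQLVM" are placeholder names.

    # SR-IOV must be enabled at switch creation time - it can't be turned on later
    New-VMSwitch -Name "SRIOVSwitch" -NetAdapterName "NIC1" -EnableIov $true

    # Give a trusted VM's adapter an IOV weight so it is assigned a virtual function (VF)
    Set-VMNetworkAdapter -VMName "SQLVM" -IovWeight 100

    # Check VF support and allocation on the physical NIC
    Get-NetAdapterSriov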

Demands on the Network

  • Bandwidth management – Live migration can saturate a 10Gb/40Gb/80Gb NIC

Quality of Service

  • Hardware QoS and software QoS cannot be used at the same time on the same NIC
  • Software: to manage bandwidth allocation per VM or vNIC
  • Hardware: To ensure storage and data traffic play well together
  • QoS can’t be used with SR-IOV
  • Once a Hyper-V switch is configured for a QoS mode (weight or absolute bandwidth), the mode can’t be changed. Weights are better than absolute limits; a sketch is below.
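
A minimal software-QoS sketch using weights; "ConvergedSwitch", "ConvergedTeam", "LiveMigration" and "SQLVM" are placeholder names, and the weight values are arbitrary examples.

    # Pick the bandwidth mode up front - it can't be changed after the switch is created
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" -MinimumBandwidthMode Weight

    # Add a host vNIC for live migration, then assign relative weights rather than absolute limits
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
    Set-VMNetworkAdapter -VMName "SQLVM" -MinimumBandwidthWeight 10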

Live Migration

  • Microsoft’s vMotion
  • Three transport options: TCP, compression, SMB
  • SMB enables multiple interfaces (SMB Multichannel) and reduced CPU with SMB Direct
  • Gets along with all Windows networking features but can be a bandwidth hog
  • 4-8 is a good number for concurrent migrations (the default is 2); the settings are sketched below
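
A minimal sketch of picking the SMB transport and raising the concurrent-migration limit into the suggested 4-8 range:

    # Use SMB as the live migration transport (gains SMB Multichannel and, with RDMA, SMB Direct)
    # and raise the number of simultaneous live migrations from the default of 2
    Set-VMHost -VirtualMachineMigrationPerformanceOption SMB -MaximumVirtualMachineMigrations 4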

Storage Migration

  • Microsoft’s storage vMotion
  • Traffic flows through the Hyper-V host
  • 4 concurrent migrations is the default and recommended number; see the sketch below
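
For completeness, the corresponding host setting (4 is already the default):

    # 4 simultaneous storage migrations is the default and the recommendation
    Set-VMHost -MaximumStorageMigrations 4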

SMB Bandwidth Limits

  • Quality of service for SMB
  • Enables management of the three SMB traffic types: Live migration, provisioning, VM disk traffic
  • Works with SMB Multichannel, SMB Direct, and RDMA
  • Able to specify bandwidth independently for each of the three types via PowerShell; a sketch is below
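
A minimal sketch, assuming the SMB Bandwidth Limit feature still needs to be installed and that the Default category maps to the "provisioning" type mentioned above; the byte-per-second values are arbitrary examples.

    # Install the SMB Bandwidth Limit feature
    Install-WindowsFeature FS-SMBBW

    # Cap each SMB traffic category independently
    Set-SmbBandwidthLimit -Category LiveMigration -BytesPerSecond 2GB
    Set-SmbBandwidthLimit -Category VirtualMachine -BytesPerSecond 4GB
    Set-SmbBandwidthLimit -Category Default -BytesPerSecond 1GB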

Summary

Converged networking falls apart if you don’t manage the bandwidth. Implement QoS! Don’t just throw bits on the wire and “hope” that everything will be fine, as it probably won’t be when you start having network contention.