VMworld 2016: Extreme Performance: DRS

Session INF8959

300,000 vCenter deployments, 94% with DRS enabled

Quick Facts

  • Faster power on for large clusters: 6.5 is 3x faster than 5.5 and 6.0.
  • 5x lower CPU utilization in 6.5 than previous versions
  • 6.5 has better VM placements

DRS ensures resource availability – it does this in two ways:

  • Effective initial placement – Use the right host
  • Efficient load balancing – Moving VMs to different hosts
  • DRS collects 20 VM performance metrics and 5 host metrics, such as CPU ready time, memory swapped, memory active, shared memory, CPU used max, and CPU used average
  • Application performance is the primary goal of DRS

Factors Impacting DRS Performance

  • Migration threshold – Moving the slider to the left makes DRS less aggressive (fewer migrations); see the read-only sketch after this list
  • Rules – Too many rules may prevent DRS from balancing the cluster
  • Reservations, limits, shares – Do not set reservations unless absolutely necessary
  • VM Overrides – Custom DRS settings for a VM.
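
To make these settings concrete, here is a minimal read-only pyVmomi sketch (my own, not from the session) that lists each cluster's DRS automation level, migration threshold, and rule count. The vCenter address and credentials are placeholders, and the exact mapping of the API's vmotionRate value to the UI slider positions is worth verifying against the documentation.

```python
# Read-only sketch: list DRS settings that commonly impact performance.
# Assumes pyVmomi is installed and a vCenter is reachable; host/credentials below
# are placeholders. Property names follow the vSphere API's ClusterDrsConfigInfo.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VC_HOST, VC_USER, VC_PASS = "vcenter.example.com", "administrator@vsphere.local", "secret"

ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host=VC_HOST, user=VC_USER, pwd=VC_PASS, sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
for cluster in view.view:
    drs = cluster.configurationEx.drsConfig
    rules = cluster.configurationEx.rule or []
    print(f"{cluster.name}: DRS enabled={drs.enabled}, "
          f"automation={drs.defaultVmBehavior}, "
          f"migration threshold (vmotionRate)={drs.vmotionRate}, "
          f"rules={len(rules)}")

Disconnect(si)
```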

DRS Faults

  • A fault is reported when DRS tries to fix something but cannot.

DRS Performance Case Studies

  • Case 1: How does DRS react to spikes in workload? DRS reacts to spikes and will move loads.
  • Case 2: Does DRS prefer moving heavy or light VMs? DRS prefers to move medium workloads to restore balance faster.
  • Case 3: Why is memory utilization not balanced? DRS considers active memory + 25% of idle memory. It will not perfectly “balance” memory across all hosts.

Observations

  • Always right-size your VMs
  • Occasional swapping is not bad, constant swapping is bad

 

VMworld 2015: DRS Advancements in vSphere 6.0

Session INF5306

DRS is the #1 scheduler in the datacenter today

92% of clusters have DRS enabled. 79% are in fully automated mode. 87% have affinity and anti-affinity rules.

43% of clusters have resource pools enabled and use them

99.8% of clusters use maintenance mode

Bottom line: DRS is popular

DRS collects innumerable stats every 20 seconds for its calculations

  • CPU Reserved
  • Memory reserved
  • CPU active, run and peak
  • Memory overhead and growth rate
  • Active, consumed and idle memory
  • Shared memory pages, balloon, swapped, etc.
  • VM happiness is the most important metric (if demands/entitlements are always met, then the VM is ‘happy’); a toy sketch follows this list
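
As a trivial illustration of the happiness idea (my own simplification, not VMware code): a VM is happy when what it is delivered meets its entitlement for every resource.

```python
# Toy illustration of "VM happiness": a VM is happy when delivered resources meet
# its entitlement for every resource type. The 0.98 cut-off is an arbitrary assumption.
def happiness_ratio(delivered: dict, entitled: dict) -> float:
    """Worst-case ratio of delivered to entitled resources across all resource types."""
    return min(delivered[r] / entitled[r] for r in entitled)

entitled = {"cpu_mhz": 1800, "mem_mb": 4096}
delivered = {"cpu_mhz": 1750, "mem_mb": 4096}
ratio = happiness_ratio(delivered, entitled)
print(f"ratio={ratio:.2f} -> {'happy' if ratio >= 0.98 else 'unhappy'}")
```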

Constraints for initial placement and load balancing

  • Constraints are a big part of decision making
  • HA admission control policies
  • Affinity and anti-affinity rules
  • # concurrent vMotions
  • Time to complete vMotion
  • Datastore connectivity
  • vCPU to pCPU ratio
  • Reservations, limits and share settings
  • Agent VMs
  • Special VMs (SMP-FT, vFlash, etc.)

Cost Benefit and minGoodness

  • Cost-benefit analysis – VM happiness is evaluated against the cost of a migration
  • Cost considerations: each vMotion consumes roughly 30% of a CPU core on a 1Gb network and 100% of a core on a 10Gb network, plus the memory consumed by the ‘shadow VM’ at the destination host
  • Benefit considerations: a positive performance benefit to VMs at the source host, and the overall workload distribution has to be much better
  • Each analysis results in a rating from -2 to +2
  • MinGoodness (the migration threshold slider) also ranges from -2 to +2 and can be set by the user; a conceptual sketch follows this list
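
A purely conceptual sketch of the filtering idea (the real DRS algorithm is far more involved): each candidate move gets a cost-benefit rating in the -2 to +2 range, and only moves that clear the minGoodness threshold become recommendations.

```python
# Illustrative only: candidate migrations rated from -2 to +2 (benefit minus cost),
# filtered by a minGoodness threshold (the migration-threshold slider).
from dataclasses import dataclass

@dataclass
class CandidateMove:
    vm: str
    src: str
    dst: str
    rating: float   # cost-benefit rating in [-2, +2]

def recommend(moves, min_goodness: float):
    """Keep only moves whose rating clears the minGoodness threshold."""
    return [m for m in moves if m.rating >= min_goodness]

moves = [
    CandidateMove("db01",  "esx1", "esx2", +1.6),
    CandidateMove("web03", "esx1", "esx3", +0.3),
    CandidateMove("app07", "esx2", "esx3", -0.8),
]
for m in recommend(moves, min_goodness=1.0):  # a more aggressive slider would lower this threshold
    print(f"migrate {m.vm}: {m.src} -> {m.dst} (rating {m.rating:+.1f})")
```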

Takeaway

  • VM happiness is the #1 influence
  • Influenced by real time stats, constraints and cost/benefit analysis
  • A small imbalance should not be a concern
  • Default setting of DRS aggressiveness is best

New Features in vSphere 6.0

  • Network-aware DRS – ability to specify bandwidth reservation for important VMs
  • Initial placement based on VM bandwidth reservation
  • Automatic remediation in response to reservation violations due to pNIC saturation, pNIC failure
  • Tight integration with vMotion – DRS makes a unified recommendation for cross-vCenter vMotion
  • Runs a combined DRS and SDRS algorithm to generate a (host, datastore) tuple
  • CPU, memory, and network reservations are considered as part of admission control
  • All the constraints are respected as part of the placement
  • VM-to-VM affinity and anti-affinity rules are carried over during cross-cluster and cross-vCenter migration
  • Initial placement enforces the affinity and anti-affinity constraints
  • Improved overhead computation – greatly improves the consolidation during power-on

Cluster Scale and Performance Improvements

  • Increased cluster capacity to 64 hosts and 8K VMs
  • DRS and HA extensively tested at maximum scale for VCSA and Windows
  • Up to 66% performance increase in vCenter (power on, DRS calcs, etc.)
  • VM power-on latency has been reduced by 25%
  • vMotion operation is 60% faster
  • Faster host maintenance mode

Extensive Algorithm Usage

  • DRS is the lynchpin of the SDDC vision
  • vSphere HA
  • VUM
  • vCloud Director
  • vCloud Air
  • Fault Tolerance
  • ESX Agent Manager

Best Practices

  • Tip #1: Full storage connectivity
  • Tip #2: Power management settings – Set BIOS to OS control and vSphere to balanced.
  • Tip #3: Threshold setting – Default of 3 works great.
  • Tip #4: Automation level – Fully automated is best choice
  • Tip #5: Beware of resource pool priority inversion. Make sure that cramming more VMs won’t dilute the shares.
  • Tip #6: Avoid setting CPU-affinity

Future Directions

Proactive HA

  • Proactive evacuation of VMs based on hardware health metrics
  • Partnering with hardware vendors to integrate and certify
  • Moderately degraded and severely degraded modes
  • VI admin can configure the DRS action for each health state event
  • Host maintenance mode and host quarantine mode
  • VI admin can filter events

Network DRS v2

  • Take pNIC saturation into account
  • Tighter integration with NSX
  • Ensure mice and elephant flows don’t share the same network path
  • Network layout topology – leverage topology for availability and performance optimizations

Proactive DRS

  • Tighter integration with the vROps analytics engine
  • Periodic and seasonality demands incorporated into decision making

What-if Analysis

  • A sandbox tab in UI to run ‘what if’ analysis
  • VM availability assessment by simulating host failures
  • Cluster over commitment during maintenance window

Auto-scale of VMs

  • Horizontal and vertical scaling to maintain end-to-end SLA guarantees
  • Spin-up and spin-down VMs based on workload
  • Will first be offered as a service in vCloud Air
  • Increase CPU and memory resources to meet performance goals
  • CPU/memory hot add is an additional option for DB tier

Hybrid DRS

  • Make vCloud Air a seamless extension of enterprise datacenter capacity through policy-based scheduling

 

 

VMworld 2015: 5 Functions of SW Defined Availability

Session: INF4535

Duncan Epping, Frank Denneman

Introduction to SDA (Software defined availability): VM, server, storage, data center, networking, management. Business only cares about the application, not the underlying infrastructure.

vSphere HA

  • Configured through vCenter but not dependent on it
  • Each host has an agent (FDM) installed to monitor state
  • HA restarts VMs when failure impacts those VMs
  • Heartbeats via network and storage to communicate availability
  • Can use management network or VSAN network if VSAN is enabled
  • Need spare resources
  • Admission control – Allows you to reserve resources in case of a host failure
  • Admission control guarantees VM receives their reserved resources after a restart, but does not guarantee that VMs perform well after a restart.
  • Best practices: Select policy that best meets your needs, enable DRS, simulate failures to test performance
  • Percentage-based is by far the most used and is what Duncan recommends
  • Duncan went through various failure scenarios (host failure, host isolation, storage failure) and how HA restarts the VMs.
  • Use VMCP (new in 6.0) [VM component protection]. Helps protect against storage connectivity loss.
  • Generic recommendations: disable “host monitoring” during network maintenance; make sure you have a redundant management network; enable PortFast; use admission control

DRS

  • DRS provides load balancing and initial placement
  • DRS is the broker of resources between producers and consumers
  • DRS goal is to provide the resources the VM demands
  • DRS provides cluster management (maintenance mode, affinity/anti-affinity rules)
  • DRS keeps VMs happy; it doesn’t perfectly balance each host
  • DRS affinity rules: Control the placement of VMs on hosts within a cluster.
  • DRS highest priority is to solve any violation of affinity rules.
  • VM-Host groups can be configured with mandatory (“must”) or preferential (“should”) affinity and anti-affinity rules
  • A mandatory (must) rule limits HA, DRS and the user
  • Why use resource pools? Powerful abstraction for managing a group of VMs. Set business requirements on a resource pool.
  • Bottom line is resource pools are complex, and VMs may not get the resources you think they should. Only use them when needed.
  • Keep the number of affinity rules as low as possible, and use preferential rules where you can.
  • Tweak aggressiveness slider if cluster is unbalanced.

SDRS and SIOC

  • Storage I/O Control is not cluster-aware; it is focused on storage
  • Enabled at the datastore level
  • Detects congestion and monitors average IO latency for a datastore
  • Latency above a particular threshold indicates congestion
  • SIOC throttles IOs once congestion is detected
  • Control IOs issued per host
  • Based on VMs shares, reservations, and limits
  • SDRS runs every 8 hours to check balance, looking at the previous 16 hours of statistics and using the 90th percentile (see the sketch after this list)
  • Capacity threshold per datastore
  • I/O metric threshold per datastore
  • Affinity rules are available
  • SDRS is now aware of storage capabilities through VASA 2.0 (array thin provisioning, dedupe, auto-tiering, snapshot)
  • SDRS integrated with SRM
  • Full support for vSphere Replication
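
The congestion-detection idea can be sketched in a few lines (illustrative only; the 30 ms threshold here is a placeholder, not necessarily the SIOC default for your version):

```python
# Sketch of the congestion-detection idea: flag a datastore when the 90th-percentile
# latency over an observation window exceeds a threshold.
import statistics

def congested(latencies_ms, threshold_ms=30.0, percentile=90):
    """Return (is_congested, percentile latency) for a window of latency samples."""
    p = statistics.quantiles(latencies_ms, n=100)[percentile - 1]
    return p > threshold_ms, p

window = [4, 6, 5, 7, 35, 40, 6, 5, 8, 55, 6, 7]   # ms samples over the window
is_congested, p90 = congested(window)
print(f"p90 latency = {p90:.1f} ms -> {'congested' if is_congested else 'ok'}")
```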

vMotion

  • Migrate live VM to a new compute resource
  • vSphere 6.0: cross vCenter vMotion, long-distance vMotion, vMotion to cloud
  • You may not realize it, but there has been a lot of innovation and many new features here since its introduction in 2003
  • Long-distance vMotion supports round-trip latencies of up to 150 ms; no WAN acceleration is needed
  • vMotion anywhere: vMotion cross-vCenters, vMotion across hosts without shared storage, easily move VMs across DVS, folders and datacenters.

vSphere Network IO Control

  • Outbound QoS
  • Allows you to partition network resources
  • Uses resource pools to differentiate between traffic types (VM, NFS, vMotion, etc.)
  • Bandwidth allocation: Shares and reservations. NIOC v3 allows configuration of bandwidth requirements for individual VMs
  • DRS is aware of network reservations as well.
  • Bandwidth admission control in HA
  • Set reservations to guarantee a minimum amount of bandwidth for critical network traffic; use VM-level reservations sparingly (see the sketch after this list)
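
A quick arithmetic sketch of how shares split up a saturated uplink (the share values and link speed are made up):

```python
# Made-up example: when a 10 Gbit/s uplink is saturated, shares determine each
# traffic type's slice of the contended (non-reserved) bandwidth.
link_gbps = 10.0
shares = {"VM traffic": 100, "vMotion": 50, "NFS": 50}   # hypothetical share values

total = sum(shares.values())
for traffic, s in shares.items():
    print(f"{traffic}: {link_gbps * s / total:.1f} Gbit/s under contention")
# VM traffic: 5.0, vMotion: 2.5, NFS: 2.5
```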

 

VMworld 2014: vSphere HA Best Practices and FT Preview

Session BCO2701. This was very fast paced, and I missed jotting down a lot of the slide content. If you attended VMworld then I recommend you listen to the recording to get all of the good bits of information.

vSphere HA – what’s new in 5.5

  • VSAN Support
  • AppHA integration

What is HA? Protects against 3 failures:

  • Host failures, VM crashes
  • Host network isolation and datastore PDL (permanent device loss)
  • Guest OS hangs/crashes and application hangs/crashes

Best Practices for Networking and Storage

  • Redundant HA network
  • Fewest possible hops
  • Consistent portgroup names and network labels
  • Route based on originating port ID
  • Failback policy = no
  • Enable PortFast
  • MTU size the same

Networking Recommendations

  • Disable host monitoring if network maintenance is going on
  • vmknics for vSphere HA on separate subnets
  • Specify additional network isolation addresses
  • Each host can communicate with all other hosts

Storage Recommendations

  • Storage heartbeats – All hosts in the cluster should see the same datastores

Best Practices for HA and VSAN

  • Heartbeat datastores are not necessary in a VSAN cluster
  • Add a non-VSAN datastore to cluster hosts if VM MAC address collisions on the VM network are a significant concern
  • Choose a datastore that is fault isolated from VSAN network
  • Isolation address – use the default gateways for the VSAN networks
  • Each VSAN network should be on a unique subnet

vSphere HA Admission Control

  • Select the appropriate admission control policy
  • Enable DRS to maximize the likelihood that VM resource demands are met
  • Simulate failures to test and assess performance
  • Use the impact assessment fling
  • Percentage-based is often the best choice, but you need to recalculate when hosts are added or removed (see the worked example after this list)
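
A worked example of the recalculation point (simple arithmetic, assuming a homogeneous cluster and tolerance for one host failure):

```python
# Percentage-based admission control: to tolerate one host failure in a homogeneous
# cluster, reserve roughly 1/N of the cluster's CPU and memory capacity.
def reserve_percentage(num_hosts: int, failures_to_tolerate: int = 1) -> float:
    return 100.0 * failures_to_tolerate / num_hosts

for n in (4, 5, 10):
    print(f"{n} hosts -> reserve ~{reserve_percentage(n):.0f}% CPU and memory")
# 4 hosts -> 25%, 5 hosts -> 20%, 10 hosts -> 10%; recalculate whenever the host count changes
```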

Tech Preview of FT

  • FT will support up to 4 vCPUs and 64GB of RAM per VM
  • FT now uses separate storage for the primary and secondary VMs
  • The new FT method does not keep CPUs in lockstep, but relies on very fast checkpointing

Tech Preview HA

  • VM Component protection for storage is a new feature
  • Problem: APD and PDL situations leave VMs without storage
  • Solution: Detects the condition and restarts affected VMs on unaffected hosts
  • Shows a GUI with options for what you want to protect against

Tech Preview of Admission control fling

  • Assesses the impact of losing a host
  • Provides sample scenarios to simulate

 

VMworld 2013: vSphere 5.5 DRS New Features, Future Directions

This is my first technical session of VMworld 2013, VSVC5280. It covered new vSphere 5.5 DRS features and explained why DRS may not always “perfectly” balance your cluster. vSphere 5.5 has a lot of new storage features, and DRS has been enhanced to be aware of VSAN and vFlash technologies. As always during conferences I’m real-time blogging, so I may not have caught all of the details and don’t have time to expound on the new features. Stay tuned for future blog posts on vSphere 5.5 goodness.

Session Outline:

  • Advanced resource management concepts
  • New features in vSphere 5.5
  • Integrate with new storage and networking features
  • Future directions

Advanced Resource Management Concepts

  • DRS treats the cluster as one giant host
  • Capacity of this “host” = capacity of the cluster
  • Main issue: Fragmentation of resources across hosts
  • Primary goal: Keep VMs happy by meeting their resource demands
  • Why meet VM demand as the primary goal? Satisfying VM demand keeps the VM and its applications happy
  • Why might demand not be met by default? Host overload
  • Three ways to find more cluster capacity: Reclaim resources, migrate VMs, Power on a host (if DPM enabled)
  • Demand does NOT equal current utilization
  • Why not load balance the DRS cluster as the primary goal? Load balancing is NOT free. Movement has a cost.
  • Load balancing is a mechanism used to meet VM demands. If VM resources are being met, don’t move the VM.

DRS Load-Balancing: The balls and the Bins Problem

  • Problem: Assign n balls to m bins
  • Key challenges: Dynamic numbers and sizes of bins/balls
  • Constraints on co-location, placement, and others
  • VM resource entitlements are the “balls” and host resources are the “bins”
  • Dynamic load, dynamic capacity (a toy greedy-balancing sketch follows this list)
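
A toy greedy heuristic for the balls-and-bins framing (nothing like the real DRS algorithm, which also weighs constraints, migration cost, and long-term benefit): place each VM on the currently least-loaded host.

```python
# Toy greedy placement for the balls-and-bins framing: put each VM (ball) on the
# currently least-loaded host (bin). Purely illustrative.
import heapq

def place(vm_demands, hosts):
    """Return a {host: [vm, ...]} assignment using a least-loaded-first heuristic."""
    heap = [(0.0, h) for h in hosts]          # (current load, host name)
    heapq.heapify(heap)
    assignment = {h: [] for h in hosts}
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        load, host = heapq.heappop(heap)      # least-loaded host so far
        assignment[host].append(vm)
        heapq.heappush(heap, (load + demand, host))
    return assignment

vms = {"db01": 8.0, "web01": 2.0, "web02": 2.0, "app01": 4.0, "app02": 3.0}
print(place(vms, ["esx1", "esx2", "esx3"]))
```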

Goals of DRS Load-Balancing

  • Fairly distribute VM demand
  • Enforce constraints
  • Recommend moves that improve balance
  • Recommend moves with long term benefits

The Myth of Target Balance

  • UI slider tells us which star threshold is acceptable
  • The slider implies a target value for the cluster imbalance metric
  • Constraints can make it hard to meet target balance
  • If all your VMs are happy, a little imbalance is FINE

vSphere 5.5 New Features

  • In vSphere 5.1 there is a new option: LimitVMsPerESXHost – DRS will not admit or migrate more than x VMs to any host
  • vSphere 5.5 adds LimitVMsPerESXHostPercent
  • Limit VMs per host example: mean = 8 VMs per host, buffer = 50%, so the new limit is 12 (8 + 50% × 8); see the sketch after this list
  • New: Latency-sensitive VMs and DRS. New GUI pulldown option
  • Magical “soft affinity” rule to the current host, for workloads that may be sensitive to vMotions
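
The buffer arithmetic from the example above, spelled out (the VM and host counts here are made-up values that yield the session’s mean of 8):

```python
# LimitVMsPerESXHostPercent example from the session: with a mean of 8 VMs per
# host and a 50% buffer, DRS caps any single host at 12 VMs.
def vms_per_host_limit(total_vms: int, num_hosts: int, buffer_percent: float) -> int:
    mean = total_vms / num_hosts
    return int(mean * (1 + buffer_percent / 100.0))

print(vms_per_host_limit(total_vms=64, num_hosts=8, buffer_percent=50))  # -> 12
```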

CPU Ready Time Overview

  • Amount of time a vCPU is ready to run but waiting to be scheduled on a pCPU. Measures CPU contention.
  • Many causes for high ready time, many fixes, many red-herrings
  • “%RDY is 80, that can NOT be good”: %RDY is a cumulative number across all of the VM’s vCPUs, so divide by the vCPU count (see the sketch after this list)
  • Rule of thumb: up to 5% per vCPU is usually alright
  • “Host utilization is very low, %RDY is very high”: Host power management reduces pCPU capacity
  • Set BIOS option to “OS control” and let ESX decide
  • Low VM or RP CPU limit values restrict cycles delivered to VMs. “Set your VMs free” by not configuring MHz limits
  • NUMA Scheduling effects: NUMA scheduler can increase %RDY time
  • Application performance is often better because NUMA scheduler optimizes memory performance
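
A quick check of the cumulative %RDY point (the 5% per vCPU rule of thumb comes from the session; the VM size here is an assumption):

```python
# esxtop shows %RDY summed across a VM's vCPUs, so normalize before judging it.
def rdy_per_vcpu(rdy_percent_total: float, vcpus: int) -> float:
    return rdy_percent_total / vcpus

total_rdy, vcpus = 80.0, 16          # the "80% ready" VM from the session, assumed to have 16 vCPUs
per_vcpu = rdy_per_vcpu(total_rdy, vcpus)
print(f"{per_vcpu:.1f}% ready per vCPU -> {'within rule of thumb' if per_vcpu <= 5 else 'investigate'}")
```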

Better Handling in DRS

  • vSphere 5.5 new feature: AggressiveCPUActive=1
  • Only use for very spiky workloads that the 5 minute average may not catch
  • vSphere 5.5: PercentIdleMBInMemDemand – Handles memory bursting protection

New Storage and Networking Features

  • vFlash: Initial DRS placement just works
  • DRS load balancing treats vFlash VMs as if they have soft affinity to their current host
  • VMs will not migrate unless absolutely necessary
  • Host maintenance mode may take longer
  • vFlash space can get fragmented across the cluster
  • vMotions may take longer

VSAN interop with DRS

  • DRS is completely compatible with VSAN

Autoscaling Proxy Switch Ports and DRS

  • DRS admission control – proxy switch port test
  • Makes sure the host has enough ports on the proxy switch: vNIC ports, uplink ports, vmkernel ports
  • In vSphere 5.1, ports per host = 4096
  • Host will power on no more than 400 VMs
  • New to vSphere 5.5: Autoscaling switch ports

Future Directions

  • Per-vNIC bandwidth reservations
  • pNIC capacity at host will be pooled together
  • Static overhead memory – influenced by VM Config parameters, VMware features, ESXi build number
  • Overhead memory: This value is used in admission control and DRS load balancing
  • Better estimation of these numbers leads to higher consolidation during power-on

Proactive DRS – Possible future features

  • Lifecycle: Monitor, predict, remediate, compute, evaluate
  • Imagine a vMotion happening before a workload spike
  • Predict and avoid spikes
  • Make remediation cheaper
  • Proactive load-balancing
  • Proactive DPM – Power on hosts before capacity is needed
  • Use predicted data for placement and evacuation
  • vCOPS integration to perform analytics and capacity planning