VMworld: Architecting Storage DRS Clusters, INF-STO1545

Speakers: Frank Denneman, Valentin Hamburger

This was a great session covering the ins and outs of Storage DRS. SDRS is a great feature of vSphere 5.0, with a couple of minor tweaks in vSphere 5.1. But understanding how it works, when to use it, and when not to use it is very important.

  • Why storage DRS? Resource aggregation, initial placement, datastore maintenance mode, load balancing, affinity rules.
  • Resource Aggregation
    • Simplifies storage management
    • Single I/O and capacity pool for initial placement
    • Datastores added or removed transparently
    • Storage DRS settings are configured at datastore cluster level
  • Initial Placement
    • Selects a datastore based on space utilization and I/O load
  • Maintenance Mode
    • Evacuates all VMs and VMDKs
    • Compared to host maintenance mode
  • Load Balancing – Most popular feature
    • Triggers on space usage and latency threshold
    • Default thresholds: 80% space utilization and 15 ms I/O latency
    • Space balancing is always on
    • I/O load balancing can be disabled
    • Manual or fully automated mode
    • Triggered every 8 hours – Uses 16 hours of performance data.
    • VM migrations can happen once a day
    • SDRS performs a cost/benefit analysis before recommending a move (see the first sketch after these notes)
  • Affinity Rules (see the second sketch after these notes)
    • Intra-VM VMDK affinity – Keep VM’s VMDKs on same datastore
    • VMDK anti-affinity – Keep VM’s VMDKs on different datastores. Can be used for separating log and data disks of a VM.
    • VM anti-affinity – Keep VMs on different datastores. Maximize availability of a set of redundant VMs.
  • vCloud Director 5.1 is compatible with SDRS
  • DRS cluster can connect to multiple datastore clusters
  • Datastore cluster can connect to multiple DRS clusters
  • SDRS does NOT leverage Unified vMotion (i.e., vMotion without shared storage)
  • A datastore cluster can contain datastores from different arrays
    • But cannot leverage VAAI, so storage vMotion will take longer
    • Used mostly for storage array migrations
  • Can’t mix NFS and VMFS datastores in a datastore cluster
  • Strongly recommend using VMFS-5 (unified block size)
  • Don’t upgrade VMFS datastores from 3.x to 5.x. Format the LUN from scratch for a consistent block size.
  • Recommend same-sized datastores within a datastore cluster. Multiple sizes can work, but it’s not a good idea.
  • Big datastores are a large failure domain
  • SIOC is not supported on datastores with extents, so SDRS cannot I/O load balance them.
  • SDRS is aware of thin-provisioned VMs, and its cost calculations incorporate actual space used
  • SDRS looks at the growth rate of thin-provisioned disks and adds that to the calculation
  • Datastore defragmentation – A prerequisite move can take place to optimize VM placement
  • vSphere 5.1 – Advanced option to keep VMDKs together
  • Don’t mix datastores with different storage capabilities (SSD, FC, SATA). It’s not prohibited, but don’t do it.
  • Use storage profiles to identify performance/SLA/location
  • What about array based tiering?
    • The SIOC injector reads random blocks and may not get accurate information
    • Device modeling can be thrown off completely by array-based tiering
    • Set datastore clusters to manual I/O load balancing, or disable it entirely
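
A minimal Python sketch of the load-balancing trigger described above. The 80% space and 15 ms latency thresholds, the 16-hour observation window, and the cost/benefit idea come from the session; the data structures, names, and scoring logic are purely illustrative and are not the VMware API.

```python
# Illustrative model only -- not the VMware API. The thresholds and the
# cost/benefit idea come from the session; the checks themselves are made up.

from dataclasses import dataclass

SPACE_THRESHOLD = 0.80        # 80% space utilization (default trigger)
LATENCY_THRESHOLD_MS = 15.0   # 15 ms I/O latency (default trigger)

@dataclass
class Datastore:
    name: str
    capacity_gb: float
    used_gb: float
    avg_latency_ms: float     # averaged over the 16-hour observation window

    @property
    def utilization(self) -> float:
        return self.used_gb / self.capacity_gb

def needs_rebalance(ds: Datastore) -> bool:
    """True if the datastore violates either the space or the latency threshold."""
    return ds.utilization > SPACE_THRESHOLD or ds.avg_latency_ms > LATENCY_THRESHOLD_MS

def worth_moving(vmdk_gb: float, source: Datastore, dest: Datastore) -> bool:
    """Toy cost/benefit check: the move must relieve the source's space pressure
    without pushing the destination over either threshold."""
    source_util_after = (source.used_gb - vmdk_gb) / source.capacity_gb
    dest_util_after = (dest.used_gb + vmdk_gb) / dest.capacity_gb
    return (source_util_after < source.utilization
            and dest_util_after < SPACE_THRESHOLD
            and dest.avg_latency_ms < LATENCY_THRESHOLD_MS)

if __name__ == "__main__":
    hot  = Datastore("ds-01", capacity_gb=2048, used_gb=1750, avg_latency_ms=22.0)
    cool = Datastore("ds-02", capacity_gb=2048, used_gb=800,  avg_latency_ms=4.0)
    if needs_rebalance(hot) and worth_moving(120, hot, cool):
        print("Recommend Storage vMotion of a 120 GB VMDK from ds-01 to ds-02")
```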
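
A second sketch showing how the three affinity rule types could be checked against a proposed placement. The rule semantics mirror the notes above; the placement map, disk names, and helper functions are hypothetical.

```python
# Illustrative model only -- the rule semantics mirror the session notes;
# the placement map, names, and helper functions are hypothetical.

from typing import Dict, List

# Proposed placement: VMDK -> datastore
placement: Dict[str, str] = {
    "sql01-os.vmdk":   "ds-01",
    "sql01-data.vmdk": "ds-01",
    "sql01-log.vmdk":  "ds-02",
    "web01.vmdk":      "ds-03",
    "web02.vmdk":      "ds-04",
}

def intra_vm_affinity_ok(vmdks: List[str]) -> bool:
    """Intra-VM VMDK affinity: all of a VM's disks land on the same datastore."""
    return len({placement[d] for d in vmdks}) == 1

def vmdk_anti_affinity_ok(vmdks: List[str]) -> bool:
    """VMDK anti-affinity: the listed disks (e.g. data and log) land on different datastores."""
    return len({placement[d] for d in vmdks}) == len(vmdks)

def vm_anti_affinity_ok(vm_disks: List[List[str]]) -> bool:
    """VM anti-affinity: redundant VMs must not share any datastore."""
    used = [{placement[d] for d in disks} for disks in vm_disks]
    return all(a.isdisjoint(b) for i, a in enumerate(used) for b in used[i + 1:])

print(intra_vm_affinity_ok(["sql01-os.vmdk", "sql01-data.vmdk"]))    # True
print(vmdk_anti_affinity_ok(["sql01-data.vmdk", "sql01-log.vmdk"]))  # True
print(vm_anti_affinity_ok([["web01.vmdk"], ["web02.vmdk"]]))         # True
```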