WSV430: Cluster Shared Volumes Reborn in Windows Server 2012

Amitabh Tamhane, Program Manager Microsoft
Vineeth Karinta, NetApp

This was a great session, jam-packed with technical details; I had a hard time keeping up with the fire hose of information. The previous version of CSV in Windows Server 2008 R2 had a lot of limitations, and that is one big reason people probably stayed away from Hyper-V. CSV was certainly no VMFS, by a long shot; it really wasn't even in the same universe. Microsoft realized CSV "1.0" had a lot of shortcomings, to put it mildly, and has made huge improvements to turn it into a real clustered file system. I was quite surprised by the overhaul it has gotten, which will make Hyper-V much more usable than previous versions. The session ended 20 minutes early, so they could have gone a bit slower, and it also would have been great to see real-world performance data showing how it stacks up against VMFS and Windows Server 2008 R2.

For those of you concerned about security, a CSV can now be fully encrypted via BitLocker, and all nodes can access the encrypted volumes. If you use modern Intel chips with the AES-NI instruction set, there's much less encryption/decryption overhead. Couple this with the TPM chips you can now buy in most servers, such as the Cisco B200 M3 and practically all HP servers, and you have an easy-to-deploy disk encryption solution for your virtual environment.
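As a rough sketch of what enabling BitLocker on a CSV looks like in PowerShell (the disk resource name, volume path, and cluster account are placeholders; the general flow is to put the volume into maintenance mode, encrypt, grant the cluster name object access, then resume):

```shell
# Put the CSV into maintenance mode so BitLocker can work on it
Get-ClusterSharedVolume "Cluster Disk 1" | Suspend-ClusterResource

# Enable BitLocker on the volume with a recovery password protector
Enable-BitLocker C:\ClusterStorage\Volume1 -RecoveryPasswordProtector

# Let the cluster name object (CNO) unlock the volume so every node can mount it
Add-BitLockerKeyProtector C:\ClusterStorage\Volume1 `
    -ADAccountOrGroupProtector -ADAccountOrGroup "MYCLUSTER$"

# Take the CSV out of maintenance mode
Get-ClusterSharedVolume "Cluster Disk 1" | Resume-ClusterResource
```

These commands require a live failover cluster with the BitLocker feature installed on every node, so treat this as an outline of the procedure rather than a copy-paste script.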

An interesting feature of CSV, which is not present in VMFS, is the ability to redirect I/O across the network if you have a complete network storage failure, such as both HBAs dying or someone pulling the wrong cables (which should be redundant anyway). Of course you take some performance hit, but the VMs on the server won’t crash and you can then Live Migrate them to another node to restore full performance while you fix the storage connectivity issue on the degraded node.

  • What is a Cluster Shared Volume? A layer of abstraction above NTFS. All cluster nodes can read/write to the CSV volume. LUN ownership by a node is abstracted away from the application, so applications fail over without drive ownership changes.
  • CSV has its own namespace, so no drive letter limitations
  • CSV was first introduced in Windows Server 2008 R2, and only for Hyper-V workloads
  • Server 2012 expands CSV to support more workloads, such as file server workload
  • Block level I/O redirection, direct I/O for more scenarios to greatly increase performance
  • CSV on top of storage spaces is fully supported
  • Leverages SMB 3.0 and new filesystem features
  • What’s new?
    • Improved interop with anti-virus and backup software
    • Infrastructure for application consistent distributed backups
    • Integrated with new file system features (ODX, spot-fixing for online correction)
    • Significant performance improvements
    • Supports BitLocker-encrypted volumes
    • Memory mapped files now supported
    • Removed the dependency on Active Directory, for improved performance and resiliency
  • CSV
    • Simultaneous read/write access to a shared SAN LUN
    • Server side metadata sync – Avoids I/O interruptions
    • When do metadata updates occur: VM creation/deletion, VM power on/off, VM mobility, snapshot creation, extending a dynamic VHD, renaming a VHD
  • CSVFS Architecture
    • If the I/O is not metadata I/O, you get direct I/O performance to the LUN
    • I/O is only redirected to the coordinator node via SMB when metadata updates are needed or when the node loses its storage connection to the LUN
  • Single Namespace
    • Consistent view across the cluster
    • Volumes exposed under “ClusterStorage” root directory
    • VolumeX names can be renamed
  • Windows Server 2012 uses standard mount points vs. custom reparse points in Server 2008 R2
    • Delivers better interop with performance counters, SCOM, monitoring free space, etc.
  • CSV proxy file system
    • Filesystem is now shown as CSVFS in storage manager
    • Enables applications to be CSV aware
    • Still NTFS under the covers
  • Simplified CSV Setup
    • Integrated into failover cluster manager storage view
    • CSV integrated into failover cluster core feature
    • 64-node support, up to 4000 VMs
    • Simply right click to add CSV
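The same setup can be done from PowerShell with the failover clustering cmdlets; a minimal sketch (the disk resource name "Cluster Disk 2" is a placeholder for whatever available disk your cluster shows):

```shell
# List the physical disk resources available to the cluster
Get-ClusterResource | Where-Object ResourceType -eq "Physical Disk"

# Promote one of them to a Cluster Shared Volume
Add-ClusterSharedVolume -Name "Cluster Disk 2"

# The volume now appears under C:\ClusterStorage\VolumeX on every node
Get-ClusterSharedVolume
```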
  • Resiliency
    • CSV provides I/O fault tolerance (transparently handles node, network and HBA failures)
    • CSV virtualizes file handles to applications and failover is transparent to application
    • If connectivity to SAN fails, I/O is redirected over the network to the coordinator node so access can continue
    • Node fault tolerance – New coordinator node is chosen and I/O is not interrupted
    • Metadata updates rerouted to public network if private network connectivity fails
  • Continuously available scale out file server
    • Zero client downtime failover – both planned and unplanned downtime
  • Maximized File System Availability
    • chkdsk/spotfix integration
    • Online volume scanning
    • Zero offline time with CSV
    • Volume offline only to repair, but with virtualized file handles the application does not experience downtime
    • Less than 3 seconds of downtime to repair a volume with 300 million files, down from minutes or hours in Server 2008 R2
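The chkdsk/spot-fix split described above is exposed in Server 2012 through the Repair-Volume cmdlet; a sketch, with the drive letter as a placeholder:

```shell
# Online scan: finds and logs corruptions while the volume stays fully online
Repair-Volume -DriveLetter D -Scan

# Spot-fix: takes the volume offline only briefly to repair the logged corruptions
Repair-Volume -DriveLetter D -SpotFix
```

On a CSV, that brief offline window is what the virtualized file handles hide from the application, which is how the near-zero downtime numbers above are achieved.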
  • CSV block cache
    • Distributed write-through cache
      • Unbuffered I/O (excluded from Windows cache manager)
      • Consistent across cluster
    • Windows Cache integration
      • Buffered I/O
      • Same as traditional NTFS
    • Huge value for Pooled VDI VM scenario
      • Read-only parent VHD gets cached to reduce storage array I/Os
      • Read-write differencing VHDs
    • 512MB recommended value
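Configuring the block cache is a two-step PowerShell change in the 2012 release (property names here are as shipped in Windows Server 2012; they were renamed in later releases, and the disk resource name is a placeholder):

```shell
# Set the cluster-wide CSV block cache size to the recommended 512 MB
(Get-Cluster).SharedVolumeBlockCacheSizeInMB = 512

# In Server 2012 the cache must also be enabled per CSV disk resource
Get-ClusterSharedVolume "Cluster Disk 1" |
    Set-ClusterParameter CsvEnableBlockCache 1
```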
  • Redirected I/O less often
    • Direct I/O for more scenarios
    • High performance block level I/O redirection
      • Storage connection broken or not present: avoids traversing the file system stack twice and goes directly to the storage system (2x performance increase)
  • SMB 3.0 integration supports multi-channel to improve I/O performance
  • SMB client delivers 98% performance of direct attached storage when using multi-channel and RDMA
  • Deployment considerations
    • How many VMs per CSV volume? There is no limit imposed by CSV.
    • VMFS limitations do not apply to CSV (yes MS stated this in their slide)
    • Only limited by the IOPS your storage array can provide
  • Network planning
    • The network path is used for file metadata updates and when a node has no storage connectivity
  • CSV Backup Key Wins
    • Distributed application consistent snapshots
    • Parallel backups on same or different CSV
    • Non-disruptive backups (no I/O redirection). CSV ownership does not change during backup
    • Improved I/O performance – Direct I/O mode for software snapshots
    • Improved interop with backup apps; requesters no longer need to be CSV-aware
    • All VMs running on a CSV can be snapshotted at once, application consistent
    • VSS provider is only called on ‘backup node’
  • NetApp SnapManager CSV demo
    • SnapManager for Hyper-V (SMHV)
      • Policy-based backup and restore for Hyper-V VMs
      • VMs grouped into datasets for ease of backup administration