VVol Technical Overview

Another great session at PEX 2015 by Rawlinson Rivera.

Traditional storage architectures can’t keep up:

  • Specialized and costly HW – Not commodity, low utilization, overprovisioning
  • Device-centric silos – Static classes of service, rigid provisioning, lack of granular control
  • Complex processes – Lack of automation, time-consuming processes, slow reaction to requests

Hypervisor Enables App-Centric Storage Automation

  • Knows the needs of all apps in real time
  • Sits directly in the I/O path
  • Has global view of all underlying storage systems
  • Can configure storage dynamically
  • Hardware agnostic

Traditional Model – long provisioning cycles, LUN management, complexity, frequent data migrations

App-Centric Automation – Dynamic delivery of storage services when needed. Fine control of data services at the VM level. Common management across heterogeneous devices.

vSphere Virtual Volumes

  • Virtualizes SAN and NAS devices
  • Virtual disks are natively represented on arrays
  • Enables VM granular storage operations using array-based data services
  • Storage policy-based management enables automated consumption at scale
  • Industry-wide initiative supported by major storage vendors
  • Included with vSphere

What is a VVol?

  • Five types of VVols (objects): Config, Data, MEM, SWAP, Other
  • NO filesystem needed (VMFS is history)
  • Virtual machine objects are stored natively on the array
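To make the object model concrete, here is a minimal Python sketch (illustrative only; the class and field names are hypothetical, not a VMware API) of how a single VM decomposes into VVol objects stored natively on the array:

```python
from dataclasses import dataclass, field
from enum import Enum


class VVolType(Enum):
    CONFIG = "Config"  # VM home: .vmx, logs, descriptors
    DATA = "Data"      # one per virtual disk
    MEM = "Mem"        # memory snapshot state
    SWAP = "Swap"      # created when the VM powers on
    OTHER = "Other"    # vendor-specific objects


@dataclass
class VVol:
    vvol_type: VVolType
    size_gb: float


@dataclass
class VirtualMachine:
    name: str
    vvols: list[VVol] = field(default_factory=list)


# A powered-on VM with two virtual disks maps to at least four array objects:
vm = VirtualMachine("db01", [
    VVol(VVolType.CONFIG, 0.5),
    VVol(VVolType.DATA, 100.0),
    VVol(VVolType.DATA, 250.0),
    VVol(VVolType.SWAP, 16.0),
])
print(f"{vm.name}: {len(vm.vvols)} VVols, no VMFS in between")
```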

Storage Container

  • Logical storage constructs for grouping of virtual volumes
  • Typically set up by storage administrators on the array to define storage capacity allocations and restrictions
  • Capacity is based on physical storage capacity

Differences between Storage Container and LUNs

  • SC size is based on array capacity
  • Max number of SCs depends on the array
  • The size of an SC can be extended
  • LUNs need a filesystem, and their fixed size mandates a larger number of LUNs
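A short sketch of the difference in provisioning models, using hypothetical classes that are not part of any vSphere or array API: a LUN's size is fixed at creation, while a storage container is a logical quota that can simply be raised, up to the array's physical capacity:

```python
class LUN:
    """Fixed-size block device; needs a filesystem (e.g. VMFS) on top."""

    def __init__(self, size_gb: int):
        self.size_gb = size_gb  # fixed at creation; more capacity = more LUNs


class StorageContainer:
    """Logical grouping of VVols; bounded only by the array's capacity."""

    def __init__(self, array_capacity_gb: int, quota_gb: int):
        self.array_capacity_gb = array_capacity_gb
        self.quota_gb = quota_gb  # allocation set by the storage admin

    def extend(self, extra_gb: int) -> None:
        # The quota can simply be raised, up to the physical capacity.
        self.quota_gb = min(self.quota_gb + extra_gb, self.array_capacity_gb)


sc = StorageContainer(array_capacity_gb=100_000, quota_gb=10_000)
sc.extend(5_000)
print(sc.quota_gb)  # 15000 -- no new LUN, no new filesystem
```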

Protocol End Points

  • Access points that enable communication between ESXi hosts and storage array systems
  • Scope of Protocol End Points: iSCSI, NFS v3, FC, FCoE
  • A protocol endpoint supports only one of these protocols at a given time
  • Endpoints receive the SCSI or NFS read and write commands
  • Storage container – holds the metadata and data files for a large number of VMs
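The following Python sketch (hypothetical classes, not a real API) illustrates the data-path split this implies: the host addresses a protocol endpoint, which demultiplexes I/O to the VVols bound to it, while the storage container merely holds the objects:

```python
class ProtocolEndpoint:
    """One PE speaks a single protocol (iSCSI, NFS v3, FC, or FCoE)."""

    def __init__(self, protocol: str):
        self.protocol = protocol
        self.bindings: dict[str, str] = {}  # vvol_id -> array object id

    def bind(self, vvol_id: str, array_object_id: str) -> None:
        # Binding happens at VM power-on / open, brokered by the VASA provider.
        self.bindings[vvol_id] = array_object_id

    def io(self, vvol_id: str, op: str) -> str:
        # SCSI/NFS reads and writes land here rather than on a per-VM LUN.
        target = self.bindings[vvol_id]
        return f"{op} via {self.protocol} PE -> {target}"


pe = ProtocolEndpoint("iSCSI")
pe.bind("vm01-data-1", "array-object-42")
print(pe.io("vm01-data-1", "WRITE"))
```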

Management Plane

  • VASA Provider (VP) – Developed by storage array vendors
  • Single VASA provider can manage multiple arrays
  • VASA provider can be implemented within the array or as a virtual appliance
  • Out-of-band communication
  • ESX and vCenter Server connect to the VASA provider
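A conceptual sketch of that out-of-band control path, with hypothetical names: vCenter/ESX talk to the VASA provider for management operations only, and no VM I/O flows through it:

```python
class VasaProvider:
    """Vendor-supplied management endpoint; one VP may front several arrays."""

    def __init__(self, arrays: list[str]):
        self.arrays = arrays

    def create_vvol(self, array: str, vvol_type: str, size_gb: int) -> str:
        # Only metadata crosses this out-of-band path; the array does the work.
        assert array in self.arrays, "VP only manages its registered arrays"
        return f"{array}:{vvol_type}:{size_gb}GB"


vp = VasaProvider(arrays=["array-a", "array-b"])
print(vp.create_vvol("array-a", "Data", 100))  # control plane only; I/O uses a PE
```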

Storage Capabilities and VM Storage Policies

  • Are array-based features and data service specifications that capture storage requirements, which can be satisfied by a storage array’s advertised capabilities
  • Storage array capabilities define what the array can offer to storage containers as opposed to what the VM requires
  • VM storage policies are a component of the Storage Policy-Based Management (SPBM) framework
  • Published capability – Array based features and data services. Advertised to ESX through VASA APIs
  • Managed on a per vDisk basis
  • Has a concept of compliance to ensure service level is being met
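Here is a hedged sketch of the SPBM matching idea (not the real SPBM API): the array publishes capabilities through VASA, a per-vDisk policy states requirements, and compliance means every requirement is still satisfied:

```python
# Capabilities the array advertises to ESX via VASA (illustrative values):
published = {"dedupe": True, "replication": True, "max_latency_ms": 5}


def compliant(policy: dict, capabilities: dict) -> bool:
    """A vDisk is compliant if the array meets every policy requirement."""
    for key, required in policy.items():
        offered = capabilities.get(key)
        if isinstance(required, bool):
            if required and not offered:
                return False
        elif offered is None or offered > required:  # e.g. a latency ceiling
            return False
    return True


gold = {"replication": True, "max_latency_ms": 10}
print(compliant(gold, published))  # True: every requirement is satisfied
```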

Operations Scenarios

  • Can offload: VM provisioning, machine deletes, full clones, linked clones, snapshots
  • Snapshots: Managed by ESX OR managed by the array
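A small sketch of the offload decision for snapshots, using a hypothetical helper: if the array advertises a native snapshot capability, ESX can hand the operation off instead of using its own redo-log snapshots:

```python
def take_snapshot(vm: str, array_capabilities: set[str]) -> str:
    # Hypothetical decision helper, not a vSphere API.
    if "snapshot" in array_capabilities:
        return f"array-managed snapshot of {vm} (operation offloaded to the array)"
    return f"ESX-managed snapshot of {vm} (host-side redo logs)"


print(take_snapshot("db01", {"snapshot", "clone"}))  # offloaded
print(take_snapshot("web01", set()))                 # falls back to ESX
```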

Note: SRM will NOT support vVols in this release. You will need to wait for the next release for this support.

Q&A: Future VVol releases will allow a storage pool to span physical arrays.

VMUnderground Opening Acts: Storage Panel

This is not an official VMware session, but one put together by vBrownBag and VMUnderground. This is a panel on storage.

Panel: Wade Holmes (VMware), Michael Webster (Nutanix), Matt Vogt (Simplivity), Matt Cowger (EMC), Keith Norbie, J Metz

1. What are vVols? They deliver the storage policy management vision: managing storage policies (replication, dedupe, etc.) at the VM level.

2. Panel agrees that LUNs suck. Wade says that with LUNs the VM admin is normally disconnected from the storage admin; with vVols the VMware admin can define policies and have complete visibility into the storage layer. Matt says VMware’s implementation of LUNs “sucks”, and the 256-device limit is very limiting and hard to manage. Wade says vVols is the way to eliminate the LUN limitations.

3. Audience asks if the array needs to support vVols and who supports it. NetApp, HP, Dell and EMC have all said they will support it, though maybe not across all arrays from these vendors. Ask your vendor questions like how many vVol objects it will support, or which features (inline dedupe, replication, etc.) it will support.

4. Audience asks what happens in the array with a vVol. The array vendor can implement vVols any way they want, like creating LUNs on the back end or doing some file system magic. Even within a company, EMC said their implementations differ across products: the VNX implementation is very different from those in XtremIO and VMAX. Some vendors will have vVol demos in their VMware booths.

5. It is not known how vVols will be packaged (vSphere add-on, specific licensing levels, part of VSAN, etc.). Stay tuned for official announcements.

6. With vVols the storage admin will still need to instantiate a container for the VM, but then the VM admin will provision VMDKs on the new datastore.

7. There’s a hands-on-lab at VMworld for vVols if you are attending this week.

8. Audience stated they hate VMware snapshots with VADP. Wade states the snapshot integration is vendor-dependent, so the answer about integration is ‘it depends’.

9. Next topic up is hyperconverged storage. The number of workloads that can run on hyperconverged storage is rapidly expanding. Matt states hyperconverged will not reach high-end workloads like databases on Superdomes. Michael and Matt agree and hug on stage about mainframe and Superdome workloads not going the hyperconverged route.

10. Audience states that old apps will likely stay on old platforms, and Fibre Channel will remain in the datacenter. New web-scale apps will take more advantage of these hyperconverged platforms like VSAN and Nutanix.

11. Michael states that flash should be in every system. Flash will enable storage systems to keep up with Moore’s law, while spinning disk cannot.

12. Flash doesn’t eliminate bottlenecks in the system; it just shifts where the bottleneck sits.

13. Matt states that the performance of flash has not changed dramatically since it was introduced. New technologies like phase-change memory are needed to increase flash speeds.

14. Watch out for NVMe coming next year, with massive performance increases for storage.
