VMworld 2012: vSphere 5 Storage Best Practices INF-STO2980

Speakers: Chad Sakac, Vaughn Stewart

This was a really great session! It was one of the two best sessions I attended on the last day of VMworld.

  • Each protocol has different configuration considerations
  • Majority of customers use block protocols (iSCSI, FC, FCoE)
  • NetApp: NFS accounts for 50% of usage and block protocols for the other 50%, per AutoSupport data
  • Best flexibility comes from a combination of VMFS and NFS
  • Key Point 1: Leverage Key documents
    • VMware technical Resource Center
    • FC SAN Config Guide
    • iSCSI SAN Config Guide
    • Best practices for NFS storage
    • Key partner documents – Best practices
  • Key Point 2: Setup Multipathing Right
    • The vSphere Pluggable Storage Architecture (PSA) has several components (SATP, PSP, MPP)
    • Don’t be inclined to make changes to the defaults; it makes the design more complicated and adds risk. Don’t change the claim rules or I/O defaults. (A PowerCLI sketch for checking and setting the PSP follows this list.)
    • PSP – Path Selection Policy
      • Fixed – Used commonly on active-active arrays
      • MRU – Default for many active-passive arrays
      • Round Robin – Default in vSphere 5.1 for EMC VNX/VMAX.
    • MPP – Multipathing Plug-in (third-party, e.g. EMC PowerPath/VE)
  • ALUA – Asymmetric Logical Unit Access. Common on mid-range arrays like NetApp and EMC VNX, and many other brands. Not true active/active for all paths and all LUNs.
    • Active – Optimized
    • Active – Non-optimized
    • Standby
    • Dead – APD – Target/array totally dead
    • “Gone away” – PDL – Can reach the array, but device such as LUN went away
  • Multi-Pathing with NFS
    • Significantly different multi-pathing architecture than block protocols
    • NFSv3 is very basic in terms of understanding of multi-pathing
    • Must rely on switching technology for link aggregation
    • Single TCP connection from the ESXi for data and control information
    • Active/Passive path today until a future release of vSphere with NFS4
    • Use vendor specific vCenter plug-ins to enhance NFS support/configuration
  • Microsoft Cluster Service
    • Unsupported Storage Configuration – FCoE, iSCSI, NFS, Round Robin PSP, NPIV
    • Vendor support: 3rd party MPPs or Guest connected storage
    • Use iSCSI in guest – works very, very well (storage partners support this)
    • vSphere 5.1 has expanded support – up to 5 node cluster support
  • NFS Best practices
    • Use vCenter plug-ins, always! Automates configuration and tweaks
    • You can use FQDNs now and it will work
    • NetApp Cluster-Mode requires one IP per datastore
  • Jumbo Frames?
    • Recommendation is to NOT use jumbo frames. Adds more complexity, and the performance increase is very marginal.
    • Stick with standard frame sizes (1500 MTU).
  • Optimize I/O
    • Misalignment of filesystems results in additional work on the storage controller to satisfy I/O requests
    • Affects VMFS and NFS datastores
    • Align the guest partitions! Automated in fresh installs of Windows Server 2008/Win7 and later
    • Linux will likely not align partitions. Must manually align partitions. (See the alignment-check sketch after this list.)
    • EMC UberAlign – Free tool
    • NetApp – Data Ontap 8.1.1 – VSC plug-in Migrate and Optimize
  • Leverage Plug-Ins (VAAI and VASA)
    • 5.1 changes: vCenter Client plug-ins, NFS assists, block assists. See Chad’s Blog post
  • Keep It Simple
    • Use large capacity datastores
    • Avoid extents
    • Avoid RDMs
    • Array end – Use thin volumes and LUNs
  • Storage DRS
    • Use it! Even if in manual mode, it will make recommendations
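
As mentioned in the multipathing notes above, here is a minimal PowerCLI sketch for auditing and setting the path selection policy. The cluster name is a placeholder, and in keeping with the session’s “don’t touch the defaults” advice, only change the PSP where your array vendor explicitly documents it.

    # Placeholder cluster name; adjust to your environment.
    $vmhosts = Get-Cluster -Name 'Prod-Cluster' | Get-VMHost

    # Report the current path selection policy for every disk LUN.
    $vmhosts | Get-ScsiLun -LunType disk |
        Select-Object VMHost, CanonicalName, MultipathPolicy

    # Only if your array vendor documents Round Robin as the preferred PSP:
    $vmhosts | Get-ScsiLun -LunType disk |
        Where-Object { $_.MultipathPolicy -ne 'RoundRobin' } |
        Set-ScsiLun -MultipathPolicy RoundRobin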
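
And a quick in-guest check of the alignment point: plain Windows PowerShell, run inside the guest. Legacy installs typically show a 32,256-byte starting offset, which is misaligned; anything divisible by 4KB is fine from the array’s point of view.

    # StartingOffset is in bytes; legacy installs often start at 32256
    # (63 x 512-byte sectors), which is misaligned.
    Get-WmiObject -Class Win32_DiskPartition |
        Select-Object Name, Index, StartingOffset,
            @{ Name = 'AlignedTo4KB'; Expression = { ($_.StartingOffset % 4KB) -eq 0 } }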

VMworld 2012: Virtualizing SQL Server 2012 APP-BCA1516

Speakers: Michael Cory, Jeff Szastak

This session was by far the best this week! 160 slides in 60 minutes, packed to the gills with actionable information on properly virtualizing your SQL server. In short, there’s no reason why you can’t virtualize SQL and have it meet business performance requirements. 80% of SQL performance problems are from storage issues. Below is about 10% of the information from the session. For those of you attending VMworld, run, don’t walk to download their whole slidedeck from socialcast.

  • If you can guarantee the resources an application needs and deliver them when it needs them, the application should not care whether it is virtualized or not.
  • Why customers are virtualizing biz critical apps? 60% reduction in CapEx, 30% reduction in OpEx, 80% reduction in energy
  • #1 reason a Windows server crashes is drivers. VMware regression tests all drivers together.
  • Virtualized DB example: 8TB, 8.8 billion rows, 79K IOPS, 40,000 users, 52 million transactions a day. You can virtualize SQL, no question.
  • Virtualizing SQL Server 2012
    • What works in tier-2 does not always work for tier-1 applications
    • The approach outlined today is conservative.
    • Read the VMware documentation and the resources from PASS (Professional Association of SQL Server):
    • virtualization.sqlpass.org
  • PLAN, PLAN!!
    • SLAs, RPOs, RTOs
    • Baseline current workload
    • Estimated growth
    • I/O requirements, Licensing
    • etc..
  • Baseline physical database infrastructure
  • Multiple Tier Approach
    • Basic (low utilization, test/dev) and premium (production, high visibility)
    • Different underlying hardware
    • Different SLAs, RTO, RPOs and HA between tiers
  • Use ESXtop (KB 1006797)
  • #1 reason SQL virtualization fails is storage performance issues. Your database is an extension of your storage.
  • DBAs need to tell vSphere admins: IOPS/throughput, CPU MHz, Memory Total GB, network bandwidth, Features (clustering), anticipated growth rates.
  • Do basic throughput testing of the IO subsystem prior to deploying SQL server. SQLIO/IOmeter are the tools to use.
  • Microsoft fully supports SQL 2005, 2008, 2012 on ESX
  • How do you install SQL on VMware? Use the same configuration guidelines as physical.
  • Characterize your workloads
    • OLTP – online transaction processing
    • Batch/ETL
    • DSS – decision support systems
  • Scale wide for SQL VMs – More VMs, better isolation, better performance, less risk
    • Don’t do multiple instances on a single VM. Use more VMs, not fewer huge VMs.
  • Spindle count and RAID configuration still rule
  • Use more virtual controllers (e.g. up to 4) to allow Windows to queue up more I/O than a physical server with just two HBAs.
    • More I/O inflight to the array with VMware
    • Understand your physical infrastructure
  • VMFS vs. RDM = Performance is the same. Don’t use RDMs just for performance. RDMs needed in special cases like clusters.
  • Use Thick Eager Zeroed disks every time. SUPER IMPORTANT. Database, logs, tempdb. You don’t have to do it for your OS VMDK. (See the provisioning sketch after this list.)
  • Make sure your array supports VAAI and VASA for best performance
  • Grant the “Perform volume maintenance tasks” right to the SQL service account (enables instant file initialization)
  • TempDB one datafile per database
  • Always use the PVSCSI adapter – 12% I/O throughput increase, 30% CPU savings
  • 80% of performance problems are due to storage problems
  • Maintain 1-1 ratio of physical cores to vCPUs to start – increase later
  • Hyperthreading – 20% uplift in CPU power
  • Hardware generation matters – Use latest hardware with virtualization features
  • Memory Settings
    • SQL Max Memory = VMMem – ThreadStack – OS memory – VM overhead
    • E.g. 32GB RAM = 28GB max server memory set in SQL (see the memory sketch after this list)
  • Lock pages in memory – SQL service account needs “Lock pages in memory” rights
  • Size your VM to fit within a NUMA node if possible – Look at total system memory
  • Avoid shares and limits unless you really understand how they work
  • Exceeding host memory can cause swapping and tank I/O
  • Don’t turn off ballooning!
  • ESXtop stats: KAVG (kernel latency) should be at or near zero; DAVG is device/array latency
  • Set appropriate reservations – Use configured memory size for reservation
  • When using reservations – Switch from slot sizes to % for HA configuration
  • Use large pages switch in SQL server – Easier in SQL 2012
  • Set packet size to 8192 in SQL 2012 if you use jumbo frames end to end
  • Use VMXNET3 network driver
  • Use multi-NIC vMotion in vSphere 5.0
  • Look at the SQL 2012 licensing FAQ by Microsoft
  • AlwaysOn Availability Groups – Zero data loss via log shipping and no quorum disks
  • vMotion will be supported for AlwaysOn availability groups
  • Use SQL Server Best Practice analyzer
  • Keep storage latency under 10ms
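
Here is a hedged PowerCLI sketch of the disk guidance above (VM name and capacities are placeholders): eager-zeroed thick VMDKs for data, log, and tempdb, attached to a paravirtual SCSI controller. Adding a new SCSI controller generally requires the VM to be powered off.

    # Placeholder VM name and capacities; adjust to your design.
    $vm = Get-VM -Name 'SQL01'

    # Data disk, eager-zeroed thick, on a new paravirtual SCSI controller.
    $dataDisk = New-HardDisk -VM $vm -CapacityGB 200 -StorageFormat EagerZeroedThick
    New-ScsiController -HardDisk $dataDisk -Type ParaVirtual

    # Log and tempdb disks, also eager-zeroed thick (the OS disk can stay lazy-zeroed).
    New-HardDisk -VM $vm -CapacityGB 50 -StorageFormat EagerZeroedThick
    New-HardDisk -VM $vm -CapacityGB 50 -StorageFormat EagerZeroedThick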
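
And a sketch of the memory guidance. The subtraction amounts below are illustrative assumptions only, chosen to reproduce the 32GB-to-28GB example from the session; the Invoke-Sqlcmd line assumes the SQLPS module is installed and is left commented out.

    # Illustrative sizing following the session formula:
    # SQL max memory = VM memory - thread stack - OS memory - VM overhead
    $vmMemGB  = 32
    $sqlMaxGB = $vmMemGB - 2 - 2        # ~28 GB, matching the session example

    # Reserve the VM's full configured memory, as recommended for tier-1 SQL.
    $vm = Get-VM -Name 'SQL01'          # placeholder VM name
    $vm | Get-VMResourceConfiguration |
        Set-VMResourceConfiguration -MemReservationMB $vm.MemoryMB

    # Then cap SQL itself (requires 'show advanced options' to be enabled first):
    # Invoke-Sqlcmd -ServerInstance 'SQL01' -Query "EXEC sp_configure 'max server memory (MB)', $($sqlMaxGB * 1024); RECONFIGURE;"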

VMworld 2012: VMware vSphere Hardening to achieve regulatory compliance INF-SEC1840

This was a panel discussion going over the history of the vSphere hardening guides, their current state, some common issues, and future direction for ensuring compliance with regulations such as HIPAA, PCI, DoD STIGs, etc. The upshot is that VMware really listened to customers with their vSphere 5.0 hardening guide and it’s now in a spreadsheet format. Their Configuration Manager product can also scan your environment against the hardening guides and provide a comprehensive report. Next year Configuration Manager will have additional security scanning enhancements.

  • History of the vSphere Hardening Guide
    • Security best practices document, mostly a Linux security best practices guide since the console operating system had the most security concerns. ~2008
    • vSphere 4.0 Hardening Guide – Guidelines organized into formal sections and tabular format. ~2010
      • PDF format was hard to cut and paste from, and limited mitigation and verification information. Categorization not standardized.
    • vSphere 5.0 Hardening Guide
      • Excel spreadsheet format only
      • Better organized
      • Categorized by component (VM, vSphere, ESXi, etc.)
      • Added PowerCLI and CLI automation steps (see the sketch after this list)
      • William Lam script
  • SCAP – Security Content Automation Protocol
  • vSphere 5 XCCDF was created – Available soon
    • Allows tools to automate the scanning of a vSphere environment and report back
  • XCCDF can present human readable text for manual remediation steps
  • Future: VMware Configuration Manager will possibly do SCAP scans
  • Security Hardening: The Past
    • Not timely, different output formats, not always automated, homebrew scripts
  • Security Hardening: The Present
    • OVAL – Open Vulnerability Assessment Language
    • Community driven, supported definitions
    • Supports multiple platforms: Windows, Ubuntu, Solaris, RHEL
    • OVAL strengths: Unified format, scoring, wide adoption, more timely, extensible
    • OVAL weaknesses – Host based, not cloud ready, default vulnerability scanning
  • VMware vCenter Configuration Manager
    • SCAP 1.0 validated
    • Speaks XCCDF, OVAL
    • Assess OS Patch Status
    • Unified Reporting of results
    • SCAP 1.2 in progress
    • Provides auditing and remediation
    • VCM in 2013 will provide much more robust support
    • Supports Windows hardening checks
  • Draft vSphere 5.0 (not 5.1) DISA STIG maybe out by the end of the year.
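
As a small example of the PowerCLI automation mentioned above, here is a sketch that spot-checks a couple of VM advanced settings that appear in the 5.0 hardening guide (the isolation.tools.*.disable family). Treat it as illustrative only, not a replacement for the full guide or an XCCDF/SCAP scanner.

    # Spot-check hardening-guide VM advanced settings; the guide expects "true".
    # Note: a missing setting usually means the default value is in effect.
    foreach ($vm in Get-VM) {
        Get-AdvancedSetting -Entity $vm |
            Where-Object { $_.Name -like 'isolation.tools.*.disable' } |
            Select-Object @{ N = 'VM'; E = { $vm.Name } }, Name, Value
    }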

VMworld 2012: vSphere HA and Datastore Access Outages INF-BCO2807

This session was extremely technical and went over the inner workings of HA. For better and more in-depth details, I would strongly suggest getting the VMware vSphere 5.1 Clustering Deepdive book.

  • HA protects against three failure modes: host/VM failures; host network isolation and datastore PDL; guest OS hangs and application crashes
  • Datastore accessibility outages occur infrequently but have a large cost
  • vSphere 5.0 introduced FDM, or Fault Domain Manager, which completely replaces the 4.x HA agent and software.
  • Datastores are used for two purposes by HA: Communications channel between FDMs and persistent storage for configuration information
  • Heartbeat datastores – two chosen by each host, enables the master to detect VM power states.
  • Best practice: Use the “leave powered on” host isolation response option (see the HA settings sketch after this list)
  • In 5.0 U1, on a Permanent Device Loss (PDL) a guest I/O will trigger the VM to be killed, and HA will restart it on a host that can access the datastore.
  • Futures for HA
    • Add support for All Paths Down (APD)
    • Triggered by PDL/APD declaration rather than guest I/Os
    • Full customization of responses
    • Full user interface and detailed reporting
    • VM placement sensitive to accessibility
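
A short PowerCLI sketch for reviewing the HA settings discussed above. The cluster name is a placeholder, and “DoNothing” is, as best I recall, how PowerCLI 5.x expresses the recommended “leave powered on” isolation response, so verify the value names against your version before applying.

    # Review HA configuration across clusters.
    Get-Cluster | Select-Object Name, HAEnabled, HAFailoverLevel, HAIsolationResponse

    # Set the isolation response to "leave powered on" (DoNothing) on one cluster.
    Get-Cluster -Name 'Prod-Cluster' |
        Set-Cluster -HAIsolationResponse DoNothing -Confirm:$false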

VMworld 2012: Securing the Virtual Environment: How to Defend the Enterprise OPS-CSM1209

Speakers: Davi Ottenheimer, Matthew Wallace. Book: Securing your Virtual Environment

This session was an overview of security considerations you need to keep in mind when virtualizing your environment. In fact, most of the recommendations apply to physical IT systems as well. The speakers went through the 10 chapters of their book (link above) with a high-level summary.

  • Outsider Attack
    • Outsiders not necessarily unknown
    • Role-based access requires roles – sometimes not enough roles are provided
    • PKI is critical but fragile
    • Credentials are insufficiently strong
    • Log as much as you can
    • Log shells in particular
    • For Unix consider sshd ForceCommand to stop unauthorized tunnels
    • Only install software from trusted sources
    • Check package signatures
    • Use two-factor authentication for management tools
  • Making the Complex Simple
    • Panacea fixes gone horribly wrong – IDS not plugged in
    • Simple attack vectors – Unprotected wires
    • Do NOT ignore the vSphere client SSL certificate warning message. Fix this problem ASAP.
  • Abusing the Hypervisor
    • Risk is manageable
    • Log and monitor
    • Protect your end points
    • Load “tenant” VMs and try being promiscuous
    • Mount iSCSI targets or NFS shares
    • Port-scan yourself
    • Automate config checks – XCCDF and OVAL
  • Logging and Orchestration
    • No standard log format

VMworld 2012: vSphere Performance Best Practices INF-VSP1800

Speaker: Pete Boone

This session covered various aspects of a virtualized infrastructure that need to be looked at when optimizing performance. The basic four food groups are memory, CPU, network, and storage.

  • Benchmarking and Tools
    • Consistent and reproducible results
    • Important to have a baseline of acceptable performance
    • Determine baseline of performance prior to deployment
    • Avoid subjective metrics, stay quantitative
    • Benchmarking should be done at the application layer
      • Use application-specific benchmarking tools and load generators
    • Isolate variables, benchmark optimum situation before introducing load
    • Understand dependencies (human interaction, compare apples-to-apples)
  • Tools – vCenter Operations, ESXtop
  • Memory
    • vRAM + overhead = maximum physical memory
    • Transparent page sharing
    • Ballooning
    • Compression
    • Swapping
    • Right sizing – Better to over-commit than under-commit
    • Don’t use memory limits!
    • Ballooning is a warning sign, but not a problem
    • Swapping is a problem if over an extended period
    • Swapping/paging at the guest level – Under-provisioned guest memory
    • Missing balloon driver (VMware Tools)
    • Best practices
      • Avoid high active host memory over-commitment
      • Right-size guest memory
      • Ensure there is enough vRAM to cover demand peaks
      • Use fully automated DRS cluster
      • Use resource pools with high/normal/low shares
      • Avoid custom shares setting
  • CPU
    • CPU cores/threads have to be shared among all VMs
    • ESXtop
      • %USED – Physical CPU usage
      • %SYS – Percentage of time in the VMkernel
      • %RUN – Percentage of total scheduled time
      • %WAIT
      • %IDLE – %WAIT minus %IDLE can be used to estimate I/O wait time
    • vCPUs
      • Relaxed co-scheduling in vSphere 4.x and higher
      • Idle vCPUs incur a scheduling penalty
      • Configure only as many vCPUs as needed
      • Use uniprocessor VMs for single-threaded applications
    • CPU Ready Time
      • Does not necessarily indicate a problem (see the Get-Stat sketch after this list)
    • vCPU to pCPU allocation – Hyper-threading adds about 30% performance
    • Don’t set too many limits or reservations
    • Right sizing vSMP VMs
  • Storage
    • ESXTOP views – Adapter (d), VM (v), Disk device (u)
    • High DAVG – Issue beyond the adapter
    • High KAVG – Issue is in the kernel storage stack – Driver issue, queue
    • Use Storage DRS
    • Snapshots – causes extra load to locate blocks
    • Excessive traffic down one HBA/switch/storage processor can cause latency
    • Use the paravirtual SCSI adapter
  • Networking
    • Load balancing on Port ID is the most compatible
    • Check counters for NICs and VMs
    • 10Gbps NICs can incur significant CPU load when running at 100%
    • If using jumbo frames, ensure it is enabled end to end
    • Use VMXNET3 adapter
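
To put numbers on the CPU-ready and ballooning points above, here is a hedged PowerCLI sketch using Get-Stat. The VM name is a placeholder, and the conversion assumes the 20-second realtime sampling interval.

    # CPU ready: realtime samples report milliseconds of ready time per
    # 20-second interval; convert to a percentage (summed across vCPUs).
    $vm    = Get-VM -Name 'SQL01'
    $ready = Get-Stat -Entity $vm -Stat 'cpu.ready.summation' -Realtime -MaxSamples 30 |
             Where-Object { $_.Instance -eq '' }
    $ready | Select-Object Timestamp,
        @{ N = 'ReadyPct'; E = { [math]::Round(($_.Value / (20 * 1000)) * 100, 2) } }

    # Ballooning: non-zero mem.vmmemctl is a warning sign, not necessarily a problem.
    Get-Stat -Entity $vm -Stat 'mem.vmmemctl.average' -Realtime -MaxSamples 5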

VMworld 2012: vSphere HA Recommendations INF-BCO2382

Speakers: Josh Gray, Jeff Hunter

This was a great session for real-world HA tips and tricks of the trade. Everyone should be using vSphere HA, so the tips apply to everyone. For more information check out the most excellent vSphere 5.1 Clustering Deepdive book by Duncan and Frank.

  • Recent Enhancements in vSphere HA
    • Fault Domain Manager released in v5.0. Completely re-written from scratch.
    • No more reliance on DNS; just uses IP communications
    • Datastore heartbeat added
    • vSphere 5.1: FDM agent included in image, no more manual autodeploy configuration. Can manually set slot size. Enhanced PDL (permanent device loss) handling.
  • HA Networking Recommendations
    • Redundant management network
    • Fewest hops possible
    • Route based on originating port ID (don’t enable Etherchannel)
    • Failback policy = no, if PortFast is not enabled
    • Enable PortFast, Edge
    • MTU size the same
    • Keep things simple
    • Consistent portgroup names and network labels
    • Host monitoring during network maintenance
    • Use maintenance mode
    • Separate subnet for vSphere HA
    • Specify an additional network isolation address, especially if the default gateway will not return pings (see the advanced-settings sketch after this list)
    • Advanced settings: das.AllowNetwork, das.isolationAddress, das.usedefaultisolationaddress
  • HA Storage Recommendations
    • HA selects two datastores by default
    • Override auto-selected datastores only when needed (e.g. there is one highly available array and one that’s less available)
  • Host Isolation Response
    • Policy depends on what is most likely to fail, or what you need to retain access to
  • Admission Control
    • Static Number of Hosts – Sets aside the amount of resources for a single host.
    • vSphere 5.1 allows you to explicitly set the slot size, and re-words the “host failures the cluster tolerates” setting.
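
A sketch of the advanced settings mentioned above, as I recall the PowerCLI 5.x syntax. The cluster name and isolation address are placeholders, and the -Type parameter usage is my assumption, so double-check it against your PowerCLI version.

    $cluster = Get-Cluster -Name 'Prod-Cluster'    # placeholder

    # Extra isolation address (something pingable, e.g. a switch SVI), and stop
    # HA from pinging the default gateway if it will not respond.
    New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name 'das.isolationaddress0' -Value '192.168.10.254' -Confirm:$false
    New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name 'das.usedefaultisolationaddress' -Value 'false' -Confirm:$false

    # Review what is currently set.
    Get-AdvancedSetting -Entity $cluster | Where-Object { $_.Name -like 'das.*' }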

VMworld 2012: Architecting Storage DRS Clusters INF-STO1545

Speakers: Frank Denneman, Valentin Hamburger

This was a great session covering the ins and outs of Storage DRS. SDRS is a great feature of vSphere 5.0, with a couple of minor tweaks for vSphere 5.1. But understanding how it works, when to use it, and when not to is very important.

    • Why storage DRS? Resource aggregation, initial placement, datastore maintenance mode, load balancing, affinity rules.
    • Resource Aggregation
      • Simplifies storage management
      • Single I/O and capacity pool for initial placement
      • Datastores added or removed transparently
      • Storage DRS settings are configured at datastore cluster level
    • Initial Placement
      • Select on space utilization and I/O Load
    • Maintenance Mode
      • Evacuates all VMs and VMDKs
      • Compared to host maintenance mode
    • Load Balancing – Most popular feature
      • Triggers on space usage and latency threshold
      • Defaults are 80% space utilization and 15ms I/O latency (see the space-utilization sketch after this list)
      • Space balancing is always on
      • I/O workload can be disabled
      • Manual or fully automated mode
      • Triggered every 8 hours – Uses 16 hours of performance data.
      • VM migrations can happen once a day
      • SDRS will do cost/benefit analysis of a move
    • Affinity Rules
      • Intra-VM VMDK affinity – Keep VM’s VMDKs on same datastore
      • VMDK anti-affinity – Keep VM’s VMDKs on different datastores. Can be used for separating log and data disks of a VM.
      • VM anti-affinity – Keep VMs on different datastores. Maximize availability of a set of redundant VMs.
    • vCloud Director 5.1 is compatible with SDRS
    • DRS cluster can connect to multiple datastore clusters
    • A datastore cluster can connect to multiple DRS clusters
    • SDRS does NOT leverage Unified vMotion (no shared storage)
    • A datastore cluster can contain datastores from different arrays
      • But cannot leverage VAAI, so storage vMotion will take longer
      • Used mostly for storage array migrations
    • Can’t mix NFS and VMFS datastores in a cluster
    • Strongly recommend using VMFS-5 (unified block size)
    • Don’t upgrade VMFS datastores from 3.x to 5.x. Format LUN from scratch for consistent block size.
    • Recommend same sized datastores for datastore clusters. Multiple sizes can work, but not a good idea.
    • Big datastores are a large failure domain
    • SIOC is not supported on extents, so SDRS cannot I/O load balance.
    • SDRS is aware of thin-provisioned VMs, and cost calculations incorporate actual space used
    • SDRS looks at the growth rate of thin-provisioned disks and adds that to the calculation
    • Datastore defragmentation – Pre-requisite move can take place to optimize VM placement
    • vSphere 5.1 – Advanced option to keep VMDKs together
    • Don’t mix datastores with different storage capabilities (SSD, FC, SATA). It is not prohibited, but don’t do it.
    • Use storage profiles to identify performance/SLA/location
    • What about array based tiering?
      • SIOC injector opens random blocks and may not get accurate info
      • Device modeling can be thrown completely by array based tiering
      • Set datastore clusters to manual I/O load balancing, or totally disable
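
For a quick look at how your datastores sit against the default 80% space threshold mentioned above, a small PowerCLI sketch (reporting only, nothing is changed):

    # List datastores above the default 80% Storage DRS space threshold.
    Get-Datastore |
        Select-Object Name, CapacityGB, FreeSpaceGB,
            @{ N = 'UsedPct'; E = { [math]::Round((1 - ($_.FreeSpaceGB / $_.CapacityGB)) * 100, 1) } } |
        Where-Object { $_.UsedPct -gt 80 } |
        Sort-Object UsedPct -Descending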

VMworld 2012: Tech Preview of Software-defined Storage Technology

This session focused on VMware’s concept of software defined everything, and in particular for this session, software defined storage. The session wasn’t all just vaporware PowerPoint slides. They had a pre-recorded video of configuring their “distributed storage” and showing how storage profiles worked. For example, I could set the SLA on a VM for a minimum of 100 read IOPS, maximum of 5,000 write IOPS, and other parameters. vSphere will then move the VMDKs dynamically to the right storage pool automatically to ensure the SLA is met. Pretty cool stuff, if it works as advertised. I’m completely speculating here, but I expect this will be included in vSphere 6.0 coming out late next year if they maintain their goal of yearly vSphere releases.

Session notes:

    • Automating for all apps across all storage
    • Industry trends:
      • 2016: Up to 60TB on a 3.5″ disk
      • Flash is becoming a viable storage platform
      • Data is growing. 9x data growth between 2010 and 2015
      • New application architectures – Mobile apps, Hadoop
    • New Approach to Storage
      • Converge with compute platform
      • Be managed as a resource
      • Scale on demand
      • Provide per-VM automated SLA management
    • VMware Distributed Storage
      • Software layer built into the hypervisor
      • Uses local storage (SSD and HDD) on ESX hosts
      • Converges storage and compute platform
      • VSAN is a property of a vSphere Cluster
      • Dynamic scaling
      • Not all hosts have to be identical, and they are not even required to have local storage
      • Data store is persistent storage aggregated from local HDDs across hosts
      • SSDs used for performance acceleration – Read caching and write buffering
    • Policy-Based Management
      • Configure capacity, availability, IOPS – Enforce SLA for VM life cycle
      • Profiles are pre-defined policy templates
      • Every virtual disk can have a different policy configuration
    • Cluster-wide storage accessibility – Every VM can provision and execute on any host and dynamically move around
    • Fault tolerant against host and storage failures
    • Demo
      • 7 ESX hosts with no local storage
      • 4 ESX hosts with 1 HDD and 1 SSD
      • Adding hosts to a cluster automatically adds local non-provisioned storage into the VSAN datastore. Datastore capacity increases as hosts are added.
      • Shows creating a storage policy that can set read/write IOPS min/max levels, and availability.
      • Shows creating a new VM, and selecting the storage availability policy and the VM is placed on the distributed datastore.
      • Shows vMotioning a VM from one node with no local storage to another node with no local storage, yet still using the distributed datastore.
      • Shows a host crashing which had local storage, yet the VM continued to run uninterrupted
    • Primary Use-Cases
      • VDI – Simple to use, no bottlenecks
      • Test and Dev – Fast provisioning, lower TCO
      • Big Data (Hadoop) – Scale out, high bandwidth
      • DR Target – Reduce hardware at remote site
    • Cisco UCS
      • Rack Servers – 24 drives, HDD, SSD, PCIe Flash
      • Blade servers with HDD, SSD, PCIe Flash
      • Expect to see integrated products from Cisco to use VSAN
      • Stateless blade servers can access lots of local flash disk on rackmount servers

VMworld 2012: Insight into vMotion and Futures INF-VSP1549

This session recapped how vMotion currently works, enhancements in vSphere 5.1, and a “future” version. The biggest news for vSphere 5.1 is Unified vMotion, which requires no shared storage. Now on-par with Hyper-V 3.0. Future vMotion capabilities are very cool….in short, live vMotion any VM anywhere around the globe. Session summary:

      • vMotion is deployed in 80% of environments
      • 5.5 vMotion operations occur every second around the world
      • vMotion uses iterative memory pre-copy – Copy rate must exceed page dirtying rate
      • Storage vMotion uses 64MB copy regions
      • Unified vMotion – New to vSphere 5.1
        • Entire VM state (storage and memory) is migrated without shared storage
      • Performance of Unified vMotion
        • Same disk migration time for storage vMotion and Unified vMotion
        • Higher total migration time for Unified vMotion due to memory copy process (very minimal increase)
        • Multi-NIC feature is useful for large VMs
        • Performance in metro scenarios (0.5ms, 5ms, and 10ms) is the same, as it uses the same metro vMotion code path optimizations as vMotion
      • Best Practices
        • Upgrade to latest VMFS file system if you aren’t using 1MB blocks
        • Use multiple NICs for vMotion (see the sketch after this list)
          • Same subnet for all IPs
          • Configure each vmknic to use a different vmnic
        • Provision at least 1Gbps management network when migrating VM with snapshots
        • When the source host has access to destination datastore it can use VAAI offload
      • Future vMotion capabilities
        • Cross-vCenter vMotion – Enable long-distance vMotion architectures
          • Simultaneously change compute, storage, networks and vCenter
          • Requires L2 connectivity between vDS portgroups using same VLAN, VXLAN, or third party L2 extensions
          • UUID is tracked and kept for the life of the VM
          • Events, alarms, task and stats history preserved across datacenters
          • HA and DRS are supported, including admission control
          • vCenters must belong to the same SSO domain, metro distances only
        • Cross-VDC (virtual datacenters) vMotion
          • Live migration of vApps across local VDC boundaries
          • Operation initiated by admins and tenants
        • Long-distance vMotion
          • Support geo distances
          • Disaster avoidance
          • Requires secure VM migration channel, L2 connectivity, global site load balancer and NAT servers, unified vMotion, cross-vCenter vMotion.
          • Supports three architectures – Unified vMotion with share nothing; Utilize active-passive storage replication using virtual-volumes; Utilize active-active asynchronous storage plus virtual volumes
          • vMotion anywhere (private to private, private to public, public to public) clouds
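
To close, a hedged PowerCLI sketch of the multi-NIC vMotion best practice above. Host, switch, portgroup names, and addresses are placeholders, and NIC teaming (flipping the active/standby uplinks per portgroup) still has to be configured so each vmknic maps to a different physical NIC.

    # Placeholder host, switch, and addressing; both vmknics on the same subnet.
    $vmhost = Get-VMHost -Name 'esx01.lab.local'
    $vss    = Get-VirtualSwitch -VMHost $vmhost -Name 'vSwitch0'

    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vss -PortGroup 'vMotion-A' -IP '10.10.10.11' -SubnetMask '255.255.255.0' -VMotionEnabled:$true
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vss -PortGroup 'vMotion-B' -IP '10.10.10.12' -SubnetMask '255.255.255.0' -VMotionEnabled:$true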