Archives for April 2013

San Diego VMUG: Infrastructure Convergence with SimpliVity

This San Diego VMUG session was put on by Gabriel Chapman of SimpliVity. If you’ve never heard of SimpliVity, you aren’t alone. They are a start-up that emerged from stealth mode last year and has been shipping product for about a month. Their OmniCube is an all-in-one 2U platform consisting of compute, memory, storage, ESXi, inline de-dupe and compression, SSD, SATA, and a management layer.

You no longer have to manage separate computing and storage devices. Multiple OmniCubes can be federated for high-availability and remote data replication. Two key tenets of the platform are simplicity and cost efficiency. It was a high level session, so it didn’t delve into a lot of technical details. But I did grab a few notes from the presentation:

The Promise of Convergence

  • Consolidation, greater flexibility, ease of use
  • VMware enables convergence, but it’s not all roses. Increased complexity, siloed management (storage, network, compute)

Efficient Data Management Features of the OmniCube:

  • Data efficiency – De-dupe and compression
  • Data mobility- Rapid clones, replication
  • Management – vCenter plug-in, globally managed
  • Deployment – Simple install, VMware integrated, scale out
  • Obsolete – LUN provisioning, RAID
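
The inline de-dupe and compression the OmniCube advertises can be illustrated with a toy content-addressed block store. This is my own sketch of the general technique, not SimpliVity's actual implementation: each fixed-size block is hashed, duplicates are stored once and referenced by hash, and unique blocks are compressed before landing on disk.

```python
import hashlib
import zlib


class InlineDedupeStore:
    """Toy content-addressed store illustrating inline dedupe + compression."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}      # hash -> compressed unique block
        self.refcount = {}    # hash -> number of logical references

    def write(self, data):
        """Ingest a byte stream; return the list of block hashes (the 'recipe')."""
        recipe = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.blocks:            # new data: compress and store once
                self.blocks[digest] = zlib.compress(block)
            self.refcount[digest] = self.refcount.get(digest, 0) + 1
            recipe.append(digest)
        return recipe

    def read(self, recipe):
        return b"".join(zlib.decompress(self.blocks[h]) for h in recipe)


store = InlineDedupeStore()
payload = b"A" * 8192 + b"B" * 4096   # two duplicate 'A' blocks + one 'B' block
recipe = store.write(payload)
print(len(recipe), len(store.blocks))  # 3 logical blocks, only 2 stored
assert store.read(recipe) == payload
```

The point of doing this inline (at write time) rather than post-process is that duplicate blocks never consume disk space or I/O in the first place.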

SimpliVity OmniCube

  • 3x lower acquisition cost, power and operating costs
  • Combines compute, networking, storage (SSD and SATA)
  • Real-time inline data dedupe/compression
  • Global management
  • Each cube has a standard configuration of memory, disk, CPUs

This looks like an interesting product for certain use cases. Could be great for branch offices where you may not have highly skilled engineers, or smaller shops that want to virtualize but don’t have virtualization and storage gurus. Or maybe departments in large companies that need to do their own thing, and want something easy, affordable, and fully converged.

San Diego VMUG: Overcome Challenges with Tier-1 Apps

This was the best session of the day at the San Diego VMUG! Dave Elliott and Dave Troutt from Symantec presented the detailed methodology Symantec used to define requirements essential to successfully virtualizing tier-1 apps like SQL Server, Exchange and SharePoint. Bottom line is you can virtualize nearly any app that doesn’t rely on unique hardware, but major tier-1 apps need special consideration. They actually built up a lab (based on HP hardware) and performed extensive testing. The 28-page Whitepaper is here. I also learned Symantec has an EMC PowerPath-like product, called Dynamic Multi-Pathing for VMware.

Virtualization Tailwinds

  • What’s driving the push to virtualize tier-1 applications?
  • Implementing a “virtual first” policy
  • Consolidate IT infrastructure
  • “90% of all enterprise applications will be virtualized within the next two years” – VMware
  • Virtualization Journey: Capex savings, Opex savings, Self-service (IT as a service)

Virtualization Headwinds

  • Design challenges: High SLAs, security, governance, large data, performance
  • Five domains within the architecture: Data protection, storage management, high availability, security, archiving

Tier-1 App Platform Design Objectives

  • Ensuring SLAs for performance, scalability and availability can be met. Parity with physical or better.
  • Support large databases
  • Provide non-disruptive, off host backups for all types of storage (VMDK, RDM)
  • Enable fast granular recovery of files
  • Reduce infrastructure and management costs
  • Security and compliance

Required Capabilities

  • Pools of fast, resilient, dynamic storage
  • Thin provisioning, thin reclamation, snapshots
  • SAN or iSCSI connectivity with multi-pathing
  • Support VMDK and RDM devices
  • Provide visibility, monitoring, reporting, management and chargeback
  • Provide visibility, reporting and management of availability across all application tiers
  • Security needs to harden, protect, and monitor the systems against unauthorized access and changes
  • Automatically archive historical data onto less expensive storage

TOGAF – The Open Group Architecture Framework. It is an excellent framework to properly document your IT architecture in a simple but meaningful manner.

Symantec Reference Architectures for SQL 2008, Exchange 2010, and SharePoint 2010 are here:

  • Tested and validated by VMware on HP hardware (ProLiant servers and 3PAR storage)
  • Tested HA, disaster recovery, data protection, thin provisioning, security, reporting
  • Key products: NetBackup, VMware HA, Application HA, Veritas Cluster Server

Find out more at:

San Diego VMUG: vCloud Director Best Practices

This session covered some high level vCloud Director best practices, presented by Spencer Cuffe (VMware). It was a short session without a lot of detail, but for what it’s worth here are a few notes:

  • Pre-Reqs – One cell per vCenter, NTP across all devices, one vCNS per vCenter, one provider vDC per vSphere Cluster


  • There is no one size fits all solution
  • There are some things very challenging to change post-development
  • Allocation models: Changing may require powering down VMs to apply limits/reservations
  • Storage tiering is important (capacity, tiers, I/O requirements, fast provisioning [don’t use it everywhere])
  • Networking: Configure the external network first, then configure the DVS
  • IP addressing within the VM will require a power cycle if changed
  • VXLAN: Plan for it and prepare vCNS with it first
  • Use distributed switches for everything

vCloud Director Use Cases

  • Hosted/Public Cloud – Customer isolation, catalog isolation
  • Development – Consistency, on-demand environments
  • Testing – Dev, IT, vendor packages, etc.
  • QA – Pre-prod clean environments
  • Support desk – Test in isolated environment
  • Private/Hybrid Clouds


  • Storage gets used quickly
  • Services: DHCP, DNS, NTP, etc.
  • Who is responsible? Administrators, organization admins, template maintenance, firewalls, remote access, etc.
  • Access to VMs? Jump box, remote console, SSH, etc.

Redundancy and Maintenance

  • Load balance cells
  • VMware vCenter heartbeat
  • Consider VMware FT/HA for vCNS
  • Use maintenance mode when doing maintenance on a cell
  • Use “Display debug information” to dig deeper into error messages
  • Configure syslog to capture all the activities centrally

Using vCD

  • Set HA host failures to a percentage instead of N+1
  • Use VMware Orchestrator to automate common tasks
  • When using fast provisioning, end users should have a limited lifecycle for vApps.

San Diego VMUG: VMware Backup Tips from Veeam

At the San Diego VMUG this session was presented by Rick Vanover from Veeam. He covered some tips and tricks for doing VMware backups. Of course the session was Veeam focused, and highlighted features of their backup software.

Tips for efficient VM Backups

  • Take a modern approach
  • We have a robust platform with vSphere, use it
  • Seek an approach built for virtualization – Remove burdens of agents inside VMs, easy on VM administrator, and scalable
  • Do not put the backup data where the VMs are running (e.g. SAN). Storage can fail.

Disk-Based Backup Flexibility

  • Many people do disk-to-disk backups with VMs
  • vSphere gives you a framework with many options (VDDK, VADP, direct SAN access, etc.)
  • Hardware dedupe appliances are more efficient than backup software dedupe

Upgrades with preparation

  • A virtual lab can help you ensure that a critical upgrade will go smoothly as planned
  • Restore VMs to a sandbox and test upgrades without affecting production

Know where Bottlenecks Exist

  • Backing up VMs has many moving parts
  • It’s important to know what may be slowing down your backups
  • Source storage, network, disk target, CPU resources, backup window

Veeam Explorer for Exchange

  • Restore Exchange items directly from the Veeam backup
  • Search and browse across one or more Exchange databases
  • Recover emails, contacts, calendar items, tasks, etc.
  • No agent needed, free in all editions of Veeam Backup and Replication

San Diego VMUG: The Power of Server Side Caching

This was a good session presented by Proximal Data, focusing on using flash-based cache in your VMware environment. They have a product called AutoCache that boosts storage performance for VMware environments. Sounds like an interesting product. Session went by very fast, but here are some of the notes I took:

Proximal Data Company profile:

  • Vision: I/O Intelligence in the hypervisor is a universal need
  • Near term value is in making use of flash in virtualization

Overview: Proximal Data AutoCache

  • I/O caching software for ESXi 4.x to 5.x
  • Up to 2-3x VM density improvement
  • Business critical apps accelerated
  • Transparent to ESXi features like vMotion, DRS, etc.
  • Needs only a modest amount of flash
  • Simple to deploy: Single “VIB” installed on each ESXi host
  • vCenter plug-in: Caching effectiveness, cache utilization by guest VM

Case Study

  • Month end processing report now takes 6.5 hours instead of 36.5 hours
  • Eliminated need to vMotion other guests off during month end processing
  • Tripled VM density on database servers
  • Decreased SAS analytics report time by 85%

Flash – The Good

  • Much faster than disks for random I/O – Sequential I/O performance difference is not as dramatic
  • Cheaper than RAM

Flash – The Bad

  • More expensive than spinning disks
  • Slower than RAM
  • Asymmetric read/write characteristics – Reads are much faster, writes cause a lot of wear
  • Wears out/limited lifespan

Flash – The Ugly

  • Must be erased to be written
  • Erase granularity is not the write granularity
  • Typical write granularity is 512 bytes, typical erase granularity is 32K, 64K or 128K
  • Write/erase characteristics have led to complexity (Flash translation layers, fragmentation, garbage collection, write amplification)
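
The erase-vs-write granularity mismatch in the bullets above is what drives write amplification, and a little arithmetic makes it concrete. This is my own illustration, not from the session: to update a few bytes in place, a naive controller must relocate the erase block's still-valid data, erase, and rewrite everything.

```python
def write_amplification(write_bytes, valid_bytes_in_block):
    """Naive write amplification for an in-place update on flash.

    Rewriting `write_bytes` inside an erase block forces the controller to
    also rewrite the block's remaining valid data after the erase, so the
    physical bytes written far exceed the logical bytes requested.
    """
    total_physical_writes = valid_bytes_in_block + write_bytes
    return total_physical_writes / write_bytes


# Updating 512 bytes inside a 128K erase block that is 90% full of valid data:
wa = write_amplification(512, int(128 * 1024 * 0.9))
print(round(wa, 1))  # roughly 231x the logical write hits the flash
```

Real flash translation layers avoid the worst case by redirecting writes to fresh pages and garbage collecting later, but the wear penalty for small scattered writes is why the session recommends writing in large chunks.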

Flash – Not all are equal

  • Steady state performance of controllers – as much as 50% performance loss in steady state vs. new (stay with Intel, Micron, LSI, SandForce; avoid third-tier vendors)
  • MLC is much cheaper and higher density and is the future, but is not as robust and wears out faster than SLC

Flash – Ideal Usage

  • Random I/O requests – greatest performance gains
  • A lot more reads than writes
  • Write in large chunks
  • Avoid small writes to same logical locations
  • If data is critical use SLC
  • Read caching is an ideal use of flash

Caching is Everywhere

  • Disks have caches, array/RAID controllers, HBAs, OS, application

Caching Basics

  • Working set of data is likely a subset of the data
  • Caches are used to manage the “working set” in a resource that is smaller, faster and more costly than the main storage resource
  • Cache works best when data flows from a slower device to a faster one
  • Read caches primarily help read bound systems
  • Write-back caches primarily help bursty environments
  • Caches will continue to exist in all layers of the infrastructure
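
The "working set" idea above is easy to demonstrate with a minimal LRU read cache. This sketch is my own illustration of the general principle, not Proximal Data's algorithm: when a skewed workload keeps re-reading a small hot set, even a cache far smaller than the total data absorbs most reads.

```python
import random
from collections import OrderedDict


class LRUReadCache:
    """Minimal LRU read cache: keeps the hot 'working set' on the fast device."""

    def __init__(self, capacity, backing_read):
        self.capacity = capacity
        self.backing_read = backing_read   # slow-path read from backing storage
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)   # mark as most recently used
            self.hits += 1
            return self.cache[block_id]
        self.misses += 1
        data = self.backing_read(block_id)     # slow device
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return data


# Skewed workload: 90% of reads hit 10 hot blocks out of 1000 total.
random.seed(1)
cache = LRUReadCache(capacity=50, backing_read=lambda b: f"block-{b}")
for _ in range(10_000):
    block = random.randrange(10) if random.random() < 0.9 else random.randrange(1000)
    cache.read(block)
print(f"hit rate: {cache.hits / 10_000:.0%}")
```

A cache holding 5% of the blocks serves roughly 90% of the reads here, which is exactly why "working set is a subset of the data" matters: cache sizing should track the hot set, not total capacity.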

Flash in a Hypervisor

  • Most caching algorithms developed for RAM caches – No consideration for device asymmetry
  • Hypervisors have very dynamic I/O patterns
  • Hypervisors are I/O blenders
  • Must consider shared environment (latency, allocations, etc.)

Complications of Write-Back Caching

  • Writes from VMs fill the cache
  • Cache ultimately flushes to disk
  • Cache overruns when disk flushes can’t keep up
  • If you are truly write-bound, a cache will not help
  • Write-back cache handles write bursts and benchmarks well but is not a panacea
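
The "bursty helps, write-bound doesn't" distinction in the bullets above can be shown with a tick-based toy model. This is my own sketch, not anything presented in the session: the cache absorbs write bursts as long as the backing disk can drain it between bursts, but under sustained writes it fills and incoming writes stall at disk speed.

```python
def simulate_write_back(burst, burst_len, idle_len, flush_rate, cache_cap, ticks):
    """Tick-based model of a write-back cache absorbing write bursts.

    Writes arrive at `burst` units/tick during bursts; the backing disk
    drains `flush_rate` units/tick continuously. Once the cache fills,
    excess writes 'stall' -- the overrun case where the cache stops helping.
    """
    cached = stalled = 0
    period = burst_len + idle_len
    for t in range(ticks):
        incoming = burst if t % period < burst_len else 0
        cached += incoming
        if cached > cache_cap:                 # cache full: writes become disk-bound
            stalled += cached - cache_cap
            cached = cache_cap
        cached = max(0, cached - flush_rate)   # destage to backing disk
    return stalled


# Bursty workload: short bursts with idle time to destage -- no stalls.
print(simulate_write_back(burst=100, burst_len=5, idle_len=15,
                          flush_rate=30, cache_cap=1000, ticks=400))
# Sustained write-bound workload: the cache overruns and stops helping.
print(simulate_write_back(burst=100, burst_len=20, idle_len=0,
                          flush_rate=30, cache_cap=1000, ticks=400))
```

In the first run the idle windows give the disk time to drain the cache, so every write lands at cache speed; in the second, inflow permanently exceeds the flush rate and the cache only delays the inevitable.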

Disk Coherency

  • Cache flushes MUST preserve write ordering to preserve disk coherency
  • Hardware copy must flush caches
  • Hardware snapshots do not reflect current system state without a cache flush

Evaluating Caching

  • Results are entirely workload dependent
  • Benchmarks are terrible for characterizing devices. You can make IOmeter say anything you want.
  • Run your real storage configuration for meaningful results
  • Beware of caching claims of 100x or 1000x improvements

Flash Caching Perspective

  • Flash will be pervasive in the enterprise
  • Choose the right amount (as little as 200GB can provide a large boost)
  • The closer the cache to the processors, the better the performance

San Diego VMUG: Software-Defined Datacenter Brief

Today at the San Diego VMUG Michael Ibarra, Office of the CTO (VMware), gave us a very quick brief on what is the Software Defined Datacenter. Here are a few highlights of his quick session:

  • Software is taking over the world
  • The shifting landscape: Delivery methods (cloud), devices (tablets), applications, work style (anywhere any time)
  • CIOs are on a quest to make infrastructure just work
  • IT must compete for the company’s IT business (versus outsourcing to public cloud providers)
  • Specialized software is replacing specialized hardware in the datacenter
  • Waves of change in IT: Mainframe, mini-computer, PC, networked/distributed computing, virtual/cloud computing
  • Apps Drive Platforms
  • Current datacenters are a conglomeration of past IT decisions (mainframe, Unix, RISC, etc.)
  • The price for computing power continues to drop at a rapid rate
  • Ability now to virtualize almost all applications
  • Ability to virtualize almost all hardware components (networking and storage)
  • Today it can take 5 days to approve and configure a VM, but in the future the software defined datacenter could get that down to three minutes
  • Future: Pools of compute, memory, storage, networking.
  • Abstract, pool, automate
  • SDDC is taking pool of resources and divvying them up appropriately, even though they share common hardware
  • Multiple virtual datacenters each with its own workloads and requirements on the same hardware (e.g. Finance, R&D)
  • Key: This has to apply to any and all applications (not just a few or most). Must work across the board (tier-1, SAP, Exchange, SQL, etc.)
  • Check out the vCloud Suite Editions
  • What about end user computing (EUC)? It’s not just about VDI these days.

Whoops… Blogspot redirection now working again

Due to human error on my part, per-post redirection from my old BlogSpot blog was just sending people to the homepage of my new site, not the correct post. I fixed the problem, so now permalinks to any old posts will send you to the same post on my new site. Sorry for the confusion.

Now back to your regularly scheduled blog….

HP ESXi 5.1 Update 1 Custom ISO Released

Just like clockwork, the day VMware releases a major update to ESXi, HP releases customized installation media the same day. Hard to beat such great and timely support for your VMware environment! Last week VMware released vSphere 5.1 Update 1, and also posted was the HP ESXi 5.1 Update 1 custom installation ISO.

If you use HP ProLiant servers, then using the HP customized ISO is strongly recommended, or even required (Gen8), for a properly functioning host. Baked in are tested drivers and HP management tools that enable full ProLiant functionality with VMware ESXi. The HP ESXi 5.1 Update 1 custom ISO is your one-stop shop for an optimized ProLiant server.

Remember, as always, to keep your HP ProLiant server firmware up to date with supported versions. That is very important for ESXi hosts, as the firmware and drivers are tested together in bundles. For maximum stability, performance, and best practices you need to ensure your servers are under a supported recipe. You can download the latest HP Service Pack for ProLiant here.

HP Provider Features

  • Report installed licenses for HP Dynamic Smart Array Controller.
  • Report New memory properties.
  • Support for IP Address encoding in SNMP traps.
  • Support SMX MemoryModuleOnBoard association.
  • HP Dynamic Smart Array Controller split cache support.
  • Report New RAID levels for storage volume fault tolerance.
  • HP Smart Cache support.
  • Update reporting of Smart Array Cache Status to align with firmware and iLO.

HP AMS features

  • Report running SW processes to HP Insight Remote Support.
  • Report vSphere 5.1 U1 SNMP agent management IP and enable VMware vSphere 5.1 U1 SNMP agent to report iLO4 management IP.
  • IML logging for NIC, and SAS traps.
  • Limit AMS log file size and support log redirection as defined by the ESXi host parameter: ScratchConfig.ConfiguredScratchLocation

SR-IOV Support

  • Updated Intel 10Gb network driver to enable SR-IOV for the HP 560FLB, 560M, 560 SFP+, and 560FLR-SFP+. (Note: ESXi Enterprise Plus is required for SR-IOV)

Other Enhancements

  • FCoE on Emulex CNAs
    To support FCoE on Emulex CNAs on vSphere 5.1 U1, HP recommends using the versions of the vSphere 5.1 U1 Emulex drivers as defined in the HP ProLiant server and option firmware and driver support recipe document available here.
  • iSCSI on Emulex
    iSCSI on Emulex is now supported on vSphere 5.1 U1 using the versions of the Emulex drivers defined in the October HP ProLiant server and option firmware and driver support recipe document available here.


HP VMware vSphere 5.1 U1 Customized Image, April 2013, Release Notes
HP VMware vSphere 5.1 U1, April 2013, Driver VIBs
April 2013 VMware Firmware and Software Recipe
HP Custom Image for ESXi 5.1 Update 1 Install CD
HP ProLiant Server VMware Support Matrix

VMware vSphere 5.1 Update 1: Is it for you?

VMware vSphere 5.1 Update 1 is probably one of the most anticipated updates of the VMware stack, and it has finally hit the streets. For those of you following the release of vSphere 5.1, you saw the GA release last fall, followed by 5.1.0a, then a couple of months later 5.1.0b, all addressing bugs and ironing out critical installation issues.

VMware vSphere 5.1 Update 1 has a laundry list of improvements, support for new Microsoft products, and a lot of bug fixes. If you are still on vSphere 5.0 or tried 5.1 in the past and ran into problems, you definitely need to check out vSphere 5.1 Update 1. If you want a complete vSphere 5.1 installation guide, check out my 15-part blog series here. I will be updating it in the near future for Update 1. If you are running vSphere 5.1, there are a number of security vulnerabilities addressed in the update so start planning your upgrade.

VMware Known Issues with vSphere 5.1 Update 1

Today VMware posted a new KB warning about a vSphere 5.1 Update 1 bug, which may affect customers. The problem prevents you from logging into the vSphere Web Client using an AD account if your AD account is a member of approximately 19 or more domain groups and the SSO service is configured with multiple domains. The KB states that until a hotfix is released, DO NOT upgrade to vSphere 5.1 Update 1. In many enterprise environments a vSphere administrator may be in dozens of groups, depending on how access is controlled within the domain. Fewer customers will probably have SSO configured for multiple domains, so the impact of this issue is probably limited to larger enterprises. Additional issues include:

  • If you are using the vSphere Storage Appliance, you MUST upgrade to vSA 5.1.3 after you upgrade the rest of your infrastructure to vSphere 5.1 Update 1. vSA 5.1.1 is NOT compatible with vSphere 5.1 Update 1.
  • You can NOT use the simple installer to upgrade from prior 5.1 versions to 5.1 Update 1. You must utilize the individual installers.
  • Windows Server 2012 failover clusters are NOT supported on ESXi 5.1 Update 1. The cluster validation wizard gets stuck in an endless loop and you are unable to form the cluster.

What got updated in vCloud Suite 5.1 Update 1?

  • ESXi 5.1 Update 1 Build 1065491
  • vCenter Server 5.1 Update 1 Build 1065152
  • vSphere Data Protection
  • vSphere Replication 5.1.1
  • vSphere Storage Appliance 5.1.3
  • vCenter Orchestrator Appliance 5.1.1
  • vCloud Director 5.1.2
  • vCenter Site Recovery Manager 5.1.1
  • vSphere 5.1 Update 1 Virtual Disk Development Kit
  • vSphere CLI 5.1 Update 1
  • VMware Converter Standalone 5.1 (Download here)
  • VMware vCenter Server Heartbeat 6.5 Update 1
  • VMware vSphere Management Assistant ( – April 4, 2013)
  • HP Custom Image for ESXi 5.1.0 Update 1 Install CD

You can find all of these downloads in the usual place, My VMware. You can download the updated documentation archive ZIP bundle here. The full documentation page is here.

vCenter Server 5.1 Update Release Notes

vCenter 5.1 Update 1 is more than just bug and security fixes; it incorporates a number of newly supported operating systems and database back-ends. You can find the full release notes here. Below is just a tiny fraction of the new features and bugfixes.

What’s New?

  • vCenter Server can be installed on Windows Server 2012
  • vCenter can use Microsoft SQL Server 2012 and SQL Server 2008 R2 SP2
  • Guest OS customization support for Windows 8, Windows Server 2012, Ubuntu 12.04 and RHEL 5.9
  • Removed vRAM usage limit of 192GB on vSphere Essentials and Essentials Plus

Resolved Issues

A lot of bug fixes are included, but a few highlights include:

  • Better error reporting when accidentally updating the Admin or STS service with incorrect protocol parameters. It will now tell you what you botched up.
  • Number of security patches including Java, tcServer, vCSA remote code vulnerability
  • Upgrade issues from 5.1.0a to 5.1.0b

VMware ESXi 5.1 Update 1 Release Notes

  • Mirrors the new guest OS support in vCenter 5.1 Update 1. Full 200+ page OS compatibility matrix is here.
  • Contains several security patches (glibc, libxslt, libxml2)
  • Resolved: Long running vMotion operations might result in unicast flooding
  • Windows Server 2012 failover clustering is not supported

You can find the ESXi 5.1 Update 1 full release notes here.

vCloud Director 5.1.2 Release Notes

Like vCenter 5.1 Update 1, vCloud Director has some new features and many resolved issues. Full release notes are here. The full vCloud Director 5.1.2 documentation set is here.

What’s new?

  • Ability to delegate creating, reverting, and removing snapshots
  • You can install vCloud Director on Red Hat Enterprise Linux 6.3
  • You can install vCloud Director using Microsoft SQL Server 2012 databases
  • Supports customization of Windows Server 2012 guest operating systems

Resolved Issues

  • Security vulnerabilities addressed by updating Java to 1.6.0_37
  • Multiple bug fixes, see full release notes

vCenter Converter Standalone 5.1 Release Notes

The new version of Converter has added a number of great new features and broader operating system support. You can find the full release notes here.

  • Supports VM hardware version 9
  • Guest operating system support for Windows 8 and Windows Server 2012
  • Guest operating system support for Red Hat Enterprise Linux 6
  • Support for machine sources that use GPT partition tables
  • Support for systems that use UEFI
  • Support for EXT4 file system

vCenter Server Heartbeat 6.5 Update 1 Release Notes

No major changes here, but incremental support for the latest VMware products. Full release notes are here.

  • Support for vCenter 5.1 Update 1
  • Support for View Composer 5.2

vSphere Data Protection 5.1.20 Release Notes

More than just bug fixes, VMware added many new features to this build. Full release notes are here. A subset of the new features:

  • Integration with vCenter alarms and alerts notification system
  • Ability to clone backup jobs
  • New filters to restore tab
  • Expands capacity up to 8TB per appliance
  • Supports the ability to expand existing datastores
  • Supports guest-level backups of Microsoft SQL Servers
  • Supports guest-level backups of Microsoft Exchange Servers

vSphere Storage Appliance 5.1.2 Release Notes

Like vSphere Data Protection, the vSphere Storage Appliance has many new features. The full release notes are here.

  • Support multiple VSA clusters managed by a single vCenter Instance (about time)
  • Ability to run vCenter Server on a subnet different from the VSA cluster
  • Support for running the VSA on one of the ESXi hosts in the VSA cluster
  • Ability to install the VSA on an existing ESXi host that has running VMs
  • Ability to increase the storage capacity of a VSA cluster
  • Up to 24TB of storage per node
  • Multiple RAID types (RAID 5, RAID 6, RAID 10)


vSphere 5.1 Update 1 will be a welcome upgrade for customers already running vSphere 5.1. After a rocky start for vSphere 5.1 GA, VMware has clearly been working on stability, bug fixes, and supporting the latest Microsoft operating systems and SQL databases. The vCloud Suite is ever expanding, so when you go to download all the components you will see over two dozen downloads you can choose from. If you’ve been hesitant to move up to vSphere 5.1, give 5.1 Update 1 a whirl in your lab and see if it’s stable enough for you.

vSphere 5.1 Suite

VMware Virsto Acquisition: VMUG Presentation

Last month at the March San Diego VMUG they had Mark Davis (former CEO) from Virsto give a very informative presentation and some details on why VMware acquired them. I would have gotten this article done sooner, but between getting my new blog site up and other things it’s been sitting in draft form for a month. No longer!

Having never heard of Virsto before, I really had no background in what they did except a vague idea they did something with virtualization and storage. VMware had acquired them literally just weeks before the VMUG presentation, so it was great to hear the CEO’s presentation.

VMware Virsto is, in short, a virtual appliance that acts as an I/O intermediary between the VMs and your physical storage. You can think of it as software defined storage (SDS). The secret sauce is what Virsto does to the I/O to increase performance, decrease latency, mitigate snapshot performance issues, and reduce storage space.

The VMware Virsto VSA is presented RDMs from your physical storage array (NFS stores from your physical array are NOT supported), which it formats with a proprietary file system. The VSA then presents the virtualized storage via a NFS share, which the ESXi host mounts as a datastore. Your VMs then go into the NFS datastore, not a VMFS LUN. I borrowed the diagram below from a Virsto PDF  you can find here. In this case a picture is worth a thousand words, so check out the diagram.

VMware Virsto

Virsto currently supports Hyper-V and vSphere. They also have a vCenter plug-in which has hooks for View and XenDesktop, to accelerate VM provisioning. Whether VMware will kill off the Hyper-V and XenDesktop support is anyone’s guess. I would suspect those features will be retired at some point.

Mark Davis ran through the slides pretty darn fast, and there was a lot of technical content. But I did take some notes:

Software Defined Storage (SDS)

  • Virsto for vSphere and Hyper-V
  • In deployment since 2009, 75 customers (mostly on Hyper-V)
  • Recently released Virsto for VMware (primary market was Hyper-V)
  • Storage is the weak link in the virtualization stack

VMware Virsto Software Defined Storage

  • Negatives for storage
    • Inefficient disk capacity because of VMDK and LUN mismatch (wasted space)
    • Data services tied to LUNs and not VMs (many VMs on a LUN that may have different requirements)
    • Multitude of device specific management tools
    • Can’t use server based SSD and HDD without losing VM mobility
    • Hypervisor is a storage I/O blender that makes I/O requests highly random
      • Primary cause of storage I/O problems
  • Virsto for VMware
    • Pure software, VM-centric. HCL is any server that is on the VMware list. Works with any block storage (SAS, SATA, FC, iSCSI, etc.) from any vendor.
    • Demonstrated customer value: 9.7x to 56x peak performance increase
    • Max storage reduction: 60% – 94%
    • Workloads include VDI, SQL server, Oracle (no special modules for VDI/server workloads…does it all)
    • Does not require solid state drives, but works with them
    • Can make 10s of thousands of new VMs without breaking a sweat (efficient cloning)
    • Ideal use cases: VDI, database virtualization, test & dev
    • Virsto will remain a separate product for the time being and cost $ (will not be rolled into a vSphere SKU)
    • Licensed per TB of used storage space (after de-dupe)
  • Virsto Eliminates Performance Degradation
    • One traditional VMware snapshot cuts I/O performance in half
    • Virsto provided 199% to 416% improvements in latency and IOPS when it takes the snapshot (just a point-in-time pointer)
  • Customer Example
    • 16 hosts, 300TB storage, SQL Server Databases, 1.5 billion transactions a day
    • Storage hardware cost went from $4500 to $1200 (per storage unit was not mentioned)
  • Virsto Internal Architecture
    • (Today) Installs one VM per host plus a separate management server per datacenter
    • Will not talk to NFS, but can use any block storage via RDMs
    • A log LUN (vLog) (10s of GBs) per host accepts all the I/Os (makes it all sequential), which is later destaged to physical disk. Writes are acknowledged instantly
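
The vLog idea in the last bullet is a classic log-structured write path, and a toy model shows why it works. This is my own illustration of the general technique, not Virsto's actual code: scattered random writes become one sequential append to the log (cheap on spinning disk), the VM gets an immediate acknowledgment, and a background task later destages the log to final locations in order.

```python
class LogStructuredWriter:
    """Toy model of a vLog-style write path: random writes are appended
    sequentially to a small log and acknowledged immediately, then
    destaged to their real locations on the backing disk later."""

    def __init__(self):
        self.log = []          # append-only sequential log of (lba, data)
        self.disk = {}         # backing store: lba -> data

    def write(self, lba, data):
        self.log.append((lba, data))   # one sequential append, then ack
        return "ack"                   # caller sees log latency, not disk latency

    def destage(self):
        """Background task: replay the log to final locations, preserving
        write ordering (required for disk coherency)."""
        for lba, data in self.log:
            self.disk[lba] = data
        self.log.clear()

    def read(self, lba):
        # The newest copy may still be in the log; scan it newest-first.
        for logged_lba, data in reversed(self.log):
            if logged_lba == lba:
                return data
        return self.disk.get(lba)


w = LogStructuredWriter()
for lba in (907, 12, 44, 3):          # scattered 'random' VM writes
    w.write(lba, f"data@{lba}")
print(w.read(907))                    # readable even before destage
w.destage()
print(w.read(907))                    # still readable from the backing disk
```

The destage step replays the log in order, which echoes the disk-coherency point from the caching session earlier in the day: flushes must preserve write ordering or the backing storage can end up in a state that never existed.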


There are a lot of VSAs on the market which try to address the storage performance issues that virtualization causes. Atlantis Computing ILIO is aimed at VDI (now expanding to servers), plus several others. Virsto takes an interesting approach, and is designed for all workload types. Mark mentioned that now that they have access to the VMware source code they’ve had some ‘ah ha’ moments, wondering why some of the ESXi APIs were behaving the way they were. So I can only imagine that over the next several releases their performance and integration will increase.

For the time being the product will be standalone, and require additional expenditure. Mark said this was needed to ensure product development, and that the features won’t be rolled into a vSphere SKU anytime soon. He hinted pieces of it might make it to the base platform, but by in large if you want Virsto you will be paying for it. It is licensed on a per-TB (post de-dupe) basis.