VMware 3PAR Best Practices Guide for InForm OS 3.1.2

HP recently released a new VMware ESX Implementation Guide that addresses changes in 3PAR InForm OS 3.1.2 and updated VMware best practices. One notable change is a new “host persona”, 11, which is now recommended for VMware ESX hosts. Persona 11 is VMware specific, and shows up as “VMware” in the CLI.

This change is very interesting, and I had to do a little digging to see why 3PAR created a whole new persona for VMware. Per the HP documentation, host persona 11 presents the LUNs as ALUA enabled, not “AA” or active/active. ALUA is how most mid-range storage arrays present LUNs, as they aren’t truly active/active concurrent like EMC VMAX, 3PAR, HDS USP, and a few others. For an excellent write-up on why true symmetric arrays like 3PAR can benefit from ALUA presentation, I found this article.
To summarize the outstanding article: ALUA on a symmetric array provides better ESX host compatibility when a host accesses LUNs on multiple arrays with different presentations (such as ALUA and AA). Some arrays get their own SATP policy in VMware, so there wouldn’t be any conflict for them anyway. But I guess HP felt the new mode offered enough value to customers that it is now the recommended default.
The 3PAR will still tell ESX that all paths are active, so it’s not like a “real” ALUA array where half the paths are standby and half are active. Don’t fret that Host Persona 11 will suddenly turn your 3PAR into a mid-range ALUA array. 🙂
This points out an excellent reason why you should always read the Implementation Guides for your operating systems when major firmware versions for your storage array are released. What is a best practice today may not be after you upgrade!
When you change the Host Persona mode you will also need to modify your SATP claim rules, if you had previously configured them for automatic round robin. Another little nugget from the HP guide: round robin is now configured with an IO operation limit of 100, versus the previous default of 1,000.
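The guide gives a claim rule command for each ESX generation. Below is a representative form of each rule, reconstructed from memory of the HP guide, so verify the exact syntax against the current Implementation Guide before running anything: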
ESXi 4.x:
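    # Claim 3PARdata VV LUNs with the ALUA SATP and round robin PSP, with the
    # IO operation limit set to 100 (ESX/ESXi 4.x syntax; confirm against the HP guide)
    esxcli nmp satp addrule -s "VMW_SATP_ALUA" -P "VMW_PSP_RR" -O "iops=100" -c "tpgs_on" -V "3PARdata" -M "VV" -e "HP 3PAR Custom ALUA Rule"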
ESXi 5.x:
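    # The same rule using the restructured esxcli namespace in ESXi 5.x
    esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -P "VMW_PSP_RR" -O "iops=100" -c "tpgs_on" -V "3PARdata" -M "VV" -e "HP 3PAR Custom ALUA Rule"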
If you are wondering what the “tpgs_on” switch does, it’s fairly cool. Basically it’s a method by which the ESX host can ask the array what the characteristics of each path are (active/optimized, active/non-optimized, unavailable, in-transition, or standby). Target Port Group Support (TPGS) allows the array to communicate path status for, yes, a group of array target ports. So in this case, the 3PAR can tell ESX that all paths are active/optimized, preserving full and concurrent usage of all paths even in ALUA mode.
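If you want to see this in action after changing the persona, you can ask NMP to show each device’s SATP claim and per-path ALUA state from the ESXi 5.x shell (the naa identifier below is just a made-up example):

    # Show which SATP and path selection policy claimed each device
    esxcli storage nmp device list
    # Show the ALUA state (active/optimized, standby, etc.) of each path to one device
    esxcli storage nmp path list -d naa.50002ac0000112d7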
One final note about 3PAR and zoning. Recently HP has changed their recommendation on zoning from “one initiator to one target per zone” (resulting in LOTS of zones), to “one initiator to multiple targets per zone” (zoning by HBA). For example, if in Fabric A your host is zoned to two host ports on the 3PAR, you can now have the host HBA port and the two 3PAR ports in one zone, instead of needing two zones, one for each port.
Depending on how many paths you have configured, this could cut your zoning requirements in half. HP says you can even include targets from multiple HP arrays in the same zone. I would not, however, mix vendors in a single zone: if your ESX server is presented storage from, say, a 3PAR and an EMC array, create separate zones for each vendor.
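To make that concrete, here’s a hypothetical Brocade example (all aliases and WWPNs are made up) that zones one host HBA port to two 3PAR target ports in Fabric A with a single zone:

    alicreate "esx01_hba0", "21:00:00:24:ff:12:34:56"
    alicreate "3par_n0p1", "20:11:00:02:ac:00:12:34"
    alicreate "3par_n1p1", "21:11:00:02:ac:00:12:34"
    zonecreate "esx01_hba0_3par", "esx01_hba0; 3par_n0p1; 3par_n1p1"
    cfgadd "fabricA_cfg", "esx01_hba0_3par"
    cfgenable "fabricA_cfg"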

HP releases Introduction to 3PAR for EVA Administrators

As you may or may not have heard, last week HP announced their new mid-range 3PAR arrays, the 7000 series, which I covered here. HP is clearly targeting existing EVA users with the aggressively priced 7000s, offering a simple, online method to migrate active LUNs to 3PAR. Since the 3PAR architecture is so different from the EVA, HP wrote an excellent whitepaper, which you can get here, to help EVA administrators understand 3PAR hardware and software. Even if you aren’t an EVA user, it’s an excellent tutorial on 3PAR and goes into a lot of technical detail about the entire 3PAR series, from the legacy F and T series to the new 7000s and 10000s. I also spotted a few details that seem to foreshadow upcoming 10000 enhancements.

For starters, I found the very interesting table below showing supported disk types. Current 3.1.1 10000 owners know that the only disk choices today are 3.5″ large form factor drives with either Fibre Channel or SATA (NL) interfaces. No SAS disk support, no 2.5″ disk support. I’ve highlighted the new 10000 options in red. After finding that table I reviewed the latest 10000 QuickSpecs (future dated to December 17th), and sure enough 2.5″ SAS disks are listed as an option!

Reading further in the T-Series QuickSpecs, I stumbled on the fact that SAS drives will also be supported there, which surprised me. It still requires InForm OS 3.1.2, however. I didn’t see new drive cages or SAS disk cards for either the T-Series or the 10000s, so I’m thinking the SFF SAS drives will be packaged in the usual four-disk LFF magazines with built-in FC bridges. It appears the new SAS-based 2.5″ and 3.5″ drive shelves are limited to the 7000 platform, and the 10000 will continue to use FC loops for disk shelf connectivity.

The handwriting is on the wall…the EVA line is winding down, and customers should start looking at migrating to a new platform. HP is making the migration to 3PAR pretty darn easy, and totally non-disruptive for VMware, Red Hat, SUSE Linux and Solaris servers. Windows hosts, though, require minor downtime for a reboot.

Enjoy the 27 page whitepaper…very worthwhile to read. Calvin Zito from HP also wrote a good blog article with more technical details and links on how the EVA to 3PAR migration actually works. You can find his write-up here.

HP refreshes Insight Control for VMware vCenter to 7.1.1

A few days ago HP released Insight Control for VMware vCenter Server version 7.1.1. This is a minor update to the 7.1 release, which came out not too long ago. The December release adds support for the 3PAR StoreServ 7000 series announced last week and for v10.0 of the LeftHand OS. You can download the update from here. If you want to know more about the brand new 3PAR StoreServ 7000s, check out my article here.

HP announces 3PAR 7200 and 7400 mini-me Disk Arrays

HP 3PAR P7400

This week in Frankfurt HP is hosting Discover 2012, where they’ve unveiled some big product announcements. The big news is the birth of the 3PAR 7200 and 7400 arrays, a new hardware line that pushes the 3PAR price point firmly into the mid-range. HP is clearly targeting customers looking at the EMC VNX 5000s, NetApp FAS3200s, Dell Compellent, and other arrays at that price point.

You now have a single architecture, and the same set of tier-1 tools, from the mid-range to the high end. Bare bones configurations start at $20K for the 2-controller model. As always, HP’s Calvin Zito, a fellow vExpert, has written a short article about the new 3PAR P7000 series, with a couple of videos to boot. You can check out his post here. The full HP announcement page is here.

But for those of you unfamiliar with 3PAR, I’ll give you a quick overview. 3PAR was a Silicon Valley startup which specialized in tier-1 block storage, with a unique way of wide striping LUNs across all disks in your array. The founders were Sun refugees. A couple of years ago HP bought them (in a bidding war with Dell) to augment their aging EVA line and give them an in-house tier-1 array.

The secret 3PAR sauce is “chunklets” of data, the building blocks of virtualized storage, with several layers of abstraction so you no longer manage RAID sets, spindles, or spare disks that sit idle. LUNs are chopped up and evenly distributed across your entire array, automatically balancing I/O across all controllers and disk ports. The other key ingredient is the proprietary 3PAR ASIC (now at Gen4), which offloads RAID calculations, zero detection, and other work from the Intel processors.

The high end model (StoreServ 10800) can scale up to 8 controllers, and all models have a true active/active concurrent controller architecture, not the asymmetric active/active of the vast majority of mid-range arrays. 3PAR competed with the EMC Symmetrix and HDS USP, which both have active/active concurrent designs. This improves I/O throughput and automates load balancing between controllers. For a geeky overview of all the 3PAR “secrets”, check out the 3PAR InForm OS concepts guide here.

At VMworld 2011 HP announced the 3PAR 10000 series, which included the V400 (four controllers) and V800 (eight controllers). However, if you needed a tier-1 disk array for under $100K, you were left looking elsewhere or stuffing your piggy bank with dough to spring for a V400 sometime in the future.

The two new models are the 3PAR 7200 (dual controllers) and the 3PAR 7400 (quad controllers). The 3PAR 7200 is a very compact 2U and holds 24 2.5″ drives, expandable to 144 disks with additional shelves. The 7400, pictured at the top of this article, more than doubles that capacity, up to 480 drives and 864TB of raw storage. You can view the full QuickSpecs PDF here. The optional CNAs support iSCSI today, and will add FCoE support with an upcoming InForm OS upgrade.

HP 3PAR 7200 and 7400 specs

Being familiar with 3PAR, I wondered what they were going to do about the service processor in the P7000 series. Basically the service processor was a 1U appliance which didn’t sit in the data path or control data flow, but was used for monitoring and remote servicing. Guess what? It is now virtualized, packaged to run on vSphere 4.1, 5.0 or 5.1. Hyper-V support is coming in 2013. That’s pretty cool! You can still buy a hardware version of the SP if you wish.

Reading through the 3PAR Software Products PDF, found here, I stumbled across some new features as well. “Persistent Ports” is a tier-1 data resiliency feature that allows non-disruptive software updates of the array without requiring host multi-pathing software. Even if you have multi-pathing software, it enables faster, seamless fail-over, since some multi-pathing software can take tens of seconds to reconfigure. Basically, controller node WWNs are shadowed on another node through the use of NPIV, and can migrate nearly instantly so the host never realizes a node went offline.

Another new feature, which probably won’t be used directly by end users, is a REST-compliant Web Services API (see the sketch below); hopefully VARs will use it for better integration with and management of 3PAR. Autonomic rebalance is also new: it enables the array to non-disruptively redistribute data when new hardware is added. And for Windows Server 2012 Hyper-V customers, the 3.1.2 release of InForm OS will support ODX offload, which is Microsoft’s version of VMware VAAI with added goodies like space reclamation.
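As a rough sketch of what talking to the new API might look like, here’s a session-based exchange using curl. The port, paths, and header name are my assumptions based on the WSAPI documentation, so treat this as illustrative only:

    # Request a session key from the array's management address
    curl -k -X POST https://3par-mgmt.example.com:8080/api/v1/credentials \
         -H "Content-Type: application/json" \
         -d '{"user":"3paradm","password":"secret"}'
    # Use the returned key to list the array's virtual volumes
    curl -k https://3par-mgmt.example.com:8080/api/v1/volumes \
         -H "X-HP3PAR-WSAPI-SessionKey: <key from previous response>"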

One good aspect of the HP acquisition of 3PAR is the incorporation of the easy setup tools EVA customers are accustomed to. HP also announced a migration path from EVA to 3PAR, which they say is even easier than migrating from one EVA to another, with online data migration and automatic thin conversion! Unlike their big brother the 10000 series, the P7000 models are self-install, with simple EVA-like initial setup. No professional services needed to get your P7000 up and running!

Ok, so what if you buy a P7200 today, then have a storage explosion and need more space or more host-facing ports? 3PAR has “Peer Motion”, which lets you non-disruptively move LUNs to a new 3PAR array, such as a P10000. You use the same management tools, and your servers won’t even know the migration took place. Pretty cool!

Finally, another new feature due out in 1H 2013 is Priority Optimization. As shown in the HP slide below, you can guarantee IOPS and bandwidth at the tenant and application level. In multi-tenant environments, or where storage is shared between environments such as production and test/dev, this could be a great feature.

3PAR StoreServ 7000 QoS

Although not featured in great detail, HP is also releasing a Windows Server 2012 NAS front-end for 3PAR, which provides CIFS and NFS shares with data de-duplication. Microsoft has totally re-written their CIFS and NFS stacks to provide multi-10Gb speeds and enhanced security features like CIFS encryption. So don’t turn up your nose at HP using Windows Server 2012; it’s a huge departure from previous versions. Plus, you avoid the potential integration hiccups of a third-party NAS device in Microsoft-centric environments that require high security.

In short, this is a great announcement and a big win for customers. EVA customers can finally get modern highly virtualized storage at a great price point, customers can non-disruptively migrate to the P10000s, and existing P10000 customers could use a P7x00 as a DR/BC target for cost effective storage.

As good as the 3PAR hardware is, HP really should invest in an extreme make-over of their System Reporter (SR) and Recovery Manager for VMware products. EMC and other vendors have a leg up in this area, and it would not take a great deal of money for HP to close the gap. Thankfully the IMC, which you use to manage and provision storage on 3PAR, is extremely friendly to use and doesn’t suffer from the same issues as the other two products. For SMB customers, I think brand-new SR and Recovery Manager releases will be key to smooth adoption and a good user experience.

NEW! HP Insight Control Storage Module for vCenter

For those of you using HP storage arrays (P2000, P4000, P6000, 3PAR, XP), HP has released a major update to their vCenter plug-in. You can now actively manage a wider range of HP arrays, you get full vSphere 5.1 web client capabilities, and expanded VASA support. You can check out a brochure about it here. For additional details and download links, you can view the HP Insight Control page here.

Features and Benefits

  •   Monitor and manage the physical/virtual relationships between the VMware Virtual Machines, ESX servers and the HP arrays:
    • Map VMware/virtual environment and provide detailed information about the HP storage being used
    • Create/Expand/Delete datastores on HP arrays
    • Create a Virtual Machine from a template on HP arrays
    • Delete an unassigned volume
  • Includes support for vSphere Storage APIs for Storage Awareness (VASA), which enables vCenter to query and capture the capabilities of HP arrays and make use of the new vSphere 5 Storage DRS and Profile-Driven Storage features
  • Available for the following arrays: 3PAR, MSA, EVA, XP, and LeftHand, including the Virtual SAN Appliance.
New Features in v7.1
  • Full Interoperability with VMware’s latest version — vSphere 5.1.
  • New Web Client Graphical User Interface introduced in vCenter with vSphere 5.1.
  • Portlet based overview and detailed table dashboards for HP Server, HP Infrastructure and HP Storage
  • Active management for 3PAR arrays, including the ability to perform storage provisioning operations such as adding a new datastore, deleting or expanding an existing datastore, creating new VMs from a template, and cloning existing VMs.
  • In context launch of HP management ecosystem tools (iLO, Onboard Administrator, Virtual Connect, HP Storage Management)
  • One installer delivers both vCenter .Net Client Graphical User Interface and the New Web Client Graphical User Interface
Overall this looks like a great release, and it’s great to see the feature enhancements. Prior versions were somewhat lacking in features, so the full support for vSphere 5.1 and active management of most HP arrays is quite welcome indeed.

Are VMFS and Datastores going the way of the dodo bird?

At VMworld 2012 San Francisco some information was publicly shared about “vVols” (VMware Virtual Volumes), an entirely new and radical concept for VM storage. At VMworld 2012 Barcelona there has been a lot more talk about it, as major vendors are now blogging about their future support for vVols and the benefits.

vVols will entirely replace the datastore concept (for both NFS and block storage) and VMFS with what I would call VM-aware storage. The VM becomes an object that the storage array understands and can apply policies to, such as snapshots, replication, and SLAs. You manage capacity through capacity pools, which can span storage chassis or even datacenters.

No more deciding how big or how many datastores you need to create. No more storage vMotioning a VM to another datastore because you are running low on space. No more wondering how many VMs you can place on a VMFS datastore before you run into contention issues. No more datastore clusters. No more VMFS!

This has the potential to really change how you view and consume storage in a VMware environment. It will also impact how you do backups and disaster recovery, and how you manage your storage on a day-to-day basis. In fact, storage should take less management. It also combines the benefits of NFS and block storage into a single way of communicating with the array.

For additional details from various vendors and VMware, check out these links:

HP vVol Demo with 3PAR
VMware Blog/Video on vVols
EMC VPLEX and VMAX vVol Demo
IBM XIV vVol Discussion
Duncan Epping on vVols
Erik Zandboer on vVols
Julian Wood on vVols
Stephen Foskett on vVols
VMworld 2011 EMC/vVol Preview Demo
LogicalBlock on vVols

When will VMware release this technology? Who knows, but my bet is on the next major release of vSphere, probably due out at the end of 2013 if they stick to their yearly releases. vSphere 6.0?

HP 3PAR announces next generation array…P10000

Last week HP announced their new 3PAR P10000 array, their next-generation virtualization array. It features a number of enhancements over the previous generation, which was released when 3PAR was still independent. Today at VMworld HP unveiled the P10000, and former 3PAR CEO David Scott was present. David Scott is now in charge of the HP StorageWorks division, if you didn’t know.

Some of the enhancements of the V-Class include:

  • New 4th Generation 3PAR ASIC which performs much of the controller magic. Each controller node has two ASICs, tripling the bandwidth of the previous T-Class controllers.
  • Support for new host-facing connectivity including FCoE, 10Gb iSCSI, and 8Gb FC. No more PCI-X slots!
  • HP Peer Motion to move LUNs non-disruptively between arrays.
  • Redesigned cabinet where all cabling now comes out of the rear of the cabinet for better cable management.
  • Greatly increased transactional and sequential throughput.
  • Future support for SAS connectivity.
  • Major new software version, 3.1.1, which will support the new vSphere 5.0 VAAI and VASA extensions.

After the unveiling I was lucky enough to meet David Scott and talk with him for a few minutes as a happy 3PAR customer. I look forward to the 3.1.1 release on our T400, which will play very nicely with vSphere 5.0 and provide even better storage support.

3PAR vSphere VAAI "XCOPY" Test Results: More efficient but not faster

In my previous blog I discussed how the vSphere 4.1 VAAI ‘write same’ implementation on a 3PAR T400 showed a dramatic 20x increase in performance, creating an eager zeroed thick VMDK at 10GB/sec (yes, GigaBYTES a second). The other major SCSI primitive that VAAI in 4.1 leverages is XCOPY (SCSI opcode 0x83), which basically offloads the copy process of a VMDK to the array, so the data does not need to traverse your SAN or bog down your ESX host.

In this test I used the same configuration described in my previous blog entry. I decided to perform a storage vMotion of a large VM. This VM had three VMDKs attached, to simulate real world usage. The first VMDK was 60GB and had about 5GB of operating system data on it. The other two VMDKs were eager zeroed thick disks, 70GB and 240GB, with no user data written to them, for a total VMDK size of 370GB. I initiated the storage vMotion from vCenter 4.1 to start the copy process.

“XCOPY” without VAAI:
Host CPU Utilization: ~3916 MHz
Read/write latency: 3-4ms
3PAR host facing port aggregate throughput: 616MB/sec
3PAR back-end disk port aggregate throughput: ~0MB/sec
Time to complete: 20 minutes

These results are very reasonable, and quite expected. Since VAAI was not used, the ESXi host had to read 370GB of data, then turn right around and write 370GB of data back to disk. So in reality over 740GB of data traversed the SAN during the 20 minute storage vMotion; 740GB over 1,200 seconds works out to roughly 630MB/sec, right in line with the 616MB/sec observed on the host facing ports. Since the VMDKs contained only 1% written data, back-end disk throughput was nearly zero thanks to the ASIC zero detection feature. If the VMDKs were fully populated, the back-end ports would be going crazy and the copy would be slower, since all I/Os would hit physical disks.

“XCOPY” with VAAI:
Host CPU Utilization: ~3674 MHz
Read/write latency: 3-4ms
3PAR host facing port aggregate throughput: ~0MB/sec
3PAR back-end disk port aggregate throughput: ~0MB/sec
Time to complete: 20 minutes

Now I’m pretty surprised at these results, and not in a positive fashion. First, it’s good to see nearly zero disk I/O on the host facing ports and the back-end ports. This confirms VAAI commands were being used, and that the VMDKs were nearly all zeros. However, what has me very puzzled is that the copy process took exactly the same amount of time to complete, and used nearly the same amount of host CPU. I repeated the tests several times, and each time I got the exact same result…20 minutes.

Since there’s virtually no physical disk I/O going on here, I would expect a dramatic increase in storage vMotion performance. Because these results are so surprising and unexpected, I contacted 3PAR to see if engineering can shed some light on the situation. Other vendors claim a 25% increase in storage vMotion performance when using VAAI. Clearly 0% is less than 25%. When I get clarification on what’s going on here, I will be sure to follow up.

Update: 3PAR got back to me about my observations, and confirmed what I’m seeing is correct. With firmware 2.3.1 MU2, XCOPY doesn’t reduce the observed wall clock time to “copy” empty space in a thinly provisioned volume. But as I noted, XCOPY does leverage the zero detection feature of their ASIC, so there’s very little back-end I/O occurring for non-allocated chunklets.

So yes, the current VAAI implementation reduces the I/O strain on the SAN and disk array, but doesn’t reduce the observed time to move the empty chunklets. In my environment the I/O loads are pretty darn low, so I’d prefer the best of both worlds…efficient copies and reduced observed copy times. If 3PAR could make the same dramatic performance gains for XCOPY that they did for the ‘write same’ command, that would really be a big win for customers.

3PAR vSphere VAAI "Write Same" Test Results: 20x performance boost

So in my previous blog entry I wrote about how I upgraded a 3PAR T400 to support the new VMware vSphere 4.1 VAAI extensions. I did some quick tests just to confirm the array was responding to the three new SCSI primitives, and all was a go. But to better quantify the effects of VAAI I wanted to perform more controlled tests and share the results.

Environment
First let me give you a top level view of the test environment. The host is an 8 core HP ProLiant blade server with a dual port 8Gb HBA, dual 8Gb SAN switches, and two quad port 4Gb FC host facing cards in the 3PAR (one per controller). The ESXi server was zoned to only two ports on each of the 4Gb 3PAR cards, for a total of four paths. The ESXi 4.1 Build 320092 server was configured with native round robin multi-pathing. The presented LUNs were 2TB in size, zero detect enabled, and formatted with VMFS 3.46 using an 8MB block size.

Testing Methodology
My testing goal was to exercise the XCOPY (SCSI opcode 0x83) and write same (SCSI opcode 0x93) primitives. To test the write same extension, I wanted to create large eager zeroed disks, which forces ESXi to write zeros across the entire VMDK. Normally this takes a lot of SAN bandwidth and time to transfer all of those zeros. Unfortunately I can’t provide screen shots because the system is in production, so you will have to take my word for the results.
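For reference, creating a disk like this from the ESXi shell is a one liner with vmkfstools; here’s the general form (the datastore path and file name are hypothetical):

    # Create a 70GB eager zeroed thick VMDK, forcing zeros across the whole disk
    vmkfstools -c 70g -d eagerzeroedthick /vmfs/volumes/datastore1/testvm/test70.vmdk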

“Write Same” Without VAAI:
70GB VMDK 2 minutes 20 seconds (500MB/sec)
240GB VMDK 8 minutes 1 second (498MB/sec)
1TB VMDK 33 minutes 10 seconds (502MB/sec)

Without VAAI the ESXi 4.1 host sends a total of 500MB/sec of data through the SAN and into the four ports on the 3PAR. Because the T400 is an active/active concurrent controller design, both controllers can own the same LUN and distribute the I/O load. In the 3PAR IMC (InForm Management Console) I monitored the host ports, and all four were equally loaded at around 125MB/sec.

This shows that round-robin was functioning, and highlights the very well balanced design of the T400. But this configuration is what everyone has been using for the last 10 years…nothing exciting here, unless you want to weigh down your SAN and disk array with processing zeros. Boorrrringgg!!

Now what is interesting, and something very few arrays support, is a ‘zero detect’ feature, where the array is smart enough on thin provisioned LUNs not to write data if the entire block is all zeros. So in the 3PAR IMC I was monitoring the back-end disk facing ports and sure enough, virtually zero I/O. This means the controllers were accepting 500MB/sec of incoming zeros and writing practically nothing to disk. Pretty cool!

“Write Same” With VAAI: 20x Improvement
70GB VMDK 7 seconds (10GB/sec) 
240GB VMDK 24 seconds (10GB/sec)
1TB VMDK 1 minute 23 seconds (12GB/sec)

Now here’s where your juices might start flowing if you are a storage and VMware geek at heart. Performing the exact same VMDK create operations on the same host using the same LUNs, performance increased 20x!! Again I monitored the host facing ports on the 3PAR, and this time I/O was virtually zero, and thanks to zero detection within the array, there was almost zero disk I/O. Talk about a major performance increase. Instead of waiting over 30 minutes to create a 1TB VMDK, you can create one in less than 90 seconds and place no load on your SAN or disk array. Most other vendors claim only up to a 10x boost, so I was pretty shocked to see a consistent 20x increase in performance.

In conclusion, I satisfied myself that 3PAR’s implementation of the “write same” command, coupled with their ASIC based zero detection feature, drastically increases creation performance of eager zeroed VMDK files. Next up will be my analysis of the XCOPY command, which produced some interesting results that surprised me.

Update: I saw on the vStorage blog that they did a similar comparison on the HP P4000 G2 iSCSI array. Of course the array configuration can dramatically affect performance, so this is not an apples to apples comparison. But nevertheless, I think the raw data is interesting to look at. For the P4000 the VAAI performance increase was only 4.4x, not the 20x of the 3PAR. In addition, VMDK creation throughput is drastically slower on the P4000.

Without VAAI:
T400 500MB/sec vs P4000 104MB/sec (T400 4.8x faster)

With VAAI:
T400 10GB/sec vs P4000 458MB/sec (T400 22x faster)

3PAR VAAI Upgrade is a cakewalk

For those of you using vSphere 4.1, one of the cool new features is VAAI support. What is VAAI? VAAI is a deep level of integration between select storage arrays and the ESX kernel. The three VAAI functions released in 4.1 are:

  • Atomic Test & Set (ATS), which is used during creation of files on the VMFS volume
  • Clone Blocks/Full Copy/XCOPY, which is used to copy data
  • Zero Blocks/Write Same, which is used to zero-out disk regions

Arrays need firmware updates to support these enhanced SCSI commands, and since vSphere 4.1 was released storage vendors have been rolling them out. Today I upgraded our 3PAR T400 to the 2.3.1 MU2 code base, which has VAAI support. Like I blogged about back in February, 3PAR upgrades are fully non-disruptive, fairly straightforward, and not so complicated that they require professional services.

I found a script which makes verifying, enabling, and disabling the features a simple one liner, and it can be found here. For a little trivia, there was supposed to be a fourth VAAI SCSI primitive, ‘thin provision stun’. I bet a Star Trek fan came up with that feature name. Basically this feature lets the array tell a VM that the LUN has run out of physical disk space, and ESX ‘stuns’ the VM so it doesn’t crash or corrupt data. But as the rumor goes, there was some miscommunication between VMware and various partners, so not all partners implemented or certified the stun primitive. To put everyone on a level playing field the fourth primitive was dropped. I would expect it to make an appearance in a future release.
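Under the covers these features map to three host advanced settings, which you can also read or toggle by hand with esxcfg-advcfg (1 = enabled, 0 = disabled):

    # Check whether each VAAI primitive is enabled on the host
    esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove   # Full Copy / XCOPY
    esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit   # Write Same / Block Zero
    esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking    # ATS
    # Temporarily disable XCOPY offload for A/B testing, then set back to 1 to re-enable
    esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove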

Due to time constraints and the approaching weekend, I didn’t have time to run any vSphere tests and look at SCSI stats to verify the VAAI commands are working. That will come over the next week or two, and I plan to blog on the results.

For those of you looking at buying new storage arrays and using them with VMware, one of the basic checklist features you should use as screening criteria is VAAI support. Finally, NetApp has a great PDF that goes into good details on how VAAI works and the use cases. While it contains some NetApp specific information, the majority of the document is a good read for anyone interested in VAAI.