vSphere 5.0 Virtual Storage Appliance

One of the new features of vSphere 5.0 is the VMware VSA, or virtual storage appliance. VSAs are nothing new; HP and FalconStor have offered VSAs for vSphere for a number of years. A VSA takes DAS (direct attached storage, e.g. SAS or SATA) and turns it into shared storage, which enables features like vMotion, Storage vMotion, HA, and FT. Of course the primary reason to do this is cost: if you are an SMB or have a remote office, you can deploy a VSA for less money than a physical iSCSI SAN.

Basically you can install the VSA on two or three servers (a one-server configuration is NOT supported), and it will pool their storage using local RAID and do network RAID across the physical servers. VMware claims 99.9% availability using vSphere HA. It also has tight integration with vCenter, so you can manage it in a single pane of glass, which is pretty cool.

The VSA is separately licensed and not included in any vSphere edition. Each instance supports up to three nodes, but vCenter will only support one VSA instance. So if you have a lot of remote offices and want to run a VMware VSA at each of them, you really won't be able to; you would need to look at alternatives like HP. List price of the VMware VSA is $5,995 per server. You can also buy it with the vSphere 5 Essentials Plus SKU for a total of $7,995 for a limited time.

It is interesting to see VMware now directly competing with partners such as HP for storage business. The P4000 VSA is very feature rich, just like the physical P4xxx arrays, and includes VAAI support. The VMware VSA v1.0 only supports NFS, so you don't get any of the VAAI 1.0 features that you do with the iSCSI-based P4000 VSA. You do get NFS Storage I/O Control, which is new to vSphere 5.0. The VMware VSA will also have a separate HCL, which will be pretty short at GA, but VMware says the list will expand rapidly as partners validate the solution.

During the Q&A of the live session it was a bit unclear how VMware calculates the usable capacity of the VSA, so stay tuned for more details once I find more information on the subject. Basically there's a combination of RAID 10 and RAID 5 going on to provide solid data protection in the case of a disk or node failure.
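To give a feel for how that kind of layering eats into raw capacity, here is a rough back-of-the-envelope sketch. The overhead factors are purely my assumptions for illustration (local RAID 10 inside each node plus a full second copy of each datastore on another node), not VMware's published formula.

```python
# Back-of-the-envelope estimate of VSA usable capacity. The overhead
# factors below are illustrative assumptions only (local RAID 10 inside
# each node, plus a full second copy of each datastore on another node);
# VMware's exact formula wasn't clear from the session.

def vsa_usable_tb(nodes, raw_tb_per_node,
                  local_overhead=0.5, network_overhead=0.5):
    """Usable TB after local RAID and network RAID overhead."""
    return nodes * raw_tb_per_node * local_overhead * network_overhead

# Example: three nodes with 8 TB of raw disk each -> about 6 TB usable
print(vsa_usable_tb(nodes=3, raw_tb_per_node=8.0))
```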

As a side note, some SKUs of the HP P4500 physical arrays come bundled with 10 VSA licenses that support up to 10TB each. And there's no vCenter limitation on the number of P4000 VSAs you can use, so that becomes an excellent branch office solution which can scale up very nicely.

You can buy a P4500 model BQ888A, which includes the 10 VSA licenses, for $43K, so in essence you pay $4.3K for each VSA and get a free 14TB hardware SAS iSCSI array. The VMware pricing reinforces that the VSA is really for Essentials Plus customers, who probably wouldn't pay $43K for a hardware iSCSI array.
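The per-VSA math is simple enough to show in a few lines of Python, using the list prices quoted above:

```python
# Effective per-VSA price of the HP bundle vs. the VMware VSA, using the
# list prices quoted above.
p4500_bundle = 43_000          # P4500 BQ888A with 10 VSA licenses
licenses_in_bundle = 10
vmware_vsa_per_server = 5_995  # VMware VSA list price per server

print("HP VSA (bundled): $%.0f each" % (p4500_bundle / licenses_in_bundle))  # $4,300
print("VMware VSA:       $%.0f each" % vmware_vsa_per_server)                # $5,995
```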

3PAR vSphere VAAI "Write Same" Test Results: 20x performance boost

So in my previous blog entry I wrote about how I upgraded a 3PAR T400 to support the new VMware vSphere 4.1 VAAI extensions. I did some quick tests just to confirm the array was responding to the three new SCSI primitives, and all was a go. But to better quantify the effects of VAAI I wanted to perform more controlled tests and share the results.

Environment
First let me give you a top-level view of the test environment. The host is an 8-core HP ProLiant blade server with a dual-port 8Gb HBA, dual 8Gb SAN switches, and two quad-port 4Gb FC host-facing cards in the 3PAR (one per controller). The ESXi server was only zoned to two ports on each of the 4Gb 3PAR cards, for a total of four paths. The ESXi 4.1 Build 320092 server was configured with native round-robin multipathing. The presented LUNs were 2TB in size, had zero detect enabled, and were formatted with VMFS 3.46 using an 8MB block size.

Testing Methodology
My testing goal was to exercise the XCOPY (SCSI opcode 0x83) and write same (SCSI opcode 0x93) primitives. To test the write same extension, I wanted to create large eager zeroed disks, which forces ESXi to write zeros across the entire VMDK. Normally this would take a lot of SAN bandwidth and time to transfer all of those zeros. Unfortunately I can't provide screenshots because the system is in production, so you will have to take my word for the results.
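For anyone who wants to reproduce this kind of timing, one way (not necessarily how I ran these particular tests) is to create the eager zeroed VMDK with vmkfstools from the ESXi shell and time it; the datastore path below is hypothetical:

```python
# Hedged sketch: time the creation of an eager zeroed thick VMDK via
# vmkfstools. Run on the ESXi host; the datastore path is hypothetical.
import subprocess
import time

def time_eager_zeroed_create(size, vmdk_path):
    """Create an eagerzeroedthick VMDK and return elapsed seconds."""
    start = time.time()
    subprocess.check_call(
        ["vmkfstools", "-c", size, "-d", "eagerzeroedthick", vmdk_path])
    return time.time() - start

elapsed = time_eager_zeroed_create("70G", "/vmfs/volumes/testds/vaai-test.vmdk")
print("Created in %.1f seconds" % elapsed)
```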

“Write Same” Without VAAI:
70GB VMDK 2 minutes 20 seconds (500MB/sec)
240GB VMDK 8 minutes 1 second (498MB/sec)
1TB VMDK 33 minutes 10 seconds (502MB/sec)

Without VAAI the ESXi 4.1 host is sending a total of 500MB/sec of data through the SAN and into the four ports on the 3PAR. Because the T400 has an active/active concurrent controller design, both controllers can own the same LUN and distribute the I/O load. In the 3PAR IMC (InForm Management Console) I monitored the host ports, and all four were equally loaded at around 125MB/sec.

This shows that round robin was functioning, and highlights the very well balanced design of the T400. But this configuration is what everyone has been using for the last 10 years... nothing exciting here, unless you want to weigh down your SAN and disk array with processing zeros. Boorrrringgg!!

Now what is interesting, and something very few arrays support, is a 'zero detect' feature where the array is smart enough on thin provisioned LUNs to not write data if the entire block is all zeros. So in the 3PAR IMC I was monitoring the back-end disk-facing ports and sure enough, there was virtually zero I/O. This means the controllers were accepting 500MB/sec of incoming zeros and writing practically nothing to disk. Pretty cool!
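To illustrate the idea (this is a conceptual sketch of zero detect in general, not 3PAR's ASIC implementation): the controller inspects each incoming block on a thin provisioned LUN and simply skips the back-end write when the block contains nothing but zeros.

```python
# Conceptual sketch of zero detect on a thin provisioned LUN; not 3PAR's
# actual ASIC logic. All-zero blocks are acknowledged but never written
# to the back-end disks.
BLOCK_SIZE = 16 * 1024          # illustrative block granularity

def front_end_write(backend_writes, lba, data):
    """Accept a host write; only pass non-zero blocks to the back end."""
    if not any(data):            # block is entirely zeros
        return                   # nothing hits the disks
    backend_writes.append((lba, data))

backend = []
front_end_write(backend, 0, bytes(BLOCK_SIZE))       # zeros: skipped
front_end_write(backend, 1, b"\x01" * BLOCK_SIZE)    # real data: written
print("back-end writes:", len(backend))              # 1 of 2 host writes
```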

“Write Same” With VAAI: 20x Improvement
70GB VMDK 7 seconds (10GB/sec) 
240GB VMDK 24 seconds (10GB/sec)
1TB VMDK 1 minute 23 seconds (12GB/sec)

Now here’s where your juices might start flowing if you are a storage and VMware geek at heart. When performing the exact same VMDK create operations on the same host using the same LUNs, performance increased 20x!! Again I monitored the host-facing ports on the 3PAR, and this time I/O was virtually zero, and thanks to zero detection within the array, there was almost zero disk I/O as well. Talk about a major performance increase. Instead of waiting over 30 minutes to create a 1TB VMDK, you can create one in less than 90 seconds and place no load on your SAN or disk array. Most other vendors are only claiming up to a 10x boost, so I was pretty shocked to see a consistent 20x increase in performance.
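If you want to check my math, the throughput and speedup figures fall straight out of the timings above:

```python
# Derive the throughput and speedup figures from the timings above
# (sizes treated as decimal GB, matching the MB/sec numbers I quoted).
MB_PER_GB = 1000

runs = {                 # size_gb, seconds without VAAI, seconds with VAAI
    "70GB":  (70,    140,   7),
    "240GB": (240,   481,  24),
    "1TB":   (1000, 1990,  83),
}

for name, (size_gb, before, after) in runs.items():
    mb = size_gb * MB_PER_GB
    print("%5s: %3.0f MB/s -> %4.1f GB/s, %2.0fx faster"
          % (name, mb / before, mb / after / MB_PER_GB, before / after))
```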

In conclusion I satisfied myself that 3PAR’s implementation of the “write same” command coupled with their ASIC based zero detection feature drastically increases creation performance of eager zeroed VMDK files. Next up will be my analysis of the XCOPY command, which has some interesting results that surprised me.

Update: I saw on the vStorage blog that they did a similar comparison on the HP P4000 G2 iSCSI array. Of course the array configuration can dramatically affect performance, so this is not an apples-to-apples comparison. But nevertheless, I think the raw data is interesting to look at. For the P4000 the VAAI performance increase was only 4.4x, not the 20x of the 3PAR. In addition, the VMDK creation throughput is drastically slower on the P4000.

Without VAAI:
T400 500MB/sec vs P4000 104MB/sec (T400 4.8x faster)

With VAAI:
T400 10GB/sec vs P4000 458MB/sec (T400 22x faster)
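The ratios quoted above come straight from those throughput numbers:

```python
# The comparison ratios, derived from the raw throughput numbers above.
t400_no_vaai, p4000_no_vaai = 500, 104    # MB/sec without VAAI
t400_vaai, p4000_vaai = 10_000, 458       # MB/sec with VAAI (10GB/sec = 10,000MB/sec)

print("P4000 VAAI gain:          %.1fx" % (p4000_vaai / p4000_no_vaai))    # ~4.4x
print("T400 vs P4000, no VAAI:   %.1fx" % (t400_no_vaai / p4000_no_vaai))  # ~4.8x
print("T400 vs P4000, with VAAI: %.0fx" % (t400_vaai / p4000_vaai))        # ~22x
```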

Don’t partition your ESX LUNs!

Under some circumstances you may feel the need to partition an ESX LUN so you can create two or more datastores. Why? Let's say you have a small branch office with a single ESX host that only uses internal storage from a RAID controller, and for some reason you need two datastores. Maybe it's to meet security or data separation requirements. Maybe you can't store your VMs and templates on the same datastore because it's against security policy.

Given these circumstances, you may be tempted to use traditional disk partitions and then format each with VMFS to get your two or more datastores. Bzzzt! Under no circumstances does VMware support a single LUN that is partitioned into two or more datastores. Why? It's pretty simple actually. ESX uses SCSI reservations to lock a LUN while it updates disk metadata. During this lock, VMs can't write to the disk. If you have two or more datastores on a single LUN, the reservations can step on each other. Expect problems!
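Here is a conceptual sketch of why this bites (an illustration of the contention, not how VMFS is actually implemented): the reservation is taken against the whole LUN, so a metadata update on either datastore stalls I/O for both.

```python
# Conceptual illustration only, not how VMFS is implemented: a SCSI
# reservation locks the whole LUN, so two datastores carved from one LUN
# serialize on the same lock and stall each other's I/O.
import threading

lun_reservation = threading.Lock()    # one reservation scope per LUN

def update_metadata(datastore):
    """Any metadata change reserves the entire LUN, not just one datastore."""
    with lun_reservation:              # VM I/O on BOTH datastores waits here
        print("reservation held while updating", datastore)

update_metadata("datastore1")          # stalls datastore2 as well
update_metadata("datastore2")          # stalls datastore1 as well
```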

Don’t try to be creative and pre-partition a LUN prior to installing ESX. Also, don’t add a new datastore without allocating all of the capacity in the hopes that you can later add another datastore in the free space. It won’t work.

Solutions? Well, if your server has multiple drives, configure independent RAID arrays. For example, if your server has four drives, create two RAID-1 mirrors. This gives you two LUNs, and thus two datastores. Alternatively, use external shared storage that supports multiple datastores, such as NFS or iSCSI.

If you have two hosts, then you could use something like the HP LeftHand iSCSI virtual storage appliance, which mirrors storage between the two nodes and then presents multiple iSCSI datastores to each ESX host. This has other uses as well, such as disaster recovery, since the LeftHand VSA supports VMware Site Recovery Manager (SRM) and remote data replication.

Cost effective VDI for remote offices

During VMworld I attended a good session on VDI (virtual desktop infrastructure) for remote users, presented by HP. To support ROBOs (remote office/branch office) there are two basic architectures.

The first is a centralized approach, where one or two datacenters host all the servers and storage required to support all VDI users. This is great for centralized administration, protecting data inside the datacenter, and minimizing costs by pooling servers and storage. However, users are then dependent on the WAN, and bandwidth, latency, and jitter can be big factors. If the ROBO has POS (point of sale) or other critical users, it's totally unacceptable to be dependent on the WAN. In addition, existing VDI technology is generally not as robust across the WAN and has limited support for 3D graphics, video, CAD/CAM, etc.

The second approach is a distributed model where each ROBO has its own self-sufficient set of servers and storage and is not dependent on the WAN. One major barrier to this deployment is the cost of shared storage if you want to use any of the HA features of ESX such as vMotion, HA, FT, DRS, DPM, etc. Even a small 10TB iSCSI array can cost $35K or more, which is not cost effective for supporting a handful of users.

To address the shared storage cost, the speaker presented a novel idea: HP's LeftHand iSCSI array can be purchased as a virtual appliance, with a list price of less than $5K. What is the LeftHand iSCSI VSA? It's a VM which runs on top of ESX and turns local non-shared disk storage into shared iSCSI storage. You can cluster the VSAs across multiple physical hosts for increased performance and redundancy. LeftHand is also compatible with ESX SRM, and supports WAN data replication, snapshots, and thin provisioning.

With this architecture, a small ROBO could have two or three ESX hosts, clustered VSAs, and fully support vMotion, HA, FT, DRS, and DPM for all VDI users. For disaster recovery, you could also implement SRM.
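Using the prices quoted in the session, the per-office savings are easy to see; the three-host cluster size below is just an example:

```python
# Rough per-office storage cost comparison using the prices quoted above.
# The three-host cluster size is a hypothetical example.
physical_iscsi_array = 35_000   # small 10TB iSCSI array
vsa_license = 5_000             # "less than $5K" list price, rounded up
hosts_per_office = 3            # example ROBO cluster size

print("Physical iSCSI array: $%d" % physical_iscsi_array)              # $35,000
print("Clustered VSAs:       $%d" % (vsa_license * hosts_per_office))  # $15,000
```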

For small remote offices, this approach could be very appealing and really cut down on the cost per user of VDI. Since the VDI servers are local to the user, you would get the best performance and richest desktop experience. Rack space and power requirements are also reduced, since no external storage array is needed.
