Archives for July 2011

Impending VMware vSphere 5.0 license changes?

Update: VMware made an official announcement on August 3, 2011 and I’ve covered it in detail here.

Since the announcement of vSphere 5.0 and its new licensing terms, much of the focus on the launch has sadly not been on the great new features but on the licensing changes. In fact, many loyal customers in this thread on the VMware forums are threatening to look at, or are actually evaluating, alternatives such as XenServer, Hyper-V, or KVM.

I have it on good authority that VMware is taking these complaints seriously and next week will announce some changes to address the situation.

The rumored changes may include:

1. Doubling the vRAM entitlements for Enterprise and Enterprise Plus editions to 64GB and 96GB respectively. For example, a dual socket Enterprise Plus server would add 192GB of vRAM to your pool.

2. Essentials and Essentials Plus vRAM entitlement increased from 24GB to 32GB.

3. Capping the vRAM amount that counts against your licensed pool at 96GB per VM, even if the VM is allocated more, such as 1TB. This drops the cost of a 1TB VM from $75K to about $3.4K for Enterprise Plus pools.

4. Licensing high water marks will be captured on a yearly basis rather than a monthly basis.
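If the rumors hold, the accounting works out as in the sketch below. A minimal Python sketch using only the rumored numbers from the list above; none of this is official:

```python
# Sketch of the rumored vRAM accounting. All figures (64GB/96GB per-socket
# entitlements, 96GB per-VM cap) are rumors, not announced VMware terms.

VRAM_PER_SOCKET_GB = {"enterprise": 64, "enterprise_plus": 96}  # rumored
PER_VM_CAP_GB = 96  # rumored cap on what one VM counts against the pool

def pool_entitlement(edition, sockets_per_host):
    """Total vRAM pool in GB from a list of socket counts, one per host."""
    return sum(s * VRAM_PER_SOCKET_GB[edition] for s in sockets_per_host)

def billable_vram(vm_allocations_gb):
    """vRAM counted against the pool: each VM capped at 96GB."""
    return sum(min(alloc, PER_VM_CAP_GB) for alloc in vm_allocations_gb)

# A dual-socket Enterprise Plus server adds 192GB to the pool:
print(pool_entitlement("enterprise_plus", [2]))  # 192

# A 1TB VM would count as only 96GB against the pool under the rumored cap:
print(billable_vram([1024]))  # 96
```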

Customers that have an ELA (enterprise licensing agreement) may be able to negotiate better terms, and should certainly try to do so with their rep if they feel the new licensing scheme will cost them additional dollars. Of course the details may change before the announcement, so take this information with a pinch of salt until something official comes out, probably next week.

Will this make everyone happy? No, but it is probably a good compromise and shows VMware does take feedback seriously. When the official changes are announced, the community scripts that have been floating around to estimate the licensing impact will need to be updated.

vSphere 5.0 VDI Licensing Changes

Among the various license changes in vSphere 5.0, the way you can license VDI (virtual desktop infrastructure) has also changed, or rather, now gives you more options. Specifically, if you are a XenDesktop or non-View customer, keep reading. If you use VMware View, then it’s pretty much status quo from my understanding. You should note that View licenses are excluded from the vRAM entitlement issues.

I touched on the new VDI licensing option in a previous post here, but I think a dedicated post to VDI with more clarity is warranted. You can also check out that link for more general vSphere 5.0 licensing changes and the vRAM entitlement issues. The official VMware End User Computing web page with more details can be seen here.

vSphere 4.x:

  • Utilize a standard vSphere per-socket license for an ESXi host, and you could mix and match VDI and server workloads as you wished. You could use any ESXi edition that matched the feature set you were looking for.
  • Purchase a VMware View Enterprise license for $150/concurrent user. This eliminates the per-socket ESXi host license, but the VDI hosts can only run the client OS VMs or server VMs that directly support VDI, such as the brokers. No non-VDI VMs are allowed on the VDI hosts. Concurrent user means a user logged into the client VM via the broker, not concurrently powered on client VMs.

You still need a XenDesktop license in either case since you are only using the View license to legally operate your hypervisor. So VMware isn’t cutting you any breaks, and could be viewed as unfairly taxing non-View customers if they wanted a concurrent user model. Likely not the route you would want to go, but that would depend on your usage model.

vSphere 5.0:

  • Utilize standard vSphere 5.0 per-socket/vRAM license for ESXi hosts and you can mix and match VDI and server workloads as you wish. You can use any ESXi edition that matches the feature set you are looking for. You are now subject to the vRAM entitlement limitations. In VDI where you likely have scale-up hosts with lots of memory, you may need to purchase more socket licenses. Do very careful calculations before you jump on the v5.0 bandwagon.

  • Purchase a VMware View Enterprise license for $150/concurrent user. This eliminates the per-socket ESXi host license, but the VDI hosts can only run the client OS VMs or server VMs that directly support VDI, such as the brokers. No non-VDI VMs are allowed on the VDI hosts. You are excluded from the vRAM entitlement limitations on the ESXi hosts that support VDI. This now may become more palatable given the unlimited vRAM feature.

  • (New) Buy vSphere 5.0 Desktop packs, which are sold in bundles of 100 VMs, for $6,500 each, or $65 per VM. This entitles you to power on a desktop-class OS on an ESXi host without any vRAM limitations. This is per powered-on client VM, regardless of whether anyone is using it. You are strictly precluded from running any server OS VMs on these hosts, including those related to VDI that are allowed under the concurrent license model.

  • (Future??) VMware may release a non-View concurrent user license that costs less than $150 and gives you the same advantage of the View concurrent model, without the rights to use View. No information on possible price, or release date, if this happens at all.
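The economics of the last two options are easy to sketch out. A quick Python comparison using the prices quoted above; the 1,800-desktop scenario and the 60% concurrency figure are purely illustrative assumptions, and the per-socket option is left out since it depends on your edition and vRAM math:

```python
# Compare the concurrent-user model to the new Desktop packs.
# Prices from the post: $150/concurrent user, $6,500 per 100-VM pack.

VIEW_CONCURRENT_PRICE = 150   # $ per concurrent user (View Enterprise)
DESKTOP_PACK_PRICE = 6500     # $ per bundle of 100 powered-on desktop VMs

def concurrent_cost(peak_concurrent_users):
    return peak_concurrent_users * VIEW_CONCURRENT_PRICE

def desktop_pack_cost(powered_on_vms):
    packs = -(-powered_on_vms // 100)  # packs only come in bundles of 100
    return packs * DESKTOP_PACK_PRICE

# 1,800 powered-on desktops with an assumed 60% peak concurrency:
print(concurrent_cost(1080))    # 162000
print(desktop_pack_cost(1800))  # 117000
```

With high concurrency the Desktop packs win; with low concurrency the $150/concurrent-user model can come out ahead, which is why your usage model matters.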

If you elect to use the concurrent model for either version and don’t use View, you need to manually keep track of concurrent user usage through the broker of your choice. Should the VMware license police come around, you should have some documentation to show you are in compliance and not cheating. The licensing guy I talked to didn’t know how the new per-VM reporting worked, or whether vCenter would refuse to power on a client VM that violated the licensing maximum or just nag you.

Also note that the concurrent user license and the new vSphere Desktop license entitle you to the functionality of ESXi Enterprise Plus for the hypervisor. So that’s a good deal, which lets you take advantage of all hypervisor features for your VDI.

Depending on your consolidation ratios, usage patterns, number of users, etc., you have a variety of licensing options. There’s no one-size-fits-all solution here, but for new vSphere 5.0 customers the Desktop SKU looks like the best option, unless you can live with ESXi standard edition. Also remember that you can’t convert an existing vSphere 4.x license to a 5.0 vSphere for Desktop SKU, since that SKU doesn’t exist in 4.x. So if you have an existing VDI deployment on vSphere 4.x, you have some tough decisions to make given the new vRAM considerations for the per-socket license.

Brian Madden did a great article with a very detailed analysis of the new vSphere Desktop SKU and desktops with various memory configurations. You can check out his article here. Based on his spreadsheet I did the same calculations as in my previous licensing blog (1,800 users, 23 dual-socket hosts) and got the same results, just in a prettier format:

What’s clear from these numbers is that with vSphere 5.0 your minimum VDI cost jumped from $46K to $112K (a ~145% increase) for 1800 users, running the exact same edition of ESXi. For $5K more you can utilize the enterprise plus edition of ESXi (via the desktop SKU), which has vastly more features, although not all are really needed for VDI.
On the flip side, if you were going to run VDI on enterprise plus with v4.x (maybe you wanted to use the Nexus 1000v) and had not yet bought those licenses, you can now run those cheaper on v5.0 IF you buy the new 5.0 desktop SKU. Unfortunately if you already have VDI deployed on v4.x, then you are left with an increased bill, assuming no excess capacity elsewhere in your environment to offset the usage.

vSphere 5.0 Licensing Estimation Scripts

Wow, the changes VMware made to vSphere 5.0 have really stirred up a lot of passion on the subject from angry customers. Is all of it justified? Probably not all, but certainly some customers will be required to purchase additional licenses for their existing environment when they upgrade to 5.0. But to help take some of the emotion out of the discussion, you first need to see what YOUR environment looks like. Since VMware hasn’t yet released its vRAM reporting tool, a few VMware community members wrote some scripts to help people out. You will either be pleased by the results, or be stuck with a bill come upgrade time.

You can find a compilation of the scripts here. Users are posting results from their environments, so continue to watch the thread even after you download the scripts. There is also another license validator script that you can read about here.
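For the curious, the core of what those scripts compute is simple: sum the allocated memory of powered-on VMs and compare it against the pool entitlement for your edition. A minimal Python sketch with a hypothetical inventory (the real scripts pull this data from vCenter via PowerCLI):

```python
# Sketch of the vRAM compliance math the community scripts perform.
# Entitlements per socket are the vSphere 5.0 figures from the post.

VRAM_PER_SOCKET_GB = {"standard": 24, "enterprise": 32, "enterprise_plus": 48}

def vram_usage_gb(vms):
    """Sum allocated memory of powered-on VMs only; powered-off VMs are free."""
    return sum(vm["memory_gb"] for vm in vms if vm["powered_on"])

def pool_gb(edition, total_sockets):
    return total_sockets * VRAM_PER_SOCKET_GB[edition]

# Hypothetical inventory; real scripts enumerate this from vCenter.
vms = [
    {"name": "db01",   "memory_gb": 16, "powered_on": True},
    {"name": "web01",  "memory_gb": 4,  "powered_on": True},
    {"name": "test01", "memory_gb": 8,  "powered_on": False},  # doesn't count
]
used, pool = vram_usage_gb(vms), pool_gb("enterprise", 2)
print(f"{used}GB used of {pool}GB pool")  # 20GB used of 64GB pool
```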

SQL 2008 R2 SP1 hits the streets

In the midst of all the uproar about the vSphere 5.0 licensing changes, I missed the fact that Microsoft released SQL 2008 R2 Service Pack 1 yesterday. You can download it here. The master list of bug fixes can be seen here. The new features are listed below. For the full release notes, click here.

  • Dynamic Management Views for increased supportability. The sys.dm_exec_query_stats DMV is extended with additional columns to improve supportability when troubleshooting long-running queries. New DMVs and XEvents on select performance counters are introduced to monitor OS configurations and resource conditions related to the SQL Server instance.

  • ForceSeek for improved query performance. The syntax for the FORCESEEK index hint has been modified to take optional parameters, allowing you to control the access method on the index even further. The old-style FORCESEEK syntax remains unmodified and works as before. In addition, a new query hint, FORCESCAN, has been added. It complements the FORCESEEK hint by allowing you to specify ‘scan’ as the access method for the index. No changes to applications are necessary if you do not plan to use this new functionality.
  • Data-tier Application Component Framework (DAC Fx) for improved database upgrades. The new Data-tier Application (DAC) Framework v1.1 and DAC upgrade wizard enable the new in-place upgrade service for database schema management. The new in-place upgrade service will upgrade the schema for an existing database in SQL Azure and the versions of SQL Server supported by DAC. A DAC is an entity that contains all of the database objects and instance objects used by an application. A DAC provides a single unit for authoring, deploying, and managing the data-tier objects. For more information, see Designing and Implementing Data-tier Applications.
  • Disk space control in PowerPivot for SharePoint. This update introduces two new configuration settings that let you determine how long cached data stays in the system. In the new Disk Cache section on the PowerPivot configuration page, you can specify how long an inactive database remains in memory before it is unloaded. You can also limit how long a cached file is kept on disk before it is deleted.
  • Support for 512e Drives. SQL Server now correctly detects and supports hard drives with the new 512e format. These drives report 512 byte logical sector sizes, but they are formatted internally using 4KB sectors. When SQL Server 2008 R2 SP1 is installed on Windows Server 2008 R2 or higher, we will correctly detect these drives and adjust automatically.

vSphere 5.0 Storage Improvements

If you’re a regular follower of my blog, you will probably notice I’m a bit of a storage geek. VAAI, FCoE, WWNs, WWPNs, VMFS, VASA and iSCSI are all music to my ears. So what’s new in vSphere 5.0 storage technologies? A LOT. That team must have been working overtime to come up with all these great new features. Here’s a list of the high-level new features, gleaned from a great VMware whitepaper that I link to at the end of this post.

VMFS 5.0

  • 64TB LUN support (with NO extents), great for arrays that support large LUNs like 3PAR.
  • Partition table automatically migrated from MBR to GPT, non-disruptively when grown above 2TB.
  • Unified block size of 1MB. No more wondering what block size to use. Note that upgraded volumes retain their previous block size, so you may want to reformat old LUNs that don’t use 1MB blocks. I use 8MB blocks, so I’ll need to reformat all my volumes.
  • Non-disruptive upgrade from VMFS-3 to VMFS-5
  • Up to 30,000 8K sub-blocks for files such as VMX and logs
  • New partitions will be aligned on sector 2048
  • Passthru RDMs can be expanded to more than 60TB
  • Non-passthru RDMs are still limited to 2TB – 512 bytes

There are some legacy hold-overs if you upgrade a VMFS-3 volume to VMFS 5.0, so if at all possible I would create fresh VMFS-5 volumes so you get all of the benefits and optimizations. This can be done non-disruptively with storage vMotion, of course. VMDK files still have a maximum size of 2TB minus 512 bytes. And you are still limited to 256 LUNs per ESXi 5.0 host.

Storage DRS

  • Provides smart placement of VMs based on I/O and space capacity.
  • A new concept of a datastore cluster in vCenter aggregates datastores into a single unit of consumption for the administrator.
  • Storage DRS makes initial placement recommendations and ongoing balancing recommendations, just like it does for compute and memory resources.
  • You can configure storage DRS thresholds for utilized space, I/O latency and I/O imbalances.
  • I/O loads are evaluated every 8 hours by default.
  • You can put a datastore in maintenance mode, which evacuates all VMs from that datastore to the remaining datastores in the datastore cluster.
  • Storage DRS works on VMFS and NFS datastores, but they must be in separate clusters.
  • Affinity rules can be created for VMDK affinity, VMDK anti-affinity and VM anti-affinity.

Profile-Driven Storage

  • Allows you to match storage SLA requirements of VMs to the right datastore, based on discovered properties of the storage array LUNs via Storage APIs.
  • You define storage tiers that can be requested as part of a VM profile. So during the VM provisioning process you are only presented with storage options that match the defined profile requirements.
  • Supports NFS, iSCSI, and FC
  • You can tag storage with a description (e.g., RAID-5 SAS, remote replication)
  • Use storage characteristics or admin defined descriptions to setup VM placement rules
  • Compliance checking

Fibre Channel over Ethernet Software Initiator

  • Requires a network adapter that supports FCoE offload (currently only the Intel X520)
  • Otherwise very similar to the iSCSI software initiator in concept

iSCSI Initiator Enhancements

  • Properly configuring iSCSI in vSphere 4.0 was not as simple as a few clicks in the GUI. You had to resort to command line configuration to properly bind the NICs and use multi-pathing. No more! Full GUI configuration of iSCSI network parameters and bindings.

Storage I/O Control

  • Extended to NFS datastores (VMFS only in 4.x).
  • Complete coverage of all datastore types, for high assurance that VMs won’t hog storage resources

VAAI “v2”

  • Thin provisioning dead space reclamation. Informs the array when a file is deleted or moved, so the array can free the associated blocks. Complements storage DRS and storage vMotion.
  • Thin provisioning out-of-space monitors space usage to alarm if physical disk space is becoming low. A VM can be stunned if physical disk space runs out, and migrated to another datastore, then resume computing without a VM failure. Note: This was supposed to be in vSphere 4.1 but was ditched because not all array vendors implemented it.
  • Full file clone for NFS, enabling the NAS device to perform the disk copy internally.
  • Enables the creation of thick disk on NFS datastores. Previously they were always thin.
  • No more VAAI vendor specific plug-ins are needed since VMware enhanced the T10 standards support.
  • More use of the vSphere 4.1 VAAI “ATS” (atomic test and set) command throughout the VMFS filesystem for improved performance.

I’m excited about the dead space reclamation feature; however, there’s no mention of a tie-in with the guest operating system. So if Windows deletes a 100GB file, the VMFS datastore doesn’t know it, and the storage array won’t know it either, so the blocks remain allocated. You still need to use a program like sdelete to zero out the blocks so the array knows they are no longer needed. You can check out even more geeky details at Chad Sakac’s blog here.

Hopefully VMware can work with Microsoft and other OS vendors to add that final missing piece of the puzzle for complete end-to-end thin disk awareness. Basically the SATA “TRIM” command for the enterprise. Maybe Windows Server 2012 will have such a feature that VMware can leverage.

Storage vMotion

  • Supports the migration of VMs with snapshots and linked clones.
  • A new ‘mirror mode’, which enables a one-pass block copy of the VM. Writes that occur during the migration are mirrored to both datastores before being acknowledged to the OS.

If you want to read more in-depth explanations of these new features, you can read the excellent “What’s New in VMware vSphere 5.0 – Storage” by Duncan Epping here.

vSphere 5.0 Virtual Storage Appliance

One of the new features of vSphere 5.0 is a VMware VSA, or virtual storage appliance. VSAs are nothing new, as HP and FalconStor have offered VSAs for vSphere for a number of years. VSAs work by taking DAS (direct-attached storage, e.g. SAS or SATA) and turning it into shared storage that enables features like vMotion, Storage vMotion, HA, and FT. Of course the primary reason to do this is cost. If you are an SMB or have a remote office, you can deploy a VSA for less money than a physical iSCSI SAN.

Basically you can install the VSA on two to three servers (a one-server configuration is NOT supported), and it will pool their storage using local RAID and do network RAID across the physical servers. VMware claims 99.9% availability using vSphere HA. It also has tight integration with vCenter, so you can manage it from a single pane of glass, which is pretty cool.

The VSA is separately licensed, and not included in any vSphere edition. Each instance supports up to three nodes. However, vCenter will only support one VSA instance. So if you have a lot of remote offices and want to use VMware VSAs in them, you really won’t be able to do that. You would need to look at alternatives like HP. List price of the VMware VSA is $5,995 per server. You can also buy it with the vSphere 5 Essentials Plus SKU for a total of $7,995 for a limited time.

It is interesting to see VMware now directly competing with partners such as HP for storage business. The P4000 VSA is very feature rich, just like the physical P4xxx arrays, and includes VAAI support. The VMware VSA v1.0 only supports NFS, so you don’t get any of the VAAI 1.0 features that you do with the iSCSI-based P4000 VSA. You do get NFS storage I/O control, which is new to vSphere 5.0. The VMware VSA will also have a separate HCL, which will be pretty short at GA, but VMware says the list will rapidly expand as partners validate the solution.

During the Q&A of the live session it was a bit unclear how VMware calculates usable capacity of the VSA. So stay tuned for more details, once I find more information on the subject. Basically there’s a combination of RAID 10 and RAID 5 going on to provide solid data protection in the case of disk or node failure.

As a side note, some SKUs of the HP P4500 physical arrays come bundled with 10 VSA licenses that support up to 10TB each. And there’s no vCenter limitation of the number of P4000 VSAs you can use, so that becomes an excellent branch office solution which can scale up very nicely.

You can buy a P4500 model BQ888A, which includes the 10 VSA licenses, for $43K, so in essence you pay $4.3K for each VSA and get a free 14TB hardware SAS iSCSI array. The VMware pricing reinforces that the VSA is really for Essentials Plus customers, who probably wouldn’t pay $43K for a hardware iSCSI array.

vSphere 5.0 Licensing Changes

One of the major announcements today that has gotten customers worked up a little bit is the change in the licensing model with vSphere 5.0. In previous versions, licenses were tied to the number of CPU sockets, and the various editions had limitations on maximum physical memory supported and the number of processor cores. These limitations were spread over six major SKUs. With v5.0 that’s history, and you now need to keep track of what they call vRAM entitlements and CPU sockets. Oh yeah, one SKU got dropped: the Advanced edition.

Lifted are the limitations on CPU cores and maximum physical memory. Have a 256GB server with 32 cores? No problem, you can use Standard edition. Nifty? Well, remember the new vRAM licensing concept. This gets a bit confusing at first, so stick with me.

vRAM is the amount of consumed VM memory across the entire environment (within a given SKU, such as Enterprise edition). This is not the amount of physical RAM, mind you, but the total amount of RAM allocated to all running VMs. For example, if you have ten 4GB VMs, that would be 40GB of vRAM. Transparent page sharing, ballooning, and other memory conservation features will not help you here.

Now the kicker is that each licensing SKU, such as Enterprise edition, comes with a fixed amount of vRAM entitlement. For example, Enterprise edition has a 32GB per socket entitlement. If you have four dual-socket servers, then you get 256GB of vRAM entitlements (4 x 2 x 32GB). This means that across the four physical servers, all of your powered-on VMs cannot use more than 256GB of RAM. VMware provided the slide below that compares vSphere 4.1 and 5.0 licensing.
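The arithmetic in that example is trivial to verify. A quick Python sketch (the entitlement figure is from the text; the per-host usage numbers are made up for illustration):

```python
# Pooled vRAM entitlement math. Enterprise edition: 32GB of vRAM per
# licensed socket, pooled across all hosts under the same SKU.

def vram_pool_gb(per_socket_gb, sockets_per_host):
    return per_socket_gb * sum(sockets_per_host)

# Four dual-socket servers licensed with Enterprise (32GB/socket):
pool = vram_pool_gb(32, [2, 2, 2, 2])
print(pool)  # 256

# Because vRAM is pooled, one host may exceed its own share as long as the
# total powered-on vRAM across the pool stays within the entitlement.
per_host_vram = [120, 40, 48, 40]  # hypothetical GB of powered-on VM memory
assert sum(per_host_vram) <= pool  # compliant, even though host 1 uses 120GB
```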

Remember vRAM is a pooled asset, so vCenter will manage and track the pooled usage. Linked vCenter instances will pool their memory together. Pools are based on the license SKU, so you have separate pools for each of the five editions. Since it is a pooled asset, you can legally exceed the entitlement on one or more servers, as long as there is excess unused capacity on other servers in the same pool.

What I also found interesting is that VMware will not be selling vRAM-only entitlement SKUs. The only way to buy more entitlements is to either upgrade to the next SKU level (e.g., Enterprise to Enterprise Plus) or buy more socket licenses. According to VMware this change will only result in increased costs for 4% of customers. The VMware chart below shows the list price for the licenses, vRAM entitlements, and features.

How VMware thinks this makes licensing simpler is beyond me. This major licensing change will likely impact large scale designs. So servers like Cisco UCS or large HP blades that can support 384GB or more of RAM could require a lot of new licenses to fully utilize their memory.

Another interesting consideration is VDI, such as XenDesktop. Typically these servers have a lot of memory (96GB to 144GB) and are packed to the gills with running VMs so you can reduce the per-VM hardware costs. The sweet spot for VDI servers has been 2-socket servers packed with memory. For large-scale VDI deployments with high concurrent usage, licensing could become complex to manage. VDI may not need fancy features like I/O control or storage DRS, so customers may look at lower SKU editions like standard edition. But with the lower vRAM entitlements, you really have to do some careful calculations and likely increase the number of licensed sockets to stay compliant.


Update 2: VMware has now announced their vSphere 5.0 Desktop license for VDI. Basically you license desktop VMs in bundles of 100 for $65 each. The host must be dedicated to VDI (no server VMs), the ESXi functionality is that of enterprise plus, and there are NO vRAM entitlement limitations. See their blog post here. However, this is only good for NEW vSphere 5.0 Desktop licenses. Customers with existing vSphere enterprise plus licenses that are upgraded to 5.0 are still bound by the vRAM entitlement restrictions.

The calculations below were made prior to the new Desktop SKU. I am glad VMware realizes there are non-View VDI solutions and that customers needed a price break to make VDI more affordable. Thank you VMware!

Update 3: Brian Madden did a more exhaustive VDI cost calculation matrix in his post here.

Let’s take an example of ~1800 VDI users. Using current 12-core servers, you could probably get ~90 users per server with 144GB of RAM, allocating 1.5GB per VM. For 1800 users that equates to 20 servers, with no spare capacity. Let’s throw in three servers for extra capacity, for a total of 23 servers.

Standard:
vSphere 4.x: 23 x 2 x $995 = $45,770 (46 licenses)
vSphere 5.0: (1800 VMs x 1.5GB) / 24GB = $112,435 (113 licenses)
$/VDI VM = $25 vs. $62 (~145% increase)

Enterprise:
vSphere 4.x: 23 x 2 x $2875 = $132,250 (46 licenses)
vSphere 5.0: (1800 VMs x 1.5GB) / 32GB = $244,375 (85 licenses)
$/VDI VM = $74 vs. $136 (84% increase)

Enterprise Plus:
vSphere 4.x: 23 x 2 x $3,495 = $160,770 (46 licenses)
vSphere 5.0: (1800 VMs x 1.5GB) / 48GB = $199,215 (57 licenses)
$/VDI VM = $89 vs. $111 (25% increase)

Of course these costs do not take into account other unused capacity in the environment, so this is ‘worst case’ pricing. But remember that vRAM entitlements are SKU specific. So if you use standard edition licenses for VDI, but enterprise plus for servers, you need to keep track of two separate vRAM pools. What’s interesting is that the enterprise edition costs substantially more per VDI VM with v5.0 ($136) than enterprise plus ($111), because of the vRAM entitlement differences. VMware View users may not be hit as hard, but I’m not as familiar with those licensing specifics as I support a XenDesktop environment.
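For anyone who wants to rerun these numbers, here is the whole comparison as a small Python sketch. Prices and vRAM entitlements are the list figures quoted above; license counts are rounded up to whole licenses, which nudges the standard edition figure slightly compared to a fractional calculation:

```python
# Rerun the 1,800-user VDI licensing comparison: 1.5GB per VM on
# 23 dual-socket hosts (46 sockets total).
import math

EDITIONS = {  # edition: ($ list price per socket license, v5.0 vRAM GB/license)
    "standard":        (995, 24),
    "enterprise":      (2875, 32),
    "enterprise_plus": (3495, 48),
}

def v4_cost(edition, sockets=46):
    price, _ = EDITIONS[edition]
    return sockets * price  # v4.x: per-socket only, no vRAM math

def v5_cost(edition, users=1800, gb_per_vm=1.5, sockets=46):
    price, vram = EDITIONS[edition]
    # v5.0: need enough licenses to cover both sockets and pooled vRAM.
    licenses = max(math.ceil(users * gb_per_vm / vram), sockets)
    return licenses * price

for ed in EDITIONS:
    print(ed, v4_cost(ed), v5_cost(ed))
# standard 45770 112435
# enterprise 132250 244375
# enterprise_plus 160770 199215
```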

As noted in a VMware FAQ, there is no “hard stop” if you reach the vRAM limit in the standard, enterprise and enterprise plus SKUs. There is a hard stop for the vCenter server for Essentials SKU. So in most editions vCenter will only nag you if you are out of compliance, but you won’t be left in the lurch unable to power on VMs. Of course you should adequately predict vRAM usage and purchase licenses in advance to stay ahead of the curve and be completely compliant.

It will be interesting to see how much customers push back on this new ‘simplified’ licensing model, and whether VMware changes direction by the time it hits the streets in late Q3 2011. I think this will make customers look at alternatives such as XenServer and Hyper-V more closely. There’s a very active thread on VMware forums about the licensing changes, and it’s mostly shock and awe.

VMware vSphere 5.0 Announced!

In case you were living under a rock today, or don’t have lots of RSS subscriptions for virtualization blogs, you may not have heard that VMware announced their vSphere 5.0 product today. Although not shipping until late in Q3 of 2011, the cat is now out of the bag and technical details are abundant. This is a huge release with hundreds of new features and tweaks, so I’m sure the blogosphere will be crammed with great details over the coming months.

VMware had an online virtual product release with several webinars, live Twitter feeds and live Q&A. So in a series of posts I’ll just cover some of the very high level new features, so you get a feel of the magnitude of the updates and hopefully get you interested in reading more on your own.

A few of the major feature enhancements include:

  • Exclusive use of the ESXi hypervisor. No more ESX.
  • Auto Deploy. Uses host profiles to provide stateless computers with no local storage. Enables you to rapidly provision new servers and centralize patch management. You no longer really patch servers; you reboot the server and it downloads a whole new image.
  • Storage DRS. Tiered storage based on performance characteristics. Load balance VMs based on I/O profile and align with SLAs. You can put a datastore in maintenance mode and all VMs will be vMotioned to other datastores.
  • Added support for NFS storage I/O control (previously limited to block storage)
  • Per-VM network I/O controls, to help eliminate noisy neighbors.
  • VMs can now support 3D graphics
  • Supports client-connected USB devices
  • Support for USB 3.0
  • Supports smartcard readers
  • Mac OS X server support
  • Hardware VM version has been increased to v8.0 and EFI virtual BIOS
  • VM limits increased to 32 vCPUs, 1TB RAM, support 1,000,000 IOPS, >36Gb/s network throughput
  • Brand new HA architecture. Supports larger clusters, offers simplified setup, and is more reliable.
  • vCenter appliance running on Linux. Only supports Oracle DBs. Didn’t VMware learn from vCloud? Not as full featured as the Windows version.
  • Brand new web client to manage vSphere from anywhere.
  • Networking supports Netflow, SPAN support and LLDP
  • ESXi now has a built-in firewall
  • VMFS version increased to 5.0 (online non-disruptive update from prior versions)
  • VMFS support for datastores up to 64TB without using extents
  • VAAI v2
  • Software FCoE initiator
  • vMotion support for higher latency links (up to 10ms)
  • Dropped the “Advanced” licensing SKU
  • Licensing is now based on CPU sockets AND vRAM (see my licensing post here). No more core/memory limitations.
  • vCenter Heartbeat 6.4 supports SQL Server 2008 R2 and vCenter plug-in for monitoring.
  • New vSphere Storage Appliance

Nearly every feature could have a dedicated blog post about it, so this is just a small snapshot. Other products like SRM and vShield have also undergone major updates. Stay tuned for a lot more posts about new features.

Get Your Nerd On T-shirt now!

A friend of mine, Chris McCain, has some cool t-shirts for sale on his site, Get Your Nerd On.

Check ’em out and get one today!
