DISA VMware vSphere 5 STIG Released

Hot off the press: DISA has released the VMware vSphere 5 STIG, which includes vCenter, ESXi, and VM components. For those of you familiar with U.S. Government IT systems, you’ve probably heard of the DISA STIGs. STIGs are Security Technical Implementation Guides, which set the security baseline for a variety of operating systems, network devices, and applications. Each one is basically a long checklist of hardening settings that the product should comply with. A fully STIG’d system usually won’t be very usable, so some “findings” are normal for many environments, and testing a system after applying the STIG is critical.

The process to create and approve the STIGs is quite lengthy, so their release for a product will generally lag the GA/RTM by months if not years. So it’s not surprising that the vSphere 5 STIGs are just now coming out, nearly two years after vSphere 5.0 hit the streets.

The STIGs for vSphere are publicly available, so even if you aren’t supporting U.S. Government systems, they are definitely worth a look. There are a lot of settings in there that you may not find elsewhere. You can find the download page here. When you go to that page you will see the following downloads.

DISA STIG download page

What you really want are the ZIP files. Once you open the ZIP file, open the embedded ZIP file. Inside you will see an XML file, an XSL style sheet, and possibly an image or two. Extract all the files to a local directory, then open the XML file with a browser.

Extracted STIG files
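If you have PowerShell 5.0 or later handy, you can script the double unzip. The sketch below is only a rough example; the file and folder names are placeholders you would substitute with your own.

# Rough sketch: extract the outer DISA ZIP, then any embedded ZIPs, into one folder.
# Requires PowerShell 5.0+ for Expand-Archive; file/folder names are placeholders.
$outer = "C:\Temp\U_VMware_vSphere_5_STIG.zip"   # outer ZIP from the DISA site (example name)
$work  = "C:\Temp\STIG"

Expand-Archive -Path $outer -DestinationPath $work -Force

# Extract the embedded ZIP(s) into the same folder so the XML and XSL stay together
Get-ChildItem -Path $work -Filter *.zip -Recurse |
    ForEach-Object { Expand-Archive -Path $_.FullName -DestinationPath $work -Force }

# Open the manual-check XML with its default handler (typically your browser)
Get-ChildItem -Path $work -Filter *.xml -Recurse | ForEach-Object { Start-Process $_.FullName }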

Once opened in your browser you will see a very long list of manual checks that you need to perform. An example is shown below. Many steps detail where in the GUI or on the command line you need to look for or configure the hardening setting. There’s no automation tool that I’m aware of, so for a big environment this would be extremely time-consuming.

Example STIG check

VMware has its own very good set of hardening guides that you can find here. VMware is much more timely with its releases, so vSphere 5.1 guides are already available. To help automate the checks from the VMware hardening guide, you can use this great script by William Lam, here.
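If you only need to spot-check a handful of host settings yourself, PowerCLI’s Get-AdvancedSetting makes a simple scaffold. This is just a rough sketch: the vCenter name is a placeholder, and the setting names and expected values below are illustrative examples that you should replace with the values from the actual STIG or hardening guide checks.

# Rough sketch of scripting a couple of host-level checks with PowerCLI.
# Setting names and expected values are examples only -- take them from the real checks.
Connect-VIServer vcenter.example.com   # placeholder vCenter name

$checks = @{
    "Syslog.global.logHost"     = "tcp://syslog.example.com:514"   # example expected value
    "UserVars.ESXiShellTimeOut" = "600"                            # example expected value
}

foreach ($vmhost in Get-VMHost) {
    foreach ($name in $checks.Keys) {
        $actual = (Get-AdvancedSetting -Entity $vmhost -Name $name).Value
        [pscustomobject]@{
            Host     = $vmhost.Name
            Setting  = $name
            Expected = $checks[$name]
            Actual   = $actual
            Pass     = ("$actual" -eq $checks[$name])
        }
    }
}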

Happy STIG’ing!

New VMware Cisco UCS Drivers for vSphere 5.x

New VMware Cisco UCS drivers for vSphere 5.x are now out for the fnic and enic. The fnic and enic drivers are for the Cisco VIC cards, which come in a few different flavors and are the most common FCoE cards used in UCS blades. They are now at versions 1.5.0.20 (fnic) and 2.1.2.38 (enic). The enic driver has the following enhancements:

  • Improved RX checksum offload support
  • Fixed PSOD in enic_reset in UPT mode
  • In UPT mode, allow updating the provisioning info of a Virtual Interface
  • Add support for Cisco VIC 1240
  • Improved WQ/RQ error recovery in UPT mode
The fnic driver has the following changes:

New Features:
  • FIP VLAN Discovery support
  • FC Event Tracing

Bugs Fixed (since fnic version 1.5.0.8):

  • Corrected failure to login to SAN fabric which could randomly occur when FIP is enabled and the adapter is connected to an NPV-enabled switch advertising multiple FCFs (CSCtr73717, CSCtk14373).
  • Serialized interaction with adapter firmware to correct occasional fnic and/or device offline behavior when the connected Veth port on the switch is shut (disabled).  This was a general issue that could also occur in other scenarios where the driver was required to issue a reset command to the adapter firmware (CSCtz63473, CSCty31268).
New Hardware Supported:
  • Cisco UCS Virtual Interface Card 1240

The Cisco interoperability matrix shows these drivers are validated with UCS 2.1(1) and 2.1(1a) firmware. You can find the whole matrix here. To download the new drivers, go to:

My VMware > Product & Downloads > All Downloads > VMware vSphere and click on the Drivers and Tools tab.

Cisco is very slow to release customized ESXi installation media, so check out my post here on rolling your own updated ESXi 5.0 media. This would allow you to build ESXi 5.0 U2 media with the latest drivers baked in. As of this writing, Cisco has only released 5.0 U1 custom ISO media.
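For reference, the Image Builder route looks roughly like the sketch below. Treat it as a sketch only: the depot, profile, and VIB package names are examples that you would swap for the bundles you actually downloaded.

# Rough sketch: slipstream the Cisco enic/fnic drivers into ESXi 5.0 U2 media with
# the PowerCLI Image Builder snap-in. Depot, profile, and package names are examples.
Add-EsxSoftwareDepot C:\Depot\ESXi500-Update02-offline-bundle.zip   # ESXi 5.0 U2 offline bundle (example name)
Add-EsxSoftwareDepot C:\Depot\enic-offline-bundle.zip               # Cisco enic bundle (example name)
Add-EsxSoftwareDepot C:\Depot\fnic-offline-bundle.zip               # Cisco fnic bundle (example name)

# List the profiles in the depot, then clone the standard one
Get-EsxImageProfile | Select-Object Name
$ip = New-EsxImageProfile -CloneProfile "ESXi-5.0.0-U2-standard" -Name "ESXi50U2-Cisco" -Vendor "Custom"   # source profile name is an example

# Add (or update) the driver VIBs -- package names may differ per driver release
Add-EsxSoftwarePackage -ImageProfile $ip -SoftwarePackage net-enic, scsi-fnic

# Export to a bootable ISO, or to a ZIP bundle for VUM/Auto Deploy
Export-EsxImageProfile -ImageProfile $ip -ExportToIso -FilePath C:\Depot\ESXi50U2-Cisco.iso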

You can push the enic and fnic driver updates via VUM. To do that you will need to unzip the parent archive (the one you download), then import the “offline_bundle” ZIP into VUM. You can then create a custom baseline in VUM to push the enic/fnic drivers and validate compliance.
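If you prefer to script that last part, the Update Manager PowerCLI cmdlets can build and remediate the baseline. Below is a rough sketch, assuming the offline bundles were already imported into VUM through the GUI; the cluster name and search strings are examples, and parameter details can vary by Update Manager PowerCLI version.

# Rough sketch using Update Manager PowerCLI; assumes the enic/fnic bundles were
# already imported into VUM. Names below are examples.
$patches  = Get-Patch | Where-Object { $_.Name -match "enic|fnic" }
$baseline = New-PatchBaseline -Static -Name "Cisco VIC drivers" -IncludePatches $patches

$cluster = Get-Cluster "UCS-Cluster"                  # example cluster name
Attach-Baseline -Baseline $baseline -Entity $cluster
Scan-Inventory -Entity $cluster                       # scan for compliance
Remediate-Inventory -Entity $cluster -Baseline $baseline -Confirm:$false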

VMware Whitepaper on CPU Scheduling Performance

In a virtual environment, CPU scheduling is extremely important. You will very likely have more vCPUs than pCores, so the hypervisor has to play some “tricks” to fairly schedule VMs onto finite hardware resources. With each version of vSphere, VMware has tuned and optimized CPU scheduler performance.
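As a quick way to see how overcommitted your own hosts are, the rough PowerCLI sketch below tallies powered-on vCPUs against physical cores per host. This is just a sketch; it assumes NumCpu on the host object reports physical cores rather than hyper-threads.

# Rough sketch: per-host vCPU-to-core ratio, one rough indicator of scheduler pressure.
Get-VMHost | ForEach-Object {
    $vcpus = (Get-VM -Location $_ |
              Where-Object { $_.PowerState -eq "PoweredOn" } |
              Measure-Object -Property NumCpu -Sum).Sum
    if (-not $vcpus) { $vcpus = 0 }
    [pscustomobject]@{
        Host  = $_.Name
        Cores = $_.NumCpu
        vCPUs = $vcpus
        Ratio = if ($_.NumCpu) { [math]::Round($vcpus / $_.NumCpu, 2) } else { $null }
    }
}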

VMware design goals for CPU scheduling are fairness, throughput, responsiveness and scalability. So what has VMware done under the covers in vSphere 5.1 to be even more efficient? Glad you asked! A few days ago VMware released a brand new whitepaper titled The CPU Scheduler in VMware vSphere 5.1. In it, you get to read about:

  • Removing zombies
  • Proportional share-based algorithm
  • Relaxed co-scheduling
  • Load balancing
  • Hyper-threading Policy
  • NUMA Scheduling Policy
  • Wide Virtual Machines
  • vNUMA Best Practices
  • Scheduler experiments and resulting data

In conclusion the whitepaper states:

The ESXi CPU scheduler achieves fair distribution of compute resources among many virtual machines without compromising system throughput and responsiveness. Relaxed co-scheduling is a salient feature that enables both correct and efficient execution of guest instructions with low overhead. The ESXi CPU scheduler is highly scalable and supports very big systems and wide virtual machines.

vSphere 5.1 optimizes the load-balancing algorithm introduced in 5.0. It results in noticeable reductions in CPU scheduling overhead. A policy change on hyper-threaded systems enables out-of-the-box performance of 5.1 exceeding that of a tuned version of vSphere 4.1. No special tuning is required to achieve the best performance for most common application workloads.

If you are uber geeky, then check it out here.

HP Releases vSphere 5.0 U2 Custom ISO

In case you missed my previous blog post, yesterday VMware released vSphere 5.0 Update 2, which fixes a plethora of bugs and plugs security holes. You can find my post here, with links to the release notes for all the gory details.

Right on the heels of VMware releasing 5.0 Update 2, HP was the first vendor I’m aware of to release an updated custom installation ISO for ESXi 5.0 Update 2. You can download the ISO here. When performing any new ESXi 5.0 installs on HP hardware, I would urge using the U2 CD so you have the latest drivers and security fixes. Also remember to update your HP server firmware with the latest HP Service Pack for ProLiant, which you can find here.

VMware releases vSphere 5.0 Update 2

Yesterday VMware released vSphere 5.0 Update 2. You can find the full ESXi release notes here. vCenter 5.0 Update 2 release notes are here. Highlights of Update 2 include:

  • Full support for installing vCenter on Windows Server 2012
  • Full customization support for Windows 8 and Windows Server 2012
  • Support for Solaris 11, Solaris 11.1, Mac OSX Server Lion 10.7.5
  • Lots and lots of bug fixes

Personally I think Update 2 is a huge step forward, as Windows Server 2012 and Windows 8 are now first-class citizens. Yippee! Too bad 5.1 doesn’t support them yet. For those of you still burdened with the vTax (Essentials edition), Update 2 removes the hard enforcement of the 192GB vRAM limit. VMs will now power on even if you exceed your licensed limit.

I would suggest you review the full release notes and bug fixes to see if any problems you have with 5.0 Update 1 have been addressed. Update 2 also includes security fixes, so even if a particular non-security bug doesn’t affect you, you should still upgrade to Update 2 to keep your environment secure.

Update: VUM cannot be installed on Windows Server 2012, since VUM requires a 32-bit ODBC DSN. Windows Server 2012 has removed support for 16-bit and 32-bit DSNs. Unfortunately VUM in 5.1 also uses a 32-bit DSN, so VMware needs to get with the program and use 64-bit DSNs if they don’t want to be trapped in the stone age.

New VMware ESXi 5.0 Patch Released

A few days ago VMware released a new patch bundle for ESXi 5.0. The new build number is 821926. You can find the full details here. As always, you can download the patches from their download portal here. The fixes are of severity “Important” and require VM shutdown and a host reboot.
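If you want to apply the patch from the command line instead of through VUM, a rough PowerCLI sketch is below. The host name and datastore paths are examples, the patch bundle is assumed to already be extracted and uploaded to that datastore, and you will want to evacuate or shut down VMs first since a reboot is required.

# Rough sketch: patch a single host with PowerCLI. Host name and paths are examples.
$vmhost = Get-VMHost "esxi01.example.com"
Set-VMHost -VMHost $vmhost -State Maintenance

# Point HostPath at the patch metadata.zip from the extracted bundle (example path)
Install-VMHostPatch -VMHost $vmhost -HostPath "/vmfs/volumes/datastore1/ESXi500-patch/metadata.zip"

Restart-VMHost -VMHost $vmhost -Confirm:$false
# Once the host is back up and reconnected, take it out of maintenance mode:
# Set-VMHost -VMHost $vmhost -State Connected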

Fixed in this release are:

  • PR667599: Certain vSphere Client initiated storage operations such as Storage Rescan, Add Storage, or Detach LUN might take a long time to complete or fail due to a timeout when ESXi host has access to VMFS snapshot volumes (replica LUNs). This patch addresses the issue by reducing the time taken to complete these operations.

  • PR787454: After performing vMotion operations on virtual machines, a NUMA imbalance might occur with all the virtual machines being assigned the same home node on the destination server.
 
  • PR817140: On IBM BladeCenter HX5 UEFI servers, a message similar to the following might be displayed in the vSphere Client when you attempt to configure the power policy of an ESXi host:
Technology: Not Available
Active Policy: Not Supported.
  • PR886363: When a virtual machine is cloned through vCenter Server, the VM Generation ID of the cloned Windows 8 and Windows Server 2012 virtual machines is not updated. Note: To resolve this issue, you can edit the .vmx file of the cloned virtual machine to set vm.genid to 0, or remove the vm.genid line from the .vmx file.

  • PR887134: Timer stops in FreeBSD 8.x and 9.x as the virtual hardware HPET main counter register fails to update due to a comparison failure between signed and unsigned integer values.

  • PR896193: The VMGencounter generation ID is increased to a 128-bit value to support specification changes from MSFT, and ACPI notification support is added.

  • PR903619: The timers resulting from I/O operations are allocated from the VMkernel main heap. With increased I/O activity, the number of timers increases proportionally, leading to depletion of the main heap and eventually resulting in a purple diagnostic screen.
  • PR908776: A virtual machine might fail when it encounters error conditions during snapshot operations. Messages similar to the following are logged in the vmware.log file:

2012-06-15T13:00:14.449Z| vcpu-0| SnapshotVMXConsolidateOnlineCB: nextState = 6 uid 0
2012-06-15T13:00:14.449Z| vcpu-0| Vix: [1376329 mainDispatch.c:4084]: VMAutomation_ReportPowerOpFinished: statevar=3, newAppState=1881, success=1 additionalError=0
2012-06-15T13:00:14.449Z| vcpu-0| Vix: [1376329 vigorCommands.c:577]: VigorSnapshotManagerConsolidateCallback: snapshotErr = Change tracking target file already exists (5:F3C)
2012-06-15T13:00:14.449Z[+0.000]| vcpu-0| Caught signal 11 -- tid 1376329

or

2012-08-07T21:10:13.184Z| vcpu-0| SnapshotVMXTakeSnapshotComplete done with snapshot 'Snapshot1': 0
2012-08-07T21:10:13.184Z| vcpu-0| SnapshotVMXTakeSnapshotComplete: Snapshot 0 failed: Could not open/create change tracking file (5).
2012-08-07T21:10:13.184Z[+0.000]| vcpu-0| Caught signal 11 -- tid 5678212

  • PR881444: A large number of UDP packets are dropped when you use the VMXNET3 adapter in a Linux guest OS on an ESXi 5.0 host.

  • PR901683: After creating Microsoft Windows Server 2012 (64-bit) virtual machines with the create VM UI wizard and then installing VMware Tools, the guest OS name changes from Microsoft Windows Server 2012 (64-bit) to Microsoft Windows 8 (64-bit).
 
  • This patch updates the misc-drivers VIB to resolve an issue where the ESXi hosts might encounter a purple diagnostic screen with one of the following messages due to a deadlock during SCSI command completion:

PCPU X locked up. Failed to ack TLB invalidate (total of 1 locked up, PCPU(s): X).
Saved backtrace from: pcpu 4 TLB NMI
0x41226eb87aa0:[0x418003a7669d]RefCountIncWait@vmkernel#nover+0x28 stack: 0x73cd
0x41226eb87b00:[0x418003aea71e]Worldlet_WorldletAffinitySet@vmkernel#nover+0xa9 stack: 0xc37c0
0x41226eb87b30:[0x418003aeaa1b]WorldletAffinityTrackerUpdate@vmkernel#nover+0x42 stack: 0x41226eb87
0x41226eb87b60:[0x418003ae90ed]vmk_WorldletAffinityTrackerCheck@vmkernel#nover+0xf8 stack: 0x1ef4f9
0x41226eb87cc0:[0x418003ed3f6e]SCSILinuxWorldletFn@com.vmware.driverAPI#9.2+0x7e1 stack: 0x0
0x41226eb87d70:[0x418003aed17e]WorldletProcessQueue@vmkernel#nover+0x3c5 stack: 0x70000000c
0x41226eb87db0:[0x418003aed689]WorldletBHHandler@vmkernel#nover+0x60 stack: 0x41226eb65000
0x41226eb87e10:[0x418003a1824c]BHCallHandlers@vmkernel#nover+0xbb stack: 0x7004122000683c7
0x41226eb87e50:[0x418003a1873b]BH_Check@vmkernel#nover+0xde stack: 0x410001772000
 
or

PCPU Y locked up. Failed to ack TLB invalidate (total of 1 locked up, PCPU(s): Y).

Saved backtrace from: pcpu 13 TLB NMI
0x412240ac7bc8:[0x41800bcbe4ba]__raw_spin_failed@com.vmware.driverAPI#9.2+0x1 stack: 0x4100341fc180
...
0x412240ac7c78:[0x41800bca0c90]Linux_IRQHandler@com.vmware.driverAPI#9.2+0x2b stack: 0x41800bcc563e
0x412240ac7d08:[0x41800b841ba5]IDTDoInterrupt@vmkernel#nover+0x2fc stack: 0x410009a06018
0x412240ac7d48:[0x41800b841e23]IDT_HandleInterrupt@vmkernel#nover+0x72 stack: 0x41000c73e450
0x412240ac7d68:[0x41800b84274d]IDT_IntrHandler@vmkernel#nover+0xa4 stack: 0x412240ac7e80
0x412240ac7d78:[0x41800b8f1047]gate_entry@vmkernel#nover+0x46 stack: 0x4018
  
  • This patch updates the esx-base VIB to resolve an issue that occurs when you enable BPDU (Bridge Protocol Data Unit) guard on the physical switch port. BPDU frames sent from a bridged virtual NIC cause the physical switch port to be disabled and, as a result, the uplink goes down.
 

Cisco UCS ESXi 5 U1 Custom ISO Image Download

A few months ago I blogged about how to create your own custom Cisco UCS ESXi 5 U1 installation media with the latest security patches and drivers. There was also an ISO image on VMware’s web site, which magically disappeared for a while. Well, now it appears to be back. There is a Cisco Custom Image for ESXi 5.0 U1 GA Install CD dated 8/28/2012. You can download it here, under OEM Customized Installer CDs. As of April 2013, Cisco has not released an ESXi 5.0 U2 custom install ISO, so use the 5.0 U1 ISO for the time being.

Cisco UCS ESXi 5 ISO

Unfortunately the Cisco release notes PDF hasn’t been updated since April 2012, so without deconstructing the ISO image I have no idea what is updated. But as always, I would use the latest ISO when deploying new UCS servers.

How much does VMware EVC mode matter? Which one?

When configuring a cluster in vSphere, one of the options you should always configure is the VMware EVC mode. What is VMware EVC mode? It’s the Enhanced vMotion Compatibility setting, which enables vMotion across multiple generations of processors. New processors often come with new instruction sets, and if a VM is aware of these new instructions but gets vMotioned to a processor without them, you would get unpredictable results. vSphere therefore detects this condition and will prohibit a vMotion attempt when the CPUs are not compatible. Below is a screenshot from vSphere 5.0 showing the possible EVC compatibility modes for Intel hosts.

VMware EVC mode

But one question you may have is: which VMware EVC mode should I use, and does using a “lower” setting impact application performance? To answer this question VMware has written a whitepaper titled Impact of Enhanced vMotion Compatibility on Application Performance. For the complete story, I strongly suggest that you read the whitepaper. But if you are short on time, I’ll summarize the results for you.

For the vast majority of business applications the EVC mode has little to no impact on application performance. However, there are a couple of notable exceptions. Starting with the Intel Westmere processors, there are six new instructions for AES encryption which drastically increase encryption performance. This can be helpful for SSL-encrypted web sites, disk encryption, or anything else using AES. Below is a chart from the VMware whitepaper showing the dramatic AES encryption performance gains on the Westmere platform.

A less dramatic example is with multimedia content, and the SSE4.1 instruction set. Here, there was a 4% improvement in encoding rate when using an EVC mode that exposed SSE4.1 to the VM.

The other workloads in the whitepaper showed practically no difference in performance across all VMware EVC modes. Do take note that the whitepaper did not test Sandy Bridge EVC mode, which is for the newest Intel processors. Sandy Bridge EVC mode is only supported in vCenter 5.0 and later (not 4.1 or earlier).

One common configuration mistake I’ve seen at work is setting up clusters without any VMware EVC mode enabled. This may seem fine when all of your servers have the same processors, since vMotion will technically work just fine, but it is short-sighted. At some point you will probably introduce a new server with a newer processor into the cluster, and now you have a problem if EVC mode is not enabled. Changing the EVC mode requires power cycling VMs, so you will experience some downtime when you turn EVC mode on.

Bottom line: when you build a new vSphere cluster, use the latest EVC mode your processors support, which will cover the CPU models that may get introduced into your cluster later. If you go on a Sandy Bridge buying spree and use the Sandy Bridge EVC mode, then later find you want to add an older-generation server to the cluster, you will have to plan downtime for your VMs to downgrade the EVC mode.

Even for Sandy Bridge based clusters, unless you have an application that you know will use the new instructions (AVX and XSAVE), I would suggest you use Westmere EVC mode. That will preserve the AESNI/SSE4.1 instruction sets, yet allow a broad range of prior generation servers to join the cluster down the road, with no downtime.

If you aren’t sure what EVC mode applies to your processor, VMware has an outstanding KB article (KB1003212) that you should review. It breaks down what processor series (e.g. Intel E7-88xx) supports which EVC mode.
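To see where your own clusters stand, the rough PowerCLI sketch below reports each cluster’s configured EVC mode next to the highest mode each host supports. The cluster name in the commented Set-Cluster line is a placeholder, and the -EVCMode parameter only exists in newer PowerCLI releases; older versions have to go through the vSphere API.

# Rough sketch: compare each cluster's configured EVC mode with what its hosts support.
foreach ($cluster in Get-Cluster) {
    Write-Host ("{0}: configured EVC mode = {1}" -f $cluster.Name, $cluster.EVCMode)
    Get-VMHost -Location $cluster |
        Select-Object Name, ProcessorType, MaxEVCMode |
        Format-Table -AutoSize | Out-Host
}

# Example (placeholder cluster name): raise a cluster to Westmere EVC mode.
# Requires a PowerCLI version that exposes -EVCMode on Set-Cluster.
# Set-Cluster -Cluster "Prod-Cluster" -EVCMode "intel-westmere" -Confirm:$false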

Unattended vSphere Utility Installs

Sometimes you may want to install the various vSphere utilities (PowerCLI, vSphere CLI, vSphere Client, and VUM PowerCLI) to non-default directories, or use a silent/unattended installation to automate the process.

Below are four batch files that you can run to install the respective tool to the custom installation directory specified. What’s cool about the batch files is that you can double-click them from any path, and they will CD to the location of the installer and run it. If you are using Windows Server 2008/R2 with UAC, you will be prompted to elevate to do the installation, but otherwise no interaction is required.

The VUM PowerCLI extensions can’t be configured for a custom installation directory, so it will just silently install to the default location. You could of course also combine all of the commands and install all of the tools with a single click, silently.

I also included a silent installation of OpenSSL, which can be handy for creating ESXi, vCenter and VUM certificates.
----
cd /d %0\..
start /wait VMware-PowerCLI-5.0.0-435426.exe /q /s /w /L1033 /V" /qr INSTALLDIR=\"D:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI\""
----
cd /d %0\..
start /wait VMware-viclient-all-5.0.0-455964.exe /q /s /w /L1033 /v" /qr INSTALLDIR=\"D:\Program Files (x86)\VMware\Infrastructure\""
----
cd /d %0\..
start /wait VMware-VSphere-CLI-5.0.0-422456.exe /s /v"/qb INSTALLDIR=\"D:\Program Files (x86)\VMware\VMware vSphere CLI\""
----
cd /d %0\..
start /wait VMware-UpdateManager-Pscli-5.0.0-432001 /q /s /w /L1033 /V" /qr"
----
cd /d %0\..
Vcredist_x64.exe /q /norestart
Win64OpenSSL-1_0_0d.exe /verysilent /sp-

vSphere 5 VDI Licensing Redux

Amid the licensing kerfuffle surrounding vSphere 5.0, VDI users may have overlooked some interesting information that VMware posted today about VDI and vSphere 5.0. Other bloggers like Brian Madden and I have done some analysis on what vSphere 5.0 means for VDI, mostly for non-VMware products such as XenDesktop.

However, VMware’s blog post from today contains some very interesting information that I had not seen before. But before we get to that, let me quickly recap the two primary options for VDI on vSphere 5.0. (For more details see my blog post here.)

First, you can buy/use regular vSphere licenses and work within the vRAM entitlement limits. Depending on the number and size of VDI VMs, this may or may not be the best deal. Second, and new to vSphere 5.0, is the vSphere Desktop license, which is sold in packs of 100 VMs for $6,500. This removes the vRAM entitlement limit but imposes other restrictions, such as not running server OS VMs on the same hosts as VDI VMs. But overall this is a better ROI, as the per-VM costs are generally lower.

Now here’s the new information that I just learned about. According to a VMware FAQ in the blog today “Customers who purchased licenses for vSphere 4.x (or previous versions) prior to September 30, 2011 to host desktop virtualization, and hold current SnS agreements, may upgrade to vSphere 5.0 while retaining access to unlimited vRAM entitlement.”

Whoa…stop the train! Did I read that right? You can “violate” the vRAM limitations if you purchased vSphere 4.x licenses for VDI before Sept 30, 2011? Yes, but what’s the catch? Well, there are a couple, but they aren’t unreasonable. VMware states you must use a separate vCenter instance that is dedicated to VDI in order to re-purpose your vSphere 4.x licenses for VDI and remove the vRAM caps. VMware states this is required, not optional. You also can NOT run general purpose (non-VDI) server VMs on the same hosts, but you could run VDI broker/monitoring VMs on them.

Now all the bloggers who did elaborate calculations for VDI can toss much of that work to the wind, recommend people deploy a dedicated vCenter server to manage VDI-only hosts, and be done with it. Companies just getting into VDI should probably go the vSphere Desktop SKU route, but it’s nice to know existing VDI customers aren’t left out in the cold. You may need to pony up for an additional vCenter license, depending on your existing topology.

With XenDesktop in mind, this also makes some sense. Why? Who knows when Citrix will officially support vSphere 5.0 for XenDesktop. I would guess it will be several months after vSphere 5.0 code hits the streets. So you can dedicate a vCenter 4.x instance to XenDesktop, then migrate your other production servers to vSphere 5.0 on your schedule without worrying about XenDesktop impacts.

Along with the new entitlement increases announced today, I think the grandfathering of VDI only hosts into a vRAM entitlement free environment is a great gesture on VMware’s part. Thank you!

P.S. It’s not entirely clear to me if one leverages the vSphere Desktop licenses whether that also requires a separate vCenter instance or not. I suspect it does, unless there’s a way to tell vCenter a host is only for VDI usage.
