vCenter 5.1 U1 Installation: Part 12 (VUM SSL Certificate)

Welcome to the vSphere 5.1 Update 1 VUM SSL certificate replacement procedures. In Part 11 we installed VMware vCenter Update Manager (VUM) 5.1 Update 1. Recently VMware released the vCenter Certificate automation tool, which helps lessen the pain associated with replacing the self-signed certificates with trusted certificates. I recommend you use that tool, instead of pre-staging certificates or replacing them manually.

However, v1.0 of the VMware tool does NOT support updating the VUM SSL certificate if you’ve registered VUM with vCenter using the vCenter FQDN. Since I consider it a best practice to use the vCenter FQDN (rather than the IP address), we need to manually replace the VUM certificate until an update to the tool is released. I recommend replacing the VUM certificate only after you’ve gone through all 15 parts of this install series and run the VMware vCenter certificate automation tool. If you have completed all those steps, then proceed with this article. If not, jump ahead to Part 13 and come back here later.

If you want to refer to the official VMware article for replacing the VUM SSL certificate, you can find the procedure here. Thankfully it’s not difficult, so you shouldn’t have any problems.

Before we get started, listed below are the other related articles in this series:

Part 1 (SSO Service)
Part 2 (Create vCenter SSL Certificates)
Part 3 (Install vCenter SSO SSL Certificate)
Part 4 (Install Inventory Service)
Part 5 (Install Inventory Service SSL Certificate)
Part 6 (Create vCenter and VUM Databases)
Part 7 (Install vCenter Server)
Part 8 (Install Web Client)
Part 9 (Optional SSO Configuration)
Part 10 (Create VUM DSN)
Part 11 (Install VUM)
Part 13 (VUM Configuration)
Part 14 (Web Client and Log Browser SSL)
Part 15 (ESXi Host SSL Certificate)

Updating VUM SSL Certificate

1. Back up all the files in the directory below. Then copy the rui.key, rui.crt and rui.pfx files from your D:\Certs\VUM directory and replace the files in this directory:

C:\Program Files (x86)\VMware\Infrastructure\Update Manager\SSL

2. Stop the VMware vSphere Update Manager Service.

3. In the C:\Program Files (x86)\VMware\Infrastructure\Update Manager directory launch the VMwareUpdateManagerUtility.exe application.

4. Log in to the vCenter server using the proper credentials.

5. Click the SSL Certificate option on the left side, then check the box on the right side and click Apply.

6. If all goes well you should see a success confirmation window. Restart the VUM service as directed.
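If you prefer to script steps 1 and 2, here is a minimal PowerShell sketch. It assumes the default install path above, the D:\Certs\VUM staging directory, and the standard VUM 5.1 service display name, so adjust for your environment:

# Hedged sketch: back up the current VUM SSL files, stop the service,
# then drop in the new certificate set (paths per steps 1-2 above)
$ssl = "C:\Program Files (x86)\VMware\Infrastructure\Update Manager\SSL"
New-Item -ItemType Directory -Path "$ssl\Backup" -Force | Out-Null
Copy-Item "$ssl\rui.*" "$ssl\Backup" -Force

# Service display name assumed to be "VMware vSphere Update Manager Service"
Get-Service -DisplayName "VMware vSphere Update Manager*" | Stop-Service

Copy-Item "D:\Certs\VUM\rui.key","D:\Certs\VUM\rui.crt","D:\Certs\VUM\rui.pfx" $ssl -Force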

In Part 13 we perform basic VUM configuration to add the HP patch depot and attach built-in baselines for VMs and ESXi hosts.

Automate VMware VMX Security Lockdowns

When building vSphere VM templates, it’s a best practice to incorporate a number of security lockdowns into the template. There are a variety of sources for recommended lockdowns, such as the VMware vSphere 4.1 Hardening Guide. But what if you already have VMs in production that you need to lock down, or want a simple way to configure your VM template settings?

I started with some published PowerCLI examples, modified them, and the result is the script below. The script is called with a single argument, which can be the name of a VM or a wildcard, so you can make mass changes. As always, TEST, TEST, TEST! Before you lock down all the settings below, make sure you understand what they do and determine whether you really want to disable each feature.

This script can be very handy for XenDesktop 5.0 deployments, because its MCS engine does not properly copy custom VMX settings from the template, so you are left with unsecured VMs. Use the wildcard feature to hit all of the VMs. Also note that many of the settings require the VM to be power cycled, not just rebooted, for the new values to be read.

Before you run the script you will of course need to use the Connect-VIServer cmdlet to establish a connection to vCenter or an ESX(i) host. After the connection is established you can run the script and monitor the progress in the vCenter recent tasks pane.
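For example (the script filename and server name here are just placeholders):

# Connect to vCenter, then run the script against one VM or a wildcard
Connect-VIServer vcenter.contoso.net
.\Set-VMXLockdown.ps1 "Win7-Template"
.\Set-VMXLockdown.ps1 "XD*"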

# Configure client VM VMX security settings.
# Version 1.0, August 14, 2011
# Argument can be a single VM or a wildcard

$ExtraOptions = @{
    "isolation.device.connectable.disable"="true";
    "isolation.device.edit.disable"="true";
    "isolation.tools.copy.disable"="true";
    "isolation.tools.paste.disable"="true";
    "isolation.tools.setGUIOptions.disable"="true";
    "Isolation.tools.Setinfo.disable"="true";
    "Isolation.tools.connectable.disable"="true";
    "isolation.tools.diskShrink.disable"="true";
    "isolation.tools.diskWiper.disable"="true";
    "isolation.tools.hgfs.disable"="true";
    "isolation.tools.commandDone.disable"="true";
    "isolation.tools.getCreds.disable"="true";
    "isolation.tools.guestCopyPasteVersionSet.disable"="true";
    "isolation.tools.guestDnDVersionSet.disable"="true";
    "isolation.tools.guestlibGuestInfo.disable"="true";
    "isolation.tools.guestlibGetInfoDisable.disable"="true";
    "isolation.tools.haltReboot.disable"="true";
    "isolation.tools.haltRebootStatus.disable"="true";
    "isolation.tools.hgfsServerSet.disable"="true";
    "isolation.tools.imgCust.disable"="true";
    "isolation.tools.memSchedFakeSampleStats.disable"="true";
    "isolation.tools.runProgramDone.disable"="true";
    "isolation.tools.StateLoggerControl.disable"="true";
    "isolation.tools.unifiedLoop.disable"="true";
    "isolation.tools.upgraderParameters.disable"="true";
    "isolation.tools.vixMessages.disable"="true";
    "isolation.tools.vmxCopyPasteVersionGet.disable"="true";
    "isolation.tools.vmxDnDVersionGet.disable"="true";
    "isolation.tools.setOption.disable"="true";
    "isolation.tools.log.disable"="true";
    "log.rotateSize"="100000";
    "log.keepOld"="10";
    "Tools.setinfo.sizelimit"="1048576";
    "tools.synchronize.restore"="false";
    "time.synchronize.resume.disk"="false";
    "time.synchronize.continue"="false";
    "time.synchronize.shrink"="false";
    "time.synchronize.tools.startup"="false";
    "vmci0.unrestricted"="false";
    "guest.command.enable"="false";
    "tools.guestlib.enableHostInfo"="false";
    "isolation.tools.dnd.disable"="true";
    "RemoteDisplay.maxConnections"="1";
    "Guest.command.enabled"="false";
    "devices.hotplug"="false";
    "vmxnet.noOprom"="true"
}
$vmConfigSpec = New-Object VMware.Vim.VirtualMachineConfigSpec
Foreach ($Option in $ExtraOptions.GetEnumerator()) {
    $OptionValue = New-Object VMware.Vim.optionvalue
    $OptionValue.Key = $Option.Key
    $OptionValue.Value = $Option.Value
    $vmConfigSpec.extraconfig += $OptionValue
}

# Get all VMs per the argument

$VMs = Get-VM $args[0] | Get-View

foreach ($vm in $VMs) {
    $vm.ReconfigVM($vmConfigSpec)
}
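To spot-check that the values actually landed on a VM, something like the following works (the VM name is a placeholder); remember most settings are only read at the next power cycle:

# List the isolation.* keys now present in the VM's extra config
(Get-VM "Win7-Template" | Get-View).Config.ExtraConfig |
    Where-Object { $_.Key -like "isolation.*" } |
    Select-Object Key, Value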

Free vSphere Compliance Checker

Yippee... a free tool from VMware! This nice little tool runs compliance scans against vSphere hosts and compares the results to the VMware Hardening Guidelines. Almost a year ago I wrote a short blog announcing their hardening guide here. Since then, VMware has released a hardening guide for vSphere 4.1, which you can find here.

This tool beats doing manual scans to see how compliant your environment is. However, the free tool only scans five hosts at once, and I can’t find a way to display which VMs are not in compliance; it just gives the server an overall score for each item. So it has very limited utility, IMHO. If you want more detailed information, you’ll need to step up to their paid product, vCenter Configuration Manager, or a third-party tool.

You can download the free tool here. Be aware that you need Java installed on the computer you run the scan from, and on 64-bit systems it may default to the wrong Java directory path. Scanning my lab host took less than a minute and turned up several non-compliant settings, most of which I was aware of and have accepted the risk for, since it’s just my home ESX server.

VMware VUM 4.1 U1 SSL Certificate Replacement

One of the continuing pain points with VMware vSphere is the unnecessarily complicated procedure to install trusted SSL certificates in ESXi, vCenter and VUM. Up until 4.1 Update 1 (released 2/10/11), VMware had no public procedures to update the VUM SSL certificate, over 1.5 years after vSphere 4.0 hit the streets. Plus I’ve found that even the published procedures for ESX(i) and vCenter were convoluted and incomplete.

So over the last couple of years I’ve written several blogs about how to replace your ESXi, vCenter and VUM certificates. VMware made a little progress with VUM 4.1 Update 1, in that they now have a GUI utility that performs the behind-the-scenes reconfiguration of VUM to use a new SSL certificate. This new tool is called VMware Update Manager utility and does more than just update your VUM SSL certificates. You are still left with a painful process for ESXi and vCenter, so maybe in vSphere 5.0 VMware will wake up and provide a more streamlined procedure.

Even with the new tool in 4.1 U1, I found the associated KB article less than helpful; it even tells you to leverage OpenSSL on an ESX (not ESXi) host to generate new self-signed certificates. The second half of the article has instructions for using a trusted commercial CA certificate, but they still have you leveraging ESX to generate the certificate requests. This boggles my mind for several reasons:

1) ESX 4.1 is the last release and is a dying code branch. VMware has stated ESXi is the future and the only option in vSphere 5.0.

2) OpenSSL is not included in ESXi, so you can’t follow the KB article if you only run ESXi. Why publish a KB article that doesn’t apply to organizations that only use ESXi?

3) OpenSSL is open source and widely available for free for many platforms including Windows. vCenter and VUM only run on Windows, so it makes a lot more sense to have customers download Windows OpenSSL and generate the certificates on a Windows computer.

4) Even if you have ESX, VMware always lags in incorporating the latest version of OpenSSL into ESX, so you could be using a version with known vulnerabilities.

I’m not trying to bash VMware, but come on guys, please get with the program. As a testament to the SSL problems VMware has not addressed, my SSL blog posts get a lot of hits. Until VMware “gets it right” I’ll continue to help the community at large. So to that end, let’s get on with how to update VUM SSL certificates in VUM 4.1 U1.

1. Download OpenSSL Windows binaries here. I recommend the full v1.0.0c package. Install OpenSSL using all default values on any Windows computer. I put OpenSSL on my vCenter server since I need it for ESXi and vCenter SSL certificate generation.

2. Generate a 2048-bit RSA private key (you could use 1024 bit as well, but I like stronger keys):
openssl genrsa 2048 > rui.key

3. Create a certificate request based on the previously generated private key:
openssl req -new -key rui.key > rui.csr

For the certificate request parameters, use the values appropriate for your organization. The critical parameter is the common name, which should be the FQDN of your VUM server. Do not use a challenge password.

4. At this point you have a valid certificate request and you can submit it to a commercial CA, or your internal trusted CA. For the purposes of this article I will leverage a 2008 R2 Microsoft CA, so some steps may vary if you use a commercial cert.

5. Use NotePad and copy the contents of rui.csr to the clipboard.

6. Navigate to your Microsoft CA, click on Request a certificate, click advanced certificate request, click Submit a certificate request by using a base-64-encoded CMC….

7. On the Saved Request screen paste the contents of the clipboard, and change the certificate template to Web Server (or your organization’s web server template name).

8. Submit the certificate request and download it as base-64 encoded WITHOUT the certificate chain, and save it with a filename of rui.crt.

9. Type the following command (and use a blank password when prompted):
openssl pkcs12 -export -in rui.crt -inkey rui.key -name rui -out rui.pfx

10. Stop the VMware vCenter Update Manager service.

11. Back up the existing VUM certificates located in your VUM SSL directory; by default it’s:
C:\Program Files (x86)\VMware\Infrastructure\Update Manager\SSL

12. Copy your new rui.crt, rui.key and rui.pfx files to the SSL directory above, replacing the existing files.

13. Navigate to C:\Program Files (x86)\VMware\Infrastructure\Update Manager and launch VMwareUpdateManagerUtility.exe. Log in with your vCenter administrator credentials.

14. Click on SSL Certificate, then check the box under the instructions, and finally click Apply.

15. Restart the VMware vCenter Update Manager Service and pray it starts.

16. If you’ve left any of your certificate files lying around the file system, other than in the VUM SSL directory, back them up to a secure location and then delete them. You need to protect the private keys, so don’t leave them lying around just anywhere.

17. Launch the vSphere Client and connect to vCenter. Verify that the VUM tab appears and that you can access VUM without any errors. It would also be smart to check the vCenter Service Status from the vCenter home page to ensure everything looks healthy.
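If you want to sanity-check the result from PowerShell, here is a rough sketch, assuming the default install path and that the Windows OpenSSL binaries from step 1 are in your PATH:

# Confirm the VUM service came back up
Get-Service -DisplayName "VMware vCenter Update Manager*"

# Inspect the certificate VUM is now using (subject, issuer, validity dates)
& openssl x509 -in "C:\Program Files (x86)\VMware\Infrastructure\Update Manager\SSL\rui.crt" -noout -subject -issuer -dates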

Authentication Denied: Unable to logon to an ESXi 4.1 console

Over the last couple of days I’ve been performing ESXi 4.0 to 4.1 build 320137 upgrades. During the upgrade process today one of my servers had a hiccup and its web services were not responding. After a couple of reboots with the web services still unresponsive, I iLO’d in (it’s an HP server) so I could get to the ESXi DCUI (Direct Console User Interface), AKA the yellow screen. To my shock, when I pressed F2 I got:

Authentication Denied: Direct console access has been disabled by the administrator for contoso.net.

At first I thought OK, maybe someone enabled lockdown mode and I didn’t know it. Checked a few things, nope, no lockdown mode. After more poking and prodding, I found the root cause of the problem. But it’s a mystery to me why this is occurring. The only consistent theme is that the server was on ESXi 4.0 and they were upgraded to ESXi 4.1 build 320137.

So what was the problem? New to ESXi 4.1 is the Security Profile configuration screen, where you can stop and start several low-level system services. On 25% of my upgraded boxes the “Direct Console UI” service was in the stopped state.

The solution is to reconfigure the service to Start and stop with host, which is the ESXi 4.1 default configuration. After I started the service, voilà, DCUI access was restored!

Since this happened on several boxes, but not all, I’ll chalk it up to another VMware bug. So in my upgrade procedures I’m adding a check to verify the service status before we bless an upgrade as being complete.
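A rough PowerCLI sketch of that check, which finds hosts where the DCUI service is stopped and sets it back to the default policy (test it before trusting it in your own upgrade runbook):

# Find hosts where the Direct Console UI service is not running,
# set the policy back to "start and stop with host", and start it
Get-VMHost | Get-VMHostService |
    Where-Object { $_.Key -eq "DCUI" -and -not $_.Running } |
    ForEach-Object {
        Set-VMHostService -HostService $_ -Policy "on"
        Start-VMHostService -HostService $_
    }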

Finally…strong ESX 4.1 root passwords. SHA512 baby!

Historically VMware has not used the strongest hashing algorithms to store root passwords on ESXi or ESX hosts. And to make matters worse, ESX/i 4.1 had a major security hole that was open for over four months, which you can read about here. The short story is that root passwords in ESX/i 4.1 were only validated to the first 8 characters. The screw-up on VMware’s part was using only DES-based (not even 3DES) password hashing. DES is a joke, and even 3DES is not considered secure. One workaround for this major hole was to use MD5 hashing, but even that is not considered secure.

A couple of days ago VMware published a KB article on how to increase the password hashing strength by using SHA512. SHA512 is considered secure and is very well respected, so I applaud VMware for publishing an article on how to enable this feature. I am still shocked it took VMware four months to publish a patch to plug the 8-character password hole.

I can only hope in 4.1 U1 and future releases that SHA512 is used by default. Having to hack system files to increase security is not my idea of a fun time.

3PAR vSphere VAAI "XCOPY" Test Results: More efficient but not faster

In my previous blog I discussed how the VMware 4.1 VAAI ‘write same’ implementation in a 3PAR T400 showed a dramatic 20x increase in performance, creating an eager zeroed thick VMDK at 10GB/sec (yes, GigaBYTES a second). The other major SCSI primitive that VAAI 4.1 leverages is XCOPY (SCSI opcode 0x83). What this does is basically offload the copy process of a VMDK to the array, so all of the data does not need to traverse your SAN or bog down your ESX host.

In this test I used the same configuration as described in my previous blog entry. I decided to perform a Storage vMotion of a large VM. This VM had three VMDKs attached, to simulate real-world usage. The first VMDK was 60GB and had about 5GB of operating system data on it. The next two VMDKs were eager zeroed thick disks, 70GB and 240GB, with no user data written to them. Total VMDK size was 370GB. I initiated a Storage vMotion from vCenter 4.1 to start the copy process.
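If you want to reproduce a run like this from PowerCLI rather than the vSphere Client, here is a rough sketch (the VM and datastore names are placeholders):

# Time a Storage vMotion of the test VM to another datastore
$vm   = Get-VM "VAAI-TestVM"
$dest = Get-Datastore "3PAR-Datastore-02"
Measure-Command { Move-VM -VM $vm -Datastore $dest } | Select-Object TotalMinutes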

“XCOPY” without VAAI:
Host CPU Utilization: ~3916 MHz
Read/write latency: 3-4ms
3PAR host facing port aggregate throughput: 616MB/sec
3PAR back-end disk port aggregate throughput: ~0MB/sec
Time to complete: 20 minutes

These results are very reasonable, and quite expected. Since VAAI was not used, the ESXi host has to read 370GB of data, then turn it right around and write 370GB of data to the disk. So in reality over 740GB of data traversed the SAN during the 20 minute storage vMotion process. Since the VMDKs only contained 1% written data, back-end disk throughput was nearly zero because of the ASIC zero detection feature. If the VMDKs were fully populated then the back-end ports would be going crazy and the copy would be slower since all I/Os would be hitting physical disks.
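For the with/without comparison, the XCOPY offload can be flipped per host with the DataMover.HardwareAcceleratedMove advanced setting. A hedged sketch using the older Set-VMHostAdvancedConfiguration cmdlet (the host name is a placeholder; newer PowerCLI releases use Get/Set-AdvancedSetting instead):

$esx = Get-VMHost "esx01.contoso.net"

# 0 = hardware-accelerated move (XCOPY) off, 1 = on
Set-VMHostAdvancedConfiguration -VMHost $esx -Name "DataMover.HardwareAcceleratedMove" -Value 0
Set-VMHostAdvancedConfiguration -VMHost $esx -Name "DataMover.HardwareAcceleratedMove" -Value 1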

“XCOPY” with VAAI:
Host CPU Utilization: ~3674 MHz
Read/write latency: 3-4ms
3PAR host facing port aggregate throughput: ~0MB/sec
3PAR back-end disk port aggregate throughput: ~0MB/sec
Time to complete: 20 minutes

Now I’m pretty surprised at these results, and not in a positive fashion. First, it’s good to see nearly zero disk I/O on the host facing ports and the back-end ports. This confirms that VAAI commands were in fact being used, and that the VMDKs were nearly all zeros. However, what has me very puzzled is that the copy process took exactly the same amount of time to complete and used nearly the same amount of host CPU. I repeated the tests several times, and each time I got the exact same result: 20 minutes.

Since there’s virtually no physical disk I/O going on here, I would expect a dramatic increase in storage vMotion performance. Because these results are very surprising and unexpected, I contacted 3PAR and I will see if engineering can shed some light on this situation. Other vendors claim a 25% increase in storage vMotion performance when using VAAI. Clearly 0% is less than 25%. When I get clarification on what’s going on here, I will be sure to follow up.

Update: 3PAR got back to me about my observations and confirmed what I’m seeing is correct. With firmware 2.3.1 MU2, XCOPY doesn’t reduce the observed wall clock time to “copy” empty space in a thinly provisioned volume. But as I noted, XCOPY does leverage the zero detection feature of their ASIC, so there’s very little back-end I/O occurring for non-allocated chunklets.

So yes the current VAAI implementation reduces the I/O strain on the SAN and disk array, but doesn’t reduce the observed time to move the empty chunklets. In my environment the I/O loads are pretty darn low, so I’d prefer the best of both worlds…efficient copies and reduced observed copy times. If 3PAR could make the same dramatic performance gains of the ‘write same’ command for the XCOPY command, that would really be a big win for customers.

3PAR vSphere VAAI "Write Same" Test Results: 20x performance boost

So in my previous blog entry I wrote about how I upgraded a 3PAR T400 to support the new VMware vSphere 4.1 VAAI extensions. I did some quick tests just to confirm the array was responding to the three new SCSI primitives, and all was a go. But to better quantify the effects of VAAI I wanted to perform more controlled tests and share the results.

Environment
First let me give you a top-level view of the test environment. The host is an 8-core HP ProLiant blade server with a dual-port 8Gb HBA, dual 8Gb SAN switches, and two quad-port 4Gb FC host facing cards in the 3PAR (one per controller). The ESXi server was only zoned to two ports on each of the 4Gb 3PAR cards, for a total of four paths. The ESXi 4.1 build 320092 server was configured with native round robin multi-pathing. The presented LUNs were 2TB in size, had zero detect enabled, and were formatted with VMFS 3.46 using an 8MB block size.

Testing Methodology
My testing goal was to exercise the XCOPY (SCSI opcode 0x83) and write same (SCSI opcode 0x93) primitives. To test the write same extension, I wanted to create large eager zeroed disks, which forces ESXi to write zeros to the entire VMDK. Normally this would take a lot of SAN bandwidth and time to transfer all of those zeros. Unfortunately I can’t provide screenshots because the system is in production, so you will have to take my word for the results.
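A rough PowerCLI sketch of one such run: time the creation of an eager-zeroed thick disk, toggling the DataMover.HardwareAcceleratedInit setting (the block-zero/write-same offload) between runs. The host name, VM name, and disk size are placeholders:

# 0 = write same offload off, 1 = on
Set-VMHostAdvancedConfiguration -VMHost (Get-VMHost "esx01.contoso.net") -Name "DataMover.HardwareAcceleratedInit" -Value 1

# Time the eager-zeroed thick VMDK creation
$vm = Get-VM "VAAI-TestVM"
Measure-Command {
    New-HardDisk -VM $vm -CapacityGB 70 -StorageFormat EagerZeroedThick
} | Select-Object TotalSeconds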

“Write Same” Without VAAI:
70GB VMDK 2 minutes 20 seconds (500MB/sec)
240GB VMDK 8 minutes 1 second (498MB/sec)
1TB VMDK 33 minutes 10 seconds (502MB/sec)

Without VAAI the ESXi 4.1 host is sending a total of 500MB/sec of data through the SAN and into the four ports on the 3PAR. Because the T400 is an active/active concurrent controller design, both controllers can own the same LUN and distribute the I/O load. In the 3PAR IMC (InForm Management Console) I monitored the host ports and all four were equally loaded at around 125MB/sec.

This shows that round robin was functioning, and it highlights the very well balanced design of the T400. But this configuration is what everyone has been using for the last 10 years... nothing exciting here, unless you like weighing down your SAN and disk array with processing zeros. Boorrrringgg!!

Now what is interesting, and what very few arrays support, is a ‘zero detect’ feature where the array is smart enough on thin provisioned LUNs to not write data if the entire block is all zeros. So in the 3PAR IMC I was monitoring the back-end disk facing ports and, sure enough, virtually zero I/O. This means the controllers were accepting 500MB/sec of incoming zeros and writing practically nothing to disk. Pretty cool!

“Write Same” With VAAI: 20x Improvement
70GB VMDK 7 seconds (10GB/sec) 
240GB VMDK 24 seconds (10GB/sec)
1TB VMDK 1 minute 23 seconds (12GB/sec)

Now here’s where your juices might start flowing if you are a storage and VMware geek at heart. When performing the exact same VMDK create functions on the same host using the same LUNs, performance was increased 20x!! Again I monitored the host facing ports on the 3PAR, and this time I/O was virtually zero, and thanks to zero detection within the array, almost zero disk I/O. Talk about a major performance increase. Instead of waiting over 30 minutes to create a 1TB VMDK, you can create one in less than 90 seconds and place no load on your SAN or disk array. Most other vendors are only claiming up to 10x boost, so I was pretty shocked to see a consistent 20x increase in performance.

In conclusion I satisfied myself that 3PAR’s implementation of the “write same” command coupled with their ASIC based zero detection feature drastically increases creation performance of eager zeroed VMDK files. Next up will be my analysis of the XCOPY command, which has some interesting results that surprised me.

Update: I saw on the vStorage blog that they did a similar comparison on the HP P4000 G2 iSCSI array. Of course the array configuration can dramatically affect performance, so this is not an apples-to-apples comparison. But nevertheless, I think the raw data is interesting to look at. For the P4000 the VAAI performance increase was only 4.4x, not the 20x of the 3PAR. In addition, VMDK creation throughput is drastically slower on the P4000.

Without VAAI:
T400 500MB/sec vs P4000 104MB/sec (T400 4.8x faster)

With VAAI:
T400 10GB/sec vs P4000 458MB/sec (T400 22x faster)
