Archives for 2011

Get your free 2 socket Veeam Backup 6.0 License key

Veeam is once again running a Christmas special where you can get a free NFR (not for resale) Veeam Backup 6.0 license key for 2 sockets which is good for 1 year. You can select VMware, Hyper-V, or both. All you need to do is fill out this form and wait for the email with the license key.

For home labs, or for just trying out Veeam without the more limited time-restricted trial versions, this is a great opportunity. Even if you don't think you will use the key, I'd grab one anyway, since you never know what may come up over the next year where it could come in handy.

Veeam is targeting the offer at certified VMware professionals such as VCP, but they don’t require any identifying information.

Schedule VMware UMDS Downloads with Windows Task Scheduler

VMware UMDS (Update Manager Download Service) is a product which ships with vCenter that allows you to download patches for VUM, then use them in air-gapped networks where VUM can't directly download updates from the internet. UMDS 5.0 has some nice enhancements over 4.x, which help limit the number of unnecessary patches it downloads.

If you work in an environment where you need to utilize VMware UMDS you probably want the download process to be as automated as possible. One of the missing features in UMDS is a scheduler, so that’s where you can leverage the Windows Task Scheduler to make life easier. I’m assuming you are using Windows Server 2008 R2 and have already installed UMDS on a server. Creating the task is pretty quick and simple.

1. Launch the Task Scheduler. To keep the tasks more organized, I created a folder at the root level under Microsoft called VMware.

2. Right click on VMware and select Create Task. On the General tab you configure the basic task information. I recommend you change the task to use a privileged account, such as SYSTEM, and check the box to run the task with highest privileges. You could configure a service account with only the required rights and use that instead, if running the task with SYSTEM rights doesn't sit well with you.

3. Click on the Triggers tab and add a new trigger. I like to run the task once a day, at 4AM.

4. On the Actions tab create a new action and configure it as shown below. Change the path as needed to the location of the vmware-umds.exe file. Be sure to add the argument of -D or nothing will happen when the task runs.

5. On the Settings tab I changed the parameter that limits how long the task can run, but this is optional, and you can use any value you want.

6. At this point the task is configured and you can click OK. To test it out you can right click on the task and Run it. Monitor the download directory you configured and make sure it is being populated. For ESXi 4.1.0 and ESXi 5.0 patches, the UMDS repository at the time of this article was about 1.5GB.

Once the download task completes, you then need to “export” the repository into a directory, copy to removable media, and upload to the air-gapped instance of VUM. You can find all of the gory details in the Installing and Administering VMware vSphere Update Manager 5.0 Guide.

XenDesktop 5.5 Resource Calculator

Of course, after a weekend of creating my own spreadsheet to calculate my storage requirements for XenDesktop, Andre Leibovici created a XenDesktop version of his View calculator. This is a great resource for sizing your storage, calculating IOPS, number of datastores, and other details. A sample of the fields is below.

If you are using XenDesktop MCS, this is a must-use calculator. He says a PVS version is coming as well, so if you aren't an MCS user then check back with his site for an update.

Tips for measuring Windows 7 VDI IO Requirements

When sizing your storage subsystem for a VDI implementation, it's critical to understand how your VMs will behave and the resulting IO load. Miscalculate and you will suffer poor performance and angry users. Oversize your array and you will waste money. However, measuring your VM performance may not be as straightforward as you think.

A few months ago I posted a script here that lets you dump basic IO performance stats for a VM on vSphere 4 and 5. But as you will see, the applications you load into your image, and when you measure the performance, have a significant impact on the collected metrics.

For my first round of tests I wanted to focus on the boot performance of Windows 7 64-bit. Booting can be one of the most taxing events (aside from full virus scans) on your VDI storage subsystem. Even if you stagger your VM boots over a few hours as a normal practice, what if you have a power outage or significant hardware failure and you need to rapidly power on hundreds of VMs? Will your storage array melt under the load? Will Windows boot so slowly that it will blue screen? (Hint: Windows 7 VMs should boot in under 5 minutes to avoid problems.) SLAs play an important role here, and you need to be mindful of them and verify they can be met.
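To put a rough number on the boot storm worry, here is a back-of-envelope sketch. Every figure in it is a hypothetical assumption, not a result from my tests; the point is just that sustained IOPS during a boot storm is roughly total boot IOs divided by the recovery window.

```shell
# Boot-storm sizing sketch; all numbers are assumptions you should
# replace with measurements from your own environment.
vms=500             # desktops that must power on at once (assumed)
ios_per_boot=15000  # boot IOs per VM (assumed; measure your image)
window_sec=1800     # 30-minute recovery window (assumed SLA)

# Sustained IOPS = total boot IOs / window length (integer math)
aggregate_iops=$(( vms * ios_per_boot / window_sec ))
echo "Boot storm needs roughly ${aggregate_iops} sustained IOPS"
```

Even this crude math shows why sizing only for steady-state IOPS is risky.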

The test environment is pretty basic and includes vSphere ESXi 5.0, XenDesktop 5.5, and Windows 7 64-bit. The IO measurements were performed over a five minute period after powering on the VM, and metrics were collected via my script. The measurements are only for boot IOs, as no user logged into the VM during the collection process. Tests were performed four times for each scenario and the results averaged. Five scenarios were tested:

  • Base Image: Windows 7 64-bit, Office 2010, VMware tools, joined to a domain
  • VDA Only: XenDesktop 5.5 Virtual Desktop Agent
  • VDA/Symantec: Citrix VDA and Symantec End Point Protection 12.1
  • Optimized: Quest vWorkspace Desktop Optimizer applied with all settings enabled except 15, 26, 27, 30; most VMware Windows 7 optimizations applied.
  • XenDesktop VM: VM created with XenDesktop 5.5 MCS from the optimized template

Drum roll for the results please!

As you can see in the table above, the base Win7 image required an average of nearly 15,000 IOs to boot. 15% of those IOs were writes, while the remainder were reads. Simply installing the XenDesktop VDA decreased the number of write IOs, but increased overall IOs by 17% over the base image. Next up is installing Symantec 12.1, and wow, look at those numbers jump: a 212% increase in IOs over the base image. With the Quest and VMware recommended optimizations applied, IOs dropped a bit, but nothing substantial.
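Since "percent increase over the base image" is easy to misread, here is the arithmetic spelled out, assuming the rounded base figure of roughly 15,000 boot IOs:

```shell
# Interpreting the deltas: a 17% increase means base * 1.17, while a
# 212% increase means base * (1 + 2.12). The base is an approximation.
base=15000

vda=$(( base * 117 / 100 ))   # +17% over base
sep=$(( base * 312 / 100 ))   # +212% over base

echo "VDA only:     about ${vda} boot IOs"
echo "VDA+Symantec: about ${sep} boot IOs"
```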

What I found to be very interesting is what happened to the IOs when the optimized VM template was cloned by XenDesktop MCS and booted as part of a desktop pool. Zero changes were made to the VM, so the only difference is how the VM behaves when under the control of the Citrix Desktop Studio. Approximately 8,000 more IOs are required during the boot process, and a lot more writes are taking place. I would not have guessed that large of a delta, so this is an interesting find. The read/write ratio also drops to approximately 80/20.

So what does all of this mean? First, every environment is unique and you should not use my results, or anyone else's, to estimate the IO load for your environment. Second, take your metrics from a provisioned VDI VM (VMware View, XenDesktop, etc.) and don't just take measurements from your VM template. Third, booting a VM is very IO intensive, and if you only size your storage for steady-state IOPS, then boot storms will cause you major headaches.

Depending on the script/method you use to gather the VM IOPS stats, VMware may not always return the read/write stats in the same order, so you may see inverted data. From my observation this happens on a per-VM basis, and persists through reboots and power on/off cycles. So if your data looks odd, question it; don't assume everything is legit.

Unattended vSphere Utility Installs

Sometimes you may want to install the various vSphere utilities (PowerCLI, vSphere CLI, vSphere Client, and VUM PowerCLI) to non-default directories, or use a silent/unattended installation to automate the process.

Below are four batch files that you can run which will install the respective tool to the custom installation directory specified. What's cool about the batch files is that you can double-click one from any path and it will CD to the location of the installer and run it. If you are using Windows Server 2008/R2 with UAC, you will be prompted to elevate to do the installation, but otherwise there is no interaction required.

The VUM PowerCLI extensions can’t be configured for a custom installation directory, so it will just silently install to the default location. You could of course also combine all of the commands and install all of the tools with a single click, silently.

I also included a silent installation of OpenSSL, which can be handy for creating ESXi, vCenter and VUM certificates.
:: Install vSphere PowerCLI to a custom directory
cd /d "%~dp0"
start /wait VMware-PowerCLI-5.0.0-435426.exe /q /s /w /L1033 /V" /qr INSTALLDIR=\"D:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI\""

:: Install the vSphere Client to a custom directory
cd /d "%~dp0"
start /wait VMware-viclient-all-5.0.0-455964.exe /q /s /w /L1033 /v" /qr INSTALLDIR=\"D:\Program Files (x86)\VMware\Infrastructure\""

:: Install the vSphere CLI to a custom directory
cd /d "%~dp0"
start /wait VMware-VSphere-CLI-5.0.0-422456.exe /s /v"/qb INSTALLDIR=\"D:\Program Files (x86)\VMware\VMware vSphere CLI\""

:: Install the VUM PowerCLI extensions (default location only)
cd /d "%~dp0"
start /wait VMware-UpdateManager-Pscli-5.0.0-432001.exe /q /s /w /L1033 /V" /qr"

:: Install the VC++ runtime prerequisite, then OpenSSL, silently
cd /d "%~dp0"
start /wait Vcredist_x64.exe /q /norestart
Win64OpenSSL-1_0_0d.exe /verysilent /sp-

vSphere 5.0 Documentation Links

For those of you that want easy access to vSphere 5.0 documentation, I stumbled upon a location that has well organized PDF and e-book resources. No more need to search all over VMware’s site for a specific piece of documentation. You can check out the link here.

VMware documentation

VSP3116: Resource Management Deep Dive

I finally managed to get into a session by one of the VMware rockstars, Frank Denneman, who has co-authored several books that I highly recommend. Frank stated this topic could be a four day class alone, and this was just an hour, so it went quite quickly and just scratched the surface of the topic at hand. But nonetheless it was informative.


  • Resource entitlement
    • Dynamic: CPU and memory
    • Static: Shares, reservations, limits
  • Short term contention
    • Load correlation – Where two servers ramp up/down together (e.g. web and SQL)
    • Load synchronicity – All servers hammered at once (user logon storm at 8am)
    • Brown outs – System wide virus scanning at the same time
  • Long term contention
    • Ultra high consolidation ratios
    • Hardware limits exceeded
    • Massive overcommitment
  • VM-Level shares
    • Low (1), Normal (2), High (4)
  • VM CPU Reservation
    • Guarantees resources
    • Influences admission control
    • A CPU reservation does not consume resources when the VM doesn’t need processing time (fully refundable)
    • CPU reservation does not equate to priority
  • VM Memory Reservation
    • Guarantees a level of resources
    • Influences admission control
    • Non-refundable. Once allocated it remains allocated.
    • Will reduce consolidation ratios
  • VM Limits
    • Applies even when there are enough resources
    • Often more harmful than helpful (don’t use them often unless you like a hole in your foot)
    • Can very likely lead to negative impacts since the guest OS is not aware of the limits
    • Any extra memory the guest OS wants comes from swap (after TPS, memory compression), which is very slow.
    • De-schedules the CPU even if there are resources available and the VM wants them
  • DRS treats a cluster as one large host
  • Resource pools – Do not place VMs at the same level in the vCenter hierarchy as a resource pool. Always put VMs inside the appropriate resource pool.
  • Simple method to estimate resource pool shares
    • Step 1: Match defined SLA weights to pools (e.g. 70 to production, 20 to test, 10 to dev)
    • Step 2: Assign shares per VM from those weights (e.g. 70/prod, 20/test, 10/dev).
    • Step 3: Based on the number of vCPUs per pool, multiply shares per VM * vCPUs
      • E.g. 10 vCPUs for prod = 700 shares; 5 vCPUs for test = 100 shares; 20 vCPUs for dev = 200 shares.
    • Schedule a task to do these calculations and set the shares per pool on a nightly basis. As you add VMs and change vCPU counts, new calculations are needed. Check out Frank’s blog for an example script.
  • When you configure pool limits remember that each VM has overhead, which is between 5-10% of the total memory. Less VM overhead in ESXi 5.0 than previous versions.
  • Use resource pool limits with care as they can do more harm than good.
  • DRS affinity rules
    • Must run on – Cannot be violated under any circumstances. You cannot even power on the VM if it’s on the wrong host. Always honored, even through HA events like host failures.
    • Should run on – Can be violated as needed, such as during HA events.
    • NOTE: You must disable Must Run On or Should Run On rules BEFORE you disable DRS, as those settings are honored even when DRS is disabled and you can’t change the rules when DRS is disabled.
  • Distributed Power Manager (DPM)
    • Frank did a poll of the room and hardly anyone is using this feature.
    • vCenter looks at the last 40 minutes and the host must be completely idle to be suspended.
    • If vCenter senses a ramp up in resource requirements in the last 5 minutes it will take the server out of stand by.
    • DPM will NOT degrade system performance to save power
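The share-estimation steps in the notes above can be sketched as a tiny script. The SLA weights and vCPU counts come straight from the example; in practice you would pull live vCPU counts from vCenter (e.g. via PowerCLI), as Frank's nightly-task suggestion implies.

```shell
# Frank's quick method: shares for a pool = shares-per-VM * vCPUs in
# the pool. Weights and vCPU counts are from the example above.
prod_weight=70; prod_vcpus=10
test_weight=20; test_vcpus=5
dev_weight=10;  dev_vcpus=20

echo "Prod pool shares: $(( prod_weight * prod_vcpus ))"
echo "Test pool shares: $(( test_weight * test_vcpus ))"
echo "Dev pool shares:  $(( dev_weight * dev_vcpus ))"
```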

Resource pools, shares, limits and reservations can be quite complicated. I strongly recommend checking out Frank’s books for a lot more details.

VSP3111: Nexus 1000v Architecture, Deployment, Management

This session focused on the Cisco distributed virtual switch, the Nexus 1000v. The speaker was very knowledgeable and a great presenter. Lots of great details, but as fast as he was going I didn’t catch all of them. You can check out his blog for more details.


  • The VSM is a virtual supervisor module, which acts as the brains of the switch just like a physical switch.
  • The VEM is a virtual ethernet module, which is in essence, a virtual line card that resides on each ESXi host.
  • VSM to VEM communications are critical and you have various deployment options
    • Layer 2 only: Uses two to three VLANs; this is the default and the most commonly deployed architecture.
    • Layer 3: Utilizes UDP communications over port 4785, so it can be routed
  • When in layer 2 mode you need to configure the control, management and packet networks
    • Management: The endpoint you SSH into to manage the VSM; it also maintains contact with vCenter. Needs to be routable.
    • Control: VSM to VEM communications (This is where most problems occur.)
    • Packet: Used for CDP and ERSPAN traffic
  • Nexus 1000v deployment best practices
    • Locate each VSM on different datastores
    • You CAN run vCenter on a host that utilizes the N1K DVS
    • ALWAYS, ALWAYS run the very latest code. Latest as of Sept 1, 2011 is 1.4a, which does work with vSphere 5.0.
    • Don’t clone or snapshot the VSM, but DO use regular Cisco config backup commands
    • Always, always deploy VSMs in pairs (no extra licensing cost, so you are dumb not to do it).
  • Port profile types
    • Ethernet profile: Used for physical NICs and are used as uplinks out of the server. These use uplink profiles.
    • vEthernet profile: Exposed as port groups in vCenter and is the most common type of administrative change made in the VSM.
  • Uplink teaming
    • N1Kv supports LACP, but the physical switch must support it as well.
    • vPC-HM – Requires hardware support from the switch and more complex to troubleshoot
    • vPC-HM w/ MAC pinning – Most common configuration and easy to setup/troubleshoot.
  • On Cisco switches enable BPDU filter and BPDU guard on physical switch ports that connect to N1K uplinks.
  • Configure VSM management, control, packet, Fault Tolerance, vMotion as “system” VLANs in the N1K so they are available at ESXi host boot time and don’t wait on the VSM to come up.
  • For excellent troubleshooting information check out Cisco DOC 26204.
  • You can also check out the N1KV v1.4a troubleshooting guide here.
  • The network team may prefer to use the Nexus 1010, which is a hardware appliance that runs the VSMs. This removes the VSM from the ESXi hosts, and could be better for availability, plus the network guys can use a serial cable into the 1010. You would deploy 1010s in pairs, and they have bundles that really bring down the price.
  • You can deploy multiple VSMs on the same VLANs, but just be sure to assign each VSM pair a different “DOMAIN” ID.

Not mentioned in this session are additional Cisco products that layer on top of the 1000v, such as the forthcoming Virtual ASA (firewall), a virtual NAM, and the virtual secure gateway. The ASA is used for edge protection while the VSG would be used for internal VM protection.

Did you know about Cisco UCS Express?

Today while I was walking around the vendor expo at VMworld 2011, I saw a very interesting product from Cisco. I was familiar with the datacenter UCS product, but a mini version caught my attention. Called UCS Express this is a micro ESXi server that slides directly into their ISR G2 chassis, which is a branch office router.

This is a micro server that currently supports dual cores and up to 8GB of RAM with two 1TB HDs. With RAID 1 you get about 500GB of usable space. It will run ESXi, so you could put very lightweight services like AD/DNS, DHCP or print server out at your branch office without deploying a full rack mount server. List price is around $4K for just the mini server, which isn’t bad. On the roadmap is a double-wide server which will support more cores and up to 48GB of RAM. That should be coming in 2012.

They will also be working on a centralized management console, so if you have a lot of these micro servers on your network you have a single pane of glass to manage them through.

If your business has remote offices with limited space, and you only need very minimal Windows services, then this could be a great option for you. I don’t think this product gets much press, as I had never seen it before.

BCO1946: Making vCenter highly available

vCenter is a business-critical service that can cause substantial chaos when it goes down, although VMs will happily keep running in its absence. Using VDI? Forget spinning up new VMs, or rebooting VMs. Using vCD? Forget doing anything while it’s down. HA? Yes, that will keep working (for one failure in 4.x, and indefinitely in 5.0). This session focused on various means to make vCenter highly available, since it has no built-in HA mechanism and needs a little help.

  • Linked mode does NOTHING for HA. A little data is replicated between the various instances, but if one of the vCenters goes offline you can’t manage the hosts it was servicing.
  • You really need to establish RTO and RPOs for vCenter so you know what to design for.
  • Other infrastructure like AD, DNS and SQL are critical and must be available. Also remember that network connectivity must be maintained.
  • The main options the speaker offered up as HA solutions are:
    • Traditional backup and restore: Does your backup solution need vCenter to do restores? This is a manual recovery process and doesn’t help with planned downtime like OS patching. You need a DR plan in place. See VMware KB 1023985 for some tips.
    • Cold Standby: Easy if SQL DB is local, RTO can be shorter, but a manual recovery process. Harder to do if physical.
    • Windows Clustering: Not supported by VMware as vCenter is not cluster aware.
    • VMware HA and APIs: Neverfail and Symantec offer clustering/HA products. These are incomplete, as their process monitoring is very basic and does not cover all scenarios. Better than nothing, but far from complete. Fairly automated, complements HA, and fairly easy.
    • VMware vCenter heartbeat:
      • Not the cheapest solution, but it is the most comprehensive
      • Active/passive configuration, share nothing model
      • Protects against OS, HW, application and network failures
      • Can be triggered on performance degradation of vCenter
      • Protects against planned and unplanned downtime
      • Protects vCenter plug-ins, SQL databases (even if on separate server) and VUM
      • Works across the LAN or WAN
      • Limited to a 1:1 topology

The bottom line is if you want an automated and comprehensive vCenter protection mechanism you are really left with one option, vCenter Heartbeat. I did a quick evaluation of it a couple of years ago before it supported 64-bit operating systems and the GUI/installation had a lot of room for improvement. I haven’t tried newer releases, so I hope it feels more like an integrated product than the Neverfail engine bolted on to some VMware customizations.
