Archives for September 2011

Unattended vSphere Utility Installs

Sometimes you may want to install the various vSphere utilities (PowerCLI, vSphere CLI, vSphere Client, and VUM PowerCLI) to non-default directories, or use a silent/unattended installation to automate the process.

Below are four batch files you can run, each of which installs the respective tool to the custom installation directory specified. What’s cool about these batch files is that you can double-click one from any path and it will CD to the location of the installer and run it. If you are using Windows Server 2008/R2 with UAC, you will be prompted to elevate for the installation, but otherwise no interaction is required.

The VUM PowerCLI extensions can’t be configured for a custom installation directory, so they will simply install silently to the default location. You could of course also combine all of the commands and install all of the tools with a single click, silently; a rough sketch of that combined batch file appears after the snippets below.

I also included a silent installation of OpenSSL, which can be handy for creating ESXi, vCenter and VUM certificates.
----
cd /d %0\..
start /wait VMware-PowerCLI-5.0.0-435426.exe /q /s /w /L1033 /V" /qr INSTALLDIR=\"D:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI\""
----
cd /d %0\..
start /wait VMware-viclient-all-5.0.0-455964.exe /q /s /w /L1033 /v" /qr INSTALLDIR=\"D:\Program Files (x86)\VMware\Infrastructure\""
----
cd /d %0\..
start /wait VMware-VSphere-CLI-5.0.0-422456.exe /s /v"/qb INSTALLDIR=\"D:\Program Files (x86)\VMware\VMware vSphere CLI\""
----
cd /d %0\..
start /wait VMware-UpdateManager-Pscli-5.0.0-432001.exe /q /s /w /L1033 /V" /qr"
----
cd /d %0\..
Vcredist_x64.exe /q /norestart
Win64OpenSSL-1_0_0d.exe /verysilent /sp-
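
Since the post mentions combining everything into one click, here is a minimal sketch of a single combined batch file. It simply chains the same commands shown above and assumes all of the installers sit in the same folder as the batch file (adjust versions and paths as needed):

----
cd /d %0\..
rem Run each installer silently; start /wait blocks until each one finishes before launching the next
start /wait VMware-PowerCLI-5.0.0-435426.exe /q /s /w /L1033 /V" /qr INSTALLDIR=\"D:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI\""
start /wait VMware-viclient-all-5.0.0-455964.exe /q /s /w /L1033 /v" /qr INSTALLDIR=\"D:\Program Files (x86)\VMware\Infrastructure\""
start /wait VMware-VSphere-CLI-5.0.0-422456.exe /s /v"/qb INSTALLDIR=\"D:\Program Files (x86)\VMware\VMware vSphere CLI\""
start /wait VMware-UpdateManager-Pscli-5.0.0-432001.exe /q /s /w /L1033 /V" /qr"
Vcredist_x64.exe /q /norestart
Win64OpenSSL-1_0_0d.exe /verysilent /sp-
----

And since OpenSSL is mentioned above for certificate work, here is a rough sketch of generating a key and certificate signing request once it is installed. The install path, the rui.key/rui.csr/rui.crt file names, and the subject values are placeholder assumptions; substitute whatever your environment and CA require:

----
cd /d "C:\OpenSSL-Win64\bin"
rem Generate a 2048-bit key and CSR using the config file bundled with the installer
openssl req -new -nodes -newkey rsa:2048 -config openssl.cfg -keyout rui.key -out rui.csr -subj "/C=US/ST=State/L=City/O=Org/CN=esxi01.example.com"
rem Submit rui.csr to your CA, or self-sign for lab/testing purposes:
openssl x509 -req -days 365 -in rui.csr -signkey rui.key -out rui.crt
----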

vSphere 5.0 Documentation Links

For those of you who want easy access to vSphere 5.0 documentation, I stumbled upon a location that has well-organized PDF and e-book resources. No more searching all over VMware’s site for a specific piece of documentation. You can check out the link here.

VMware documentation

VSP3116: Resource Management Deep Dive

I finally managed to get into a session by one of the VMware rockstars, Frank Denneman, who has co-authored several books that I highly recommend. Frank stated this topic alone could fill a four-day class, and this was just an hour, so it went quite quickly and only scratched the surface of the topic at hand. Nonetheless, it was informative.

Highlights:

  • Resource entitlement
    • Dynamic: CPU and memory
    • Static: Shares, reservations, limits
  • Short term contention
    • Load correlation – Where two servers ramp up/down together (e.g. web and SQL)
    • Load synchronicity – All servers hammered at once (user logon storm at 8am)
    • Brownouts – System-wide virus scanning at the same time
  • Long term contention
    • Ultra high consolidation ratios
    • Hardware limits exceeded
    • Massive overcommitment
  • VM-Level shares
    • Low (1), Normal (2), High (4)
  • VM CPU Reservation
    • Guarantees resources
    • Influences admission control
    • A CPU reservation does not consume resources when the VM doesn’t need processing time (fully refundable)
    • CPU reservation does not equate to priority
  • VM Memory Reservation
    • Guarantees a level of resources
    • Influences admission control
    • Non-refundable. Once allocated it remains allocated.
    • Will reduce consolidation ratios
  • VM Limits
    • Applies even when there are enough resources
    • Often more harmful than helpful (don’t use them often unless you like a hole in your foot)
    • Very likely to lead to negative impacts, since the guest OS is not aware of the limit
    • Any extra memory the guest OS wants beyond the limit comes from swap (after TPS and memory compression), which is very slow.
    • The CPU is de-scheduled even if there are resources available and the VM wants them
  • DRS treats a cluster as one large host
  • Resource pools – Do not place VMs at the same level in the vCenter hierarchy as a resource pool. Always put VMs inside the appropriate resource pool.
  • Simple method to estimate resource pool shares
    • Step 1: Match the defined SLA to the pools (e.g. 70 to production, 20 to test, 10 to dev)
    • Step 2: Make up shares per VM from those values (e.g. 70/Prod, 20/test, 10/dev)
    • Step 3: Based on the number of vCPUs per pool, multiply shares per VM * vCPUs
      • E.g. 10 vCPUs for Prod = 700 shares; 5 vCPUs for test = 100 shares; 20 vCPUs for dev = 200 shares.
    • Schedule a task to do these calculations and set the shares per pool on a nightly basis. As you add VMs and change vCPUs, new calculations are needed. Check out Frank’s blog for an example script (a trivial sketch of the arithmetic also follows this list).
  • When you configure pool limits, remember that each VM has overhead, which is roughly 5-10% of its total memory. VM overhead is lower in ESXi 5.0 than in previous versions.
  • Use resource pool limits with care as they can do more harm than good.
  • DRS affinity rules
    • Must run on – Cannot be violated under any circumstances. You cannot even power on the VM if it’s on the wrong host. Always honored, even through HA events like host failures.
    • Should run on – Can be violated as needed, such as during HA events.
    • NOTE: You must disable Must Run On or Should Run On rules BEFORE you disable DRS, as those settings are honored even when DRS is disabled and you can’t change the rules when DRS is disabled.
  • Distributed Power Manager (DPM)
    • Frank did a poll of the room and hardly anyone is using this feature.
    • vCenter looks at the last 40 minutes and the host must be completely idle to be suspended.
    • If vCenter senses a ramp-up in resource requirements in the last 5 minutes, it will take the server out of standby.
    • DPM will NOT degrade system performance to save power
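
To make the resource pool share math above concrete, here is a trivial sketch of the arithmetic as a batch file. This is not Frank’s script (his pulls live vCPU counts from vCenter via PowerCLI); the vCPU counts below are just the example numbers from the session plugged in by hand:

@echo off
rem Shares-per-VM values taken from the 70/20/10 SLA split above
set PROD_PER_VM=70
set TEST_PER_VM=20
set DEV_PER_VM=10
rem Hypothetical vCPU counts per pool; replace with your real inventory numbers
set PROD_VCPUS=10
set TEST_VCPUS=5
set DEV_VCPUS=20
rem Pool shares = shares per VM * total vCPUs in the pool
set /a PROD_SHARES=PROD_PER_VM*PROD_VCPUS
set /a TEST_SHARES=TEST_PER_VM*TEST_VCPUS
set /a DEV_SHARES=DEV_PER_VM*DEV_VCPUS
echo Production pool shares: %PROD_SHARES%
echo Test pool shares: %TEST_SHARES%
echo Dev pool shares: %DEV_SHARES%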

Resource pools, shares, limits and reservations can be quite complicated. I strongly recommend checking out Frank’s books for a lot more details.

VSP3111: Nexus 1000v Architecture, Deployment, Management

This session focused on the Cisco distributed virtual switch, the Nexus 1000v. The speaker was very knowledgeable and a great presenter. Lots of great details, but he was going so fast that I didn’t catch all of them. You can check out his blog at jasonnash.com.

Highlights:

  • The VSM is a virtual supervisor module, which acts as the brains of the switch just like a physical switch.
  • The VEM is a virtual ethernet module, which is in essence, a virtual line card that resides on each ESXi host.
  • VSM to VEM communications are critical and you have various deployment options
    • Layer 2 only: Uses two to three VLANs; this is the default option and the most commonly deployed architecture.
    • Layer 3: Utilizes UDP communications over port 4785, so it can be routed
  • When in layer 2 mode you need to configure the control, management and packet networks
    • Management: The endpoint that you SSH into to manage the VSM; it also maintains contact with vCenter. Needs to be routable.
    • Control: VSM to VEM communications (This is where most problems occur.)
    • Packet: Used for CDP and ERSPAN traffic
  • Nexus 1000v deployment best practices
    • Locate each VSM on different datastores
    • You CAN run vCenter on a host that utilizes the N1K DVS
    • ALWAYS, ALWAYS run the very latest code. Latest as of Sept 1, 2011 is 1.4a, which does work with vSphere 5.0.
    • Don’t clone or snapshot the VSM, but DO use regular Cisco config backup commands
    • Always, always deploy VSMs in pairs (no extra licensing cost, so you are dumb not to do it).
  • Port profile types
    • Ethernet profile: Used for physical NICs, which act as uplinks out of the server. These use uplink profiles.
    • vEthernet profile: Exposed as port groups in vCenter; this is the most common type of administrative change made in the VSM.
  • Uplink teaming
    • N1Kv supports LACP, but the physical switch must support it as well.
    • vPC-HM – Requires hardware support from the switch and is more complex to troubleshoot
    • vPC-HM w/ MAC pinning – Most common configuration and easy to setup/troubleshoot.
  • On Cisco switches, enable BPDU filter and BPDU guard on physical switch ports that connect to N1K uplinks.
  • Configure the VSM management, control, packet, Fault Tolerance, and vMotion VLANs as “system” VLANs in the N1K so they are available at ESXi host boot time and don’t wait on the VSM to come up.
  • For excellent troubleshooting information check out Cisco DOC 26204.
  • You can also check out the N1KV v1.4a troubleshooting guide here.
  • The network team may prefer to use the Nexus 1010, which is a hardware appliance that runs the VSMs. This removes the VSM from the ESXi hosts, and could be better for availability, plus the network guys can use a serial cable into the 1010. You would deploy 1010s in pairs, and they have bundles that really bring down the price.
  • You can deploy multiple VSMs on the same VLANs, but just be sure to assign each VSM pair a different “DOMAIN” ID.

Not mentioned in this session are additional Cisco products that layer on top of the 1000v, such as the forthcoming Virtual ASA (firewall), a virtual NAM, and the virtual secure gateway. The ASA is used for edge protection while the VSG would be used for internal VM protection.
