VMworld 2015 Thoughts

Just a few minutes ago I landed in San Diego, back from another long week at VMworld. This one was particularly long, as on the Saturday before all the festivities there was a VCDX town hall meeting. There we got to meet Pat Gelsinger (CEO) and a number of VMware CTOs. It was a very interactive session, mostly Q&A with questions from the attending VCDXs. Unfortunately a mass closure of the 101 prevented or delayed some attendees coming down from SF. I thought the town hall was fun, and I hope it becomes a regular part of VMworld. A big thanks to the organizers!

Sunday was also quite busy. It started with breakfast at Mel’s. Apparently I wasn’t social enough, as I got seated at the bar and didn’t really talk to anyone, so I didn’t get much out of that. I did run into several friends right outside Mel’s as I was leaving. The afternoon consisted of three one-hour panel presentations at Opening Acts, featuring well-known people discussing topics such as storage, careers, and infrastructure. No blog posts about those, since transcribing real-time panels is beyond difficult unless you are a pro court reporter. You can find the session list and speakers here. If you want to check out the videos, find them on YouTube here.

Sunday night started ‘party central’ at VMworld. I spent most of the time at the Nutanix party, mingling with our plethora of VCDXs, execs, customers, and partners. Great time! I wanted to make it to #VMunderground party at 8PM, but sadly didn’t make it over there. Next year!

Monday morning started off bright and early at 8am with a session about the vCenter Server Appliance and why it should be your first choice. Next up was the general session. To be frank, I’ve been a bit disappointed by the general sessions lately. No ‘big bang’ or ‘wow’ announcements from VMware. vSphere 6.0 was old news (released in March 2015), and they didn’t talk much about vSphere v.Next. They did emphasize containers, hybrid clouds, cloud-native apps, and EUC. One of my favorite VMware execs, Kit Colbert, made a great appearance. I really respect that guy! You can find recordings of all the general sessions here.

The remainder of the day was spent attending sessions. You can find real-time blog posts of nearly all the sessions I attended by selecting the VMworld 2015 filter on the right side of my blog. Monday night was also party central. Frankly I’ve forgotten which parties I attended, but it was fun mingling with fellow geeks, getting recognized, and talking to blog fans.

Tuesday was more of the same, with an early general session, and a bunch more technical sessions. Of course more parties at night! Too much of a blur to recount where all I went. Nutanix also had a dinner that I attended, after which I headed to the vExpert/VCDX party sponsored by VMware.

Wednesday I was disappointed to miss the annual breakfast with Calvin from HP at Sears, due to a Nutanix session at 8AM. That session was a great panel discussion with three customers and Josh Odgers, talking about their real-world experiences with the Nutanix platform. I actually did transcribe most of that session, which you can find here. I also attended a session on vSphere certificates, which I found quite interesting. Certificates in vSphere 6.0 are quite different from prior versions, and for the better. Wednesday night I attended a customer dinner, then headed straight to the VMworld party.

Thursday was a little slower, as it was the final conference day. The general session was very interesting. Surprisingly, one of the “TED”-style talks involved dunking a live cockroach in cold water, snipping off its leg (while it was alive), and doing some demos with it. I hope PETA doesn’t get wind of that session. They left the cockroach in the cold water, so RIP #VMworld cockroach. Three very interesting talks, which you can check out on the video link above. The remainder of the day was technical sessions. Then I ran off to the airport, where I ran into Forbes Guthrie and we had a nice 90-minute chat.

The best part of VMworld is the community. Between vBrownBag, blog fans, meeting other bloggers, talking to book authors, networking, and meeting Nutanix customers, it’s a great event even if the announcements were a bit ‘ho hum’.


VMware was very cagey about what’s coming in the next major version of vSphere, due out in 2016. One step forward is built-in vCenter HA through an active/passive configuration. They will also eliminate the need for a 3rd party load balancer for PSCs and build in native PSC HA. All good news! vCenter still needs a major overhaul to make it a web-scale, active/active scale-out service, plus a full HTML5 interface (which they did commit to, but the timing sounded like a couple of major versions away).

They did leak some info about 6.0 U1, which is due out in Q3. It will have some nifty features like a GUI for the PSC and certificate management, the ability to move from an embedded PSC to an external PSC, and other usability enhancements. Finally, support for SQL 2012 AlwaysOn Availability Groups for vCenter!

I didn’t attend many sessions on other VMware products like the vRealize suite or NSX, so those upcoming versions may also have some nifty new features. Containers, hybrid cloud, and big data were also hot topics, but I didn’t have time to attend those sessions.

I still can’t get over the great community at VMworld, and meeting a lot of great people. I had a blast, and look forward to VMworld 2016 in Las Vegas.

VMworld 2015: vSphere 6.0 in the Real World

Session INF4712

Compatibility Maximums – Review the document and stay within the guidelines.

vCenter 6 Platform choice: Windows and VCSA support the same maximums and performance

  • Up to you, but look at things like Linux experience, licensing, existing skills, etc.

vCenter – New deployment architecture

  • PSC – SSO, License service, lookup service, vmdir, VMCA
  • vCenter – web client, inventory service, auto deploy, ESXi dump collector, syslog collector, etc.

PSC – Which architecture?

  • Embedded: Single site, no expansion past one vCenter
  • External: Supports up to 4 vCenters. HA mode is much more complex (3rd party load balancer)
  • Multiple sites – PSCs in each site, and replicate with each other.
  • Max size: 6 PSCs, 3 sites, 10 vCenters
  • Once a deployment model is chosen, you can’t change it in 6.0. U1 will allow changes.
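As a toy illustration, the deployment maximums above can be encoded in a small validation function. The function name, structure, and the "4 vCenters per external PSC" interpretation are mine, not a VMware tool:

```python
# Toy validator for the vSphere 6.0 PSC topology maximums noted above
# (6 PSCs, 3 sites, 10 vCenters, up to 4 vCenters per external PSC).
# Purely illustrative; not a VMware API.

def validate_topology(pscs: int, sites: int, vcenters: int) -> list:
    """Return a list of violations against the 6.0 maximums (empty = OK)."""
    problems = []
    if pscs > 6:
        problems.append(f"{pscs} PSCs exceeds the maximum of 6")
    if sites > 3:
        problems.append(f"{sites} sites exceeds the maximum of 3")
    if vcenters > 10:
        problems.append(f"{vcenters} vCenters exceeds the maximum of 10")
    if pscs and vcenters / pscs > 4:
        problems.append("more than 4 vCenters per external PSC")
    return problems

print(validate_topology(2, 1, 4))    # a supported layout: []
print(validate_topology(7, 4, 12))   # violates PSC, site, and vCenter limits
```

A deployment tool could run a check like this before letting you commit to a model, since (per the notes above) you can't change it afterward in 6.0.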

VMware Certificate Authority – Favorite feature.

  • VMCA removes a lot of the certificate complexity
  • No longer uses self-signed certificates
  • Built into the PSC
  • Should you use VMCA? Yes.
  • See KB 2111219 or my vSphere 6.0 install guide here

Standard vs. Distributed Switches

  • Always use VDS if you are licensed for it
  • Many of the past issues with VDS are now no longer an issue


Virtual SAN (VSAN)

  • Policy-based storage management
  • Not all vSphere hardware is supported. Carefully check HCLs.
  • Learning curve for operational procedures and recovery
  • May require new hardware purchase

SMP Fault Tolerance

  • Long awaited SMP support (up to 4 vCPU)
  • Basically a continuous vMotion that only stops when there’s a hardware failure
  • 10Gb NIC requirement
  • Max 4 FT VMs per host

Content Library

  • New to vSphere 6.0
  • Storage for templates, appliances, ISOs, scripts, etc.
  • Should you use it? Definitely


VMworld 2015: Certificates for Mere Mortals

Session INF4529

Note: Although not mentioned in this session, I have an SSL toolkit for vSphere 6.0 which makes the replacement process easier. Check out my vSphere 6.0 install guide here for all the details.

Certificate Lifecycle Management

  • VMCA: VMware certificate authority
  • VECS: VMware Endpoint Certificate store


VMCA

  • Dual operational modes: Root CA and Issuer CA
  • Root CA: Automated; can issue other certs; all solution and endpoint certificates are created by and chain to this root cert
  • Issuer CA: Replaces the default root CA certificate created during installation. Basically a subordinate CA to your enterprise CA.


VECS

  • Repository for certificates and private keys
  • Mandatory component
  • Key stores: machine SSL certs, trusted roots, CRLs, solution users, others (e.g. VVols)
  • Managed through vecs-cli
  • Does not manage SSO certificates

vSphere 6.0 Certificate Types

  • ESXi certificates – Auto-generated post-install. New modes in 6.0, one of which can use VMCA certs. Can be renewed in the web client.
  • Machine SSL certificates – Secure server-side SSL endpoints (HTTPS, LDAP, etc.). Each node has its own machine SSL certificate.
  • Solution user certificates – machine, vpxd, vpxd-extension, vsphere-webclient. Each encapsulates one or more vCenter services.
  • Single sign-on – Not stored in VECS; stored on the filesystem. This is the STS certificate. Renew/update via the GUI, not by filesystem replacement.

Certificate Replacement Options

  • VMCA as root – Easiest deployment option.
  • VMCA as enterprise CA subordinate – VMCA issues certs on behalf of your enterprise CA.
  • Custom CA – Use custom certs everywhere. Not recommended except for government/financial environments.
  • Hybrid – Replace the user-facing certs, then let VMCA manage the solution user and ESXi certs.

VMware vSphere 6.0 Certificate Manager

  • Available on both Windows and VCSA
  • Menu driven (GUI in 6.0 U1)

VMCA as Subordinate

  • RSA with 2048 bits
  • X.509 v3
  • SHA-256, 384 or 512
  • No wildcards in SubjectAltName
  • Cannot create subsidiary CAs of VMCA
  • Sync time across all nodes
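The requirements above are easy to sanity-check mechanically. Here is an illustrative sketch that validates a certificate-request description against them; the dict format and function are invented for illustration, not part of any VMware tooling:

```python
# Illustrative check of the VMCA-subordinate CSR requirements listed above
# (RSA 2048-bit key, X.509 v3, SHA-256/384/512 digest, no wildcard SANs).
# Operates on a plain dict describing the request, not a real CSR.

ALLOWED_DIGESTS = {"sha256", "sha384", "sha512"}

def check_csr(spec: dict) -> list:
    """Return a list of requirement violations (empty = acceptable)."""
    problems = []
    if spec.get("key_algorithm") != "RSA" or spec.get("key_bits", 0) < 2048:
        problems.append("key must be RSA with at least 2048 bits")
    if spec.get("x509_version") != 3:
        problems.append("certificate must be X.509 v3")
    if spec.get("digest", "").lower() not in ALLOWED_DIGESTS:
        problems.append("digest must be SHA-256, SHA-384, or SHA-512")
    if any(name.startswith("*") for name in spec.get("subject_alt_names", [])):
        problems.append("wildcards are not allowed in SubjectAltName")
    return problems

ok = {"key_algorithm": "RSA", "key_bits": 2048, "x509_version": 3,
      "digest": "SHA256", "subject_alt_names": ["psc01.lab.local"]}
print(check_csr(ok))  # []
```

The hostname `psc01.lab.local` is a made-up example. A real pre-flight check would parse the actual CSR with a crypto library instead of a hand-built dict.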

Session videos, slides and scripts: http://vmware.com/go/inf4529


VMworld 2015: vCenter Server HA

Session INF4945

Why is vCenter HA important?

  • Primary administrative console
  • Critical component in end-to-end cloud provisioning
  • Foundation for VDI
  • Backup and DR solutions rely on vCenter
  • vCenter target availability is 99.99% from VMware’s design perspective (roughly 5 minutes of downtime a month)
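For context, the arithmetic behind that four-nines target is straightforward:

```python
# Quick arithmetic behind the 99.99% availability target mentioned above:
# four nines allows roughly 4.3 minutes of downtime in a 30-day month
# (the session rounded this to about 5 minutes a month).

availability = 0.9999
minutes_per_month = 30 * 24 * 60          # 43,200 minutes in a 30-day month
downtime = minutes_per_month * (1 - availability)
print(f"Allowed downtime: {downtime:.1f} minutes/month")   # ~4.3

minutes_per_year = 365 * 24 * 60          # 525,600 minutes in a year
print(f"Per year: {minutes_per_year * (1 - availability):.1f} minutes")  # ~52.6
```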


Make every layer of the vCenter stack HA

  • vCenter DB
  • Host
  • SAN
  • Network
  • DC power and cooling

Reduce dependencies to improve nines

  • In moving from 5.1 and 5.5 to 6.0 you see a consolidation of vCenter services into VMs (e.g. just PSC and vCenter in 6.0)
  • vCenter 5.5 U3 supports SQL AAGs
  • vCenter 6.0 U1 supports SQL AAGs

Hardware/Host Failure protection: vSphere HA

  • Tried and tested solution
  • Protects against hardware failures
  • Some downtime for failover
  • Easy to setup and manage
  • DRS rules can be leveraged
  • High restart priority for vCenter components

Hardware/host failure protection: vSphere FT

  • Continuous availability with zero downtime and no data loss
  • vCenter tested with FT for 4 vCPUs or less (only the ‘tiny’ and ‘small’ deployments fit)
  • About 20% overhead
  • Downtime during guest OS patching

Application failure protection: Watchdog

  • Watchdog monitors and protects vCenter applications
  • Automatically enabled on install on VCSA and Windows
  • On failure watchdog attempts to restart processes, if restart fails then VM is rebooted
  • Separate watchdog per vCenter server component

Application failure protection: Windows Server Failover clustering

  • Provides protection against OS level and application downtime
  • Provides protection for database
  • Some downtime during failure
  • Reduces downtime during OS patching
  • Tested with vCenter 5.5 and 6.0

Platform Services Controller HA

  • Two models: Embedded PSC or external PSC
  • PSC high availability in 6.0 requires a third party load balancer (removed in future vSphere versions)
  • Multiple PSC nodes in same site

vCenter Backup

  • Backup both embedded PSC and external PSC configurations
  • Recover from failures to vCenter node, PSC node or both
  • When vCenter node restored, it connects to PSC and reconciles the differences
  • When PSC node restored, it replicates from the other nodes
  • Uses VADP
  • Out-of-the box integration with VMware VDP

Tech Preview (vSphere 6.1?): Native HA

  • Native active-passive HA
  • Uses witness
  • No third party technology needed
  • Recover in minutes (target is 15 minutes), not hours
  • Protects against hardware, host and application failures
  • No shared storage required
  • 1-click automated HA setup
  • Fully integrated into the product
  • Out of box for the VCSA

VMworld 2015: vSphere 6 Certificates

Session INF4946


  • Why does VMware use PKI?
  • PKI – The good, bad and ugly
  • Choose your deployment to maximize operational security
  • Tech preview demonstration

Shows a slide of many recent companies that were hacked

Certificates are used in vSphere to maintain trust. Used for solution users, encryption and SAML tokens

Using PKI does not guarantee security. Security companies get hacked. Operational security can make PKI fail.

PKI: The Good, Bad and Ugly

The Good: Mature, robust, 30 years old, open, tried and trusted, can be automated and auditable

The Bad: Complex to implement, difficult to manage without automation

The Ugly: Not immune to vulnerabilities, CA compromise shatters PKI

Simplify: The vSphere Platform Services controller

  • VMware CA in PSC generates certs, generates CRLs, manages certificate lifecycle
  • VMware endpoint certificate store – stores certificates and keys, syncs trusted certs, syncs CRLs
  • VMware Directory Service – Stores identity resources, multi-master replication, domain structure, licensing, tagging
  • STS and SSO – Integrated Windows auth, AD integration, SAML tokens

vSphere 6.0 vCenter Certificates – Simplified

  • Root CA – VMCA root CA
  • Solution users – 4 certificates for 13+ services
  • STS signing cert
  • VMDir certificate
  • ESXi certificate

vSphere 6.0 ESXi Certificates

  • ESXi auto-generates certificates at installation
  • Certs are stored locally, not in VECS
  • VMCA mode
  • Custom mode – with custom certs
  • Thumbprint mode – not recommended

VMCA Root CA and Machine SSL Certificates

  • Root CA – Validity 10 years, 2048 bit
  • Machine SSL –
  • Solution user – 10 years
  • ESXi cert – 5 years

Deployment Scenarios

  • VMCA as Root CA – easy and for most customers
  • VMCA as intermediate CA – can introduce some risk, but also easy.
  • Hybrid – very common. User facing certs are trusted, VMCA for solution users
  • No VMCA – Highly secure only (finance), very manual.

Certificate Management Tools

  • Certool – Command line interface
  • Certificate management utility – for Windows and Linux
  • Tech preview for 6.0 U1: PSC UI – ability to upload and renew certificates in the GUI
  • Tech preview: Platform Services SDK – client libraries for remote execution

PKI: Deep Dive Walkthrough – Revocation

VMware services do not do revocation checking. You can, however, delete certs in the VMCA, or even the entire VMCA itself.

Tech Preview – vSphere Certificates and load balancers.

  • In 2016 vSphere will remove the load balancer requirement for HA PSCs.
  • Two PSCs per site are recommended

Tech Preview for Lifecycle management

  • PowerCLI for ESX host certificate replacement
  • Platform Services SDK – C, Java, Python

Project Lightwave

  • Open source VMCA, VECS, VMDirectory
  • On GitHub


VMworld: What’s new in vSphere 6.0?

Session INF5060

VMware’s architecture for IT: Any device, any application, one cloud

EVO SDDC is about deploying a new datacenter in less than two hours

Compute strategic imperatives: Cloud native infrastructure, hybrid cloud, virtualization leadership

vSphere 6.0 – Largest vSphere release ever

  • Shipped March 2015
  • 2x to 4x scale increase across the platform
  • Enhanced 2D/3D support with NVIDIA Grid
  • Rapid provisioning with 10x faster instance clone
  • Content library
  • More responsive web client
  • 64 hosts in a cluster
  • Long distance vMotion and cross-vCenter vMotion, SMP-FT
  • VMware integrated OpenStack
  • Extended containers support – CoreOS

Key stats:

  • 30% of customers are running 6.0
  • 100K downloads since GA

vSphere 6.0 U1

  • vCSA – easier to install and upgrade
  • Web client – VUM support
  • Faster maintenance mode – 4x to 7x improvement
  • Certificate authority – CLI to UI
  • Live refresh in web client
  • VCSA performance increased by 20%
  • vSphere APIs for IO Filtering – 3rd party plug-ins

How does VMware enable containers?

  • Photon OS
  • Instant Clone
  • APIs for orchestration

VMware photon platform – future

  • Photon machine
  • Photon OS
  • Support for 100K containers or more
  • Available in 2016

Unified hybrid cloud allows best of both worlds

Cross-cloud vMotion and content sync with vCloud Air


VMworld 2015: Nutanix Customer Panel

Session SDDC6827

Note: This was a panel discussion, so I tried to transcribe as much as I could. Please forgive any typos or bad English; I’m publishing immediately after the session ended.

Josh Odgers – Nutanix
Sachin Chheda – Nutanix
Bob – Hallmark business connections (Hallmark cards)
George – Perth Radiology
Matt – Lang’s building supplies

What is holding virtualization back?

  • 3-tier model is complex to manage
  • Costly to scale
  • Performance bottleneck

Making complex simple – Making Simple Invisible – like Uber
The best infrastructure is that which is invisible
Our mission: Deliver invisible infrastructure.
1750+ customers, 70 countries, 6 continents, 1000+ employees
IDC declares Nutanix has 52% of the hyperconverged market

Nutanix Extreme Computing Platform – Eliminates separate SAN and NAS arrays.

Sachin: So Bob, can you give us some background how you came to Nutanix?
Bob: Been in IT for 30 years and looking for cheaper, faster, better. I’m associated with leading edge IT. Introduced a few years ago to Hyperconverged. Did a Nutanix pilot for SQL, and it went extremely well.

George: I work for an imaging provider in Perth, across multiple sites. About a year ago we were looking for a new solution, our 3 tier solution was getting old and tired. Looked at hyperconverged and it looked simpler and easier to use. Our first workloads moved to Nutanix was SQL and Exchange.

Matt – We are a manufacturer and were introduced to hyperconverged after we missed a prior IT refresh cycle. So we had to do a large refresh, including desktops and infrastructure. We looked at 3-tier, but hyperconverged ticked all the boxes. Hyperconverged is a rock-solid platform, easy to scale, and we can grow our business as we require.

Sachin: What were some of the things you saw in hyperconverged?

Bob: The value prop of hyperconverged is doing more with less staff, and is very streamlined. Cost is obviously a huge issue, but more importantly the OPEX. Reduced complexity and reduced cost.

Sachin: Could you walk us through some goals?

Bob: We were having performance issues with our analytics on our 3-tier stack. We brought in Microsoft BI specialists, and they recommended a highly expensive stack to make the solution work. So we had to look outside the box, and arrived at hyperconverged and Nutanix.

George: A good friend works for another company, and his better half worked for a storage company. I got honest feedback about storage technologies. We had fiber channel technology that we replaced. We needed something higher density and easier to look after. That’s why we went down the Nutanix path.

Matt: For me efficiency is critical. It’s less about IT and more about OT (operational technology). Nutanix lets me spend more time on my business and less on IT. Every hour I spend babysitting infrastructure is an hour I don’t spend on the business. Management wants to see IT as a profit center, not as a cost center. Nutanix has always allowed me to say “yes”. We can deploy apps very quickly, like a dev environment, and we can expand a cluster in 15 minutes. The hardest part about Nutanix is racking it, seriously. We have to be very keen on costs, and OPEX costs are top of mind. Other vendors could compete on acquisition cost, but they can’t compete on simplicity. We also use it for VDI, and have the NVIDIA GRID cards in the NX-7000 series. We have users all over the world using the solution.

Sachin: So an Australian company with global workforce. Josh, let’s get more on the technical details.

Josh: We’ve spent a lot of time talking about business challenges. Business people don’t care about VCDX or SSD drives. This customer had to scale their storage and they could use the Nutanix storage only nodes.

George: We were talking to application owners and they were stating we needed SAN or NAS. But when you are virtualized, it doesn’t matter. We’ve had massive success with Nutanix.

Sachin: You started off with SQL server and BI, but have since expanded your environment.

Bob: We started with the BI stack as a way to get Nutanix in; I wanted to get Nutanix in the door to show the value of the solution. Then our DevOps teams came to me: our C# build times were creeping up to 2 hours, and we needed a faster solution. So we moved the workload to the Nutanix cluster. Build times dropped to 22 minutes, and that’s real time developers can dedicate to coding instead of waiting on builds to complete. We put that in front of senior leadership and told them it improved productivity. It enabled me to create a 3-year vision for replacing our entire infrastructure with Nutanix. It will take 3 years to get there, but we will have active/active sites. It allows for better performance and gives us opportunities to be agile. We understand the concept of DevOps and want to remove obstacles through technology. This gives sales more time and DevOps time to do their job.

Sachin: When you went to this new process, what were some of the new features you wanted. DR, cloning, etc.

Bob: We love VMware and what they let us do, but we were looking for a new level of flexibility. We wanted DR with near-realtime data replication to reduce RPOs to near zero. In addition, we were really excited about hypervisor choice. We are excited about the possibility of using multiple hypervisors, perhaps even including the Nutanix Acropolis hypervisor. Choice is imperative. If you are an engineer and specialize only in storage, your career is limited. Get an engineer to think about thought leadership.

Sachin: So Josh, talk about how to access Nutanix. We have PRISM and our APIs.

Josh: I’ve spoken with a lot of service providers and they like PRISM. But they want to automate. Everything you can do in PRISM, you can access via the APIs. This was key for developing our platform.

Sachin: So George, I have a question from twitter. So you have the concept of mixing and matching workloads in your environment. What were your criteria for doing that?

George: Our image storage app needs a lot of IOPS, including archival disk requirements. We have both compute and storage nodes in the same cluster. It’s simple to consume. We didn’t have to hire a storage sysadmin. The team can easily consume the cluster resources, which allows rapid development and deployment.

Sachin: Tell us about ongoing management.

Matt: Last year I was flying to VMworld, saw the plane had wifi, and noticed a new NOS update was available. So I did a cluster update from the air. I’ve never done that with infrastructure before. Maintenance windows are unacceptable, and the NOS upgrade didn’t require any downtime. Nutanix and the resilience factor are great. We want the business to always be on, and don’t want the business to see any performance degradation. Nutanix fails in a graceful manner, and it self-heals. We script everything, and we are at the point where we don’t even provision new users manually. We are very agile and fast. Our ERP is on Nutanix as well. We started with VDI, but that went so well we expanded the workloads. We were able to migrate new workloads in minutes, without months of planning. We got into IT because we like new toys.

Sachin: So Matt, the common thing we come across is the business angle. This is different; is this what infrastructure should do?

Matt: Absolutely. We want to worry about the business and not the infrastructure.

Bob: IT should not be a cost center, but should be cost neutral. We are looking at scaling out Nutanix as a service, and looking to move other Hallmark subsidiaries to Nutanix. We own Crayola, which is a large business. We started with the SQL BI stack, and that was very successful. Our biggest challenge was the sheer volume of apps that we needed to move to Nutanix. What we have now is all of the BI and client web app tier on Nutanix. We run a large content management database based on Mongo on Nutanix. We also have Oracle 10g running on Nutanix.

Sachin: Josh, you are very active in the community and do a lot of VCDX mentoring. What advice would you give to those looking at 3-tier?

Josh: The goal of the VCDX/NPX is all about meeting business requirements. It’s not about technical decisions. It’s not about cool toys. It’s about mapping business requirements to technical requirements. Keep moving forward and don’t get stale. Don’t bet your job on a single role. We aim to make IT staff more valuable by expanding their skillset.

George: We are looking for our IT staff to be more business oriented. We don’t want our staff just watching blinking lights. We look at what processes are in place, and try to fit IT solutions into that framework. We developed a timesheet solution in 6 months, without worrying about the infrastructure.

Audience question: Can you tell us about support and how good it is? Our POC SEs are spot-on.

Matt: Nutanix support is even better than the product. We had a small problem, and called them up. They blew my mind. There aren’t any level 1 or level 2 guys, they are level 3 guys. The guys you get can solve your problems.

Josh: We had a customer that had four hardware failures in a short period. No downtime or data loss. We shipped them new hardware of course. I did the design for them, so I flew to their site at no cost just to put their mind at ease.

Audience question: Can you talk about your old storage arrays? We have a bunch of storage admins who are nervous about changing platforms.

Bob: We used Compellent. We would have had to upgrade controllers and disks; it would have cost us 500K, and it was a forklift upgrade. The hyperconverged upgrade only cost 200K. We like the linear scalability. We could use any storage vendor, but we don’t like the forklift upgrades and all that cost.

Matt: You can achieve the same outcome using traditional storage, but not in the same time, and it will be more complex. We don’t like forklift upgrades; with Nutanix you eliminate them.

Josh: With Nutanix all writes always hit SSD, so your writes are always consistent. Yeah, these guys ran SQL on Nutanix first, which is somewhat unheard of.

VMworld 2015: DRS Advancements in vSphere 6.0

Session INF5306

DRS is the #1 scheduler in the datacenter today

92% of clusters have DRS enabled. 79% are in fully automated mode. 87% have affinity and anti-affinity rules.

43% of clusters have resource pools enabled and use them

99.8% of cluster use maintenance mode

Bottom line: DRS is popular

DRS collects innumerable stats every 20 seconds for its calculations

  • CPU Reserved
  • Memory reserved
  • CPU active, run and peak
  • memory overhead, growth-rate
  • Active, consumed and idle memory
  • Shared memory pages, balloon, swapped, etc.
  • VM happiness is the most important metric (if demands/entitlements are always met, then the VM is ‘happy’)

Constraints for initial placement and load balancing

  • Constraints are a big part of decision making
  • HA admission control policies
  • Affinity and anti-affinity rules
  • # concurrent vMotions
  • Time to complete vMotion
  • Datastore connectivity
  • vCPU to pCPU ratio
  • Reservations, limits and share settings
  • Agent VMs
  • Special VMs (SMP-FT, vFlash, etc.)

Cost Benefit and minGoodness

  • Cost-benefit analysis – VM happiness is evaluated against the cost of a migration
  • Cost considerations: each vMotion consumes roughly 30% of a CPU core on a 1Gb network and 100% of a core on 10Gb, plus the memory consumed by the ‘shadow VM’ at the destination host
  • Benefit considerations: positive performance benefit to VMs at the source host; the overall workload distribution has to be much better
  • Each analysis results in a rating from -2 to +2
  • MinGoodness (migration threshold slider) is -2 to +2. User can set this.
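To make the cost/benefit and minGoodness interplay concrete, here is a deliberately simplified sketch. The rating function is invented for illustration; the real DRS algorithm is far more sophisticated:

```python
# Simplified sketch of DRS-style cost/benefit filtering: each candidate
# migration gets a rating from -2 to +2, and only moves rated at or above
# the minGoodness threshold are recommended. The rating math here is
# invented; only the -2..+2 range and threshold idea come from the session.

def rate_migration(benefit: float, cost: float) -> int:
    """Map a net benefit score into the -2..+2 rating range."""
    net = benefit - cost
    return max(-2, min(2, round(net)))

def recommend(candidates, min_goodness: int):
    """Keep only migrations whose rating meets the minGoodness threshold."""
    return [vm for vm, benefit, cost in candidates
            if rate_migration(benefit, cost) >= min_goodness]

# (vm, benefit, cost) tuples with made-up scores
moves = [("vm-a", 3.0, 0.5), ("vm-b", 1.0, 0.8), ("vm-c", 0.2, 1.9)]
print(recommend(moves, min_goodness=1))   # conservative: ['vm-a']
print(recommend(moves, min_goodness=0))   # ['vm-a', 'vm-b']
```

Sliding min_goodness down makes DRS more aggressive (more moves pass the filter), which mirrors what the migration threshold slider does in the UI.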


  • VM happiness is the #1 influence
  • Influenced by real time stats, constraints and cost/benefit analysis
  • A small imbalance should not be a concern
  • Default setting of DRS aggressiveness is best

New Features in vSphere 6.0

  • Network-aware DRS – ability to specify bandwidth reservation for important VMs
  • Initial placement based on VM bandwidth reservation
  • Automatic remediation in response to reservation violations due to pNIC saturation, pNIC failure
  • Tight integration with the vMotion team and will do a unified recommendation for cross-vCenter vMotion
  • Runs a combined DRS and SDRS algorithm to generate a tuple (host, DS)
  • CPU, memory, and network reservations are considered as part of admission control
  • All the constraints are respected as part of the placement
  • VM-to-VM affinity and anti-affinity rules are carried over during cross-cluster and cross-vCenter migration
  • Initial placement enforces the affinity and anti-affinity constraints
  • Improved overhead computation – greatly improves the consolidation during power-on

Cluster Scale and Performance Improvements

  • Increased cluster capacity to 64 hosts and 8K VMs
  • DRS and HA extensively tested at maximum scale for VCSA and Windows
  • Up to 66% performance increase in vCenter (power on, DRS calcs, etc.)
  • VM power-on latency has reduced by 25%
  • vMotion operation is 60% faster
  • Faster host maintenance mode

Extensive Algorithm Usage

  • DRS is the linchpin of the SDDC vision
  • vSphere HA
  • VUM
  • vCloud Director
  • vCloud Air
  • Fault Tolerance
  • ESX Agent Manager

Best Practices

  • Tip #1: Full storage connectivity
  • Tip #2: Power management settings – Set BIOS to OS control and vSphere to balanced.
  • Tip #3: Threshold setting – Default of 3 works great.
  • Tip #4: Automation level – Fully automated is best choice
  • Tip #5: Beware of resource pool priority inversion. Make sure that cramming more VMs won’t dilute the shares.
  • Tip #6: Avoid setting CPU-affinity

Future Directions

Proactive HA

  • Proactive evacuation of VMs based on hardware health metrics
  • Partnering with hardware vendors to integrate and certify
  • Moderately degraded and severely degraded modes
  • VI admin can configure the DRS action for each health state event
  • Host maintenance mode and host quarantine mode
  • VI admin can filter events

Network DRS v2

  • Take pNIC saturation into account
  • Tighter integration with NSX
  • Ensure mice and elephant flows don’t share the same network path
  • Network layout topology – leverage topology for availability and performance optimizations

Proactive DRS

  • Tighter integration with VRops analytics engine
  • Periodic and seasonality demands incorporated into decision making

What-if Analysis

  • A sandbox tab in UI to run ‘what if’ analysis
  • VM availability assessment by simulating host failures
  • Cluster over commitment during maintenance window

Auto-scale of VMs

  • Horizontal and vertical scaling to maintain end-to-end SLA guarantees
  • Spin-up and spin-down VMs based on workload
  • Will first be offered as a service in vCloud air
  • Increase CPU and memory resources to meet performance goals
  • CPU/memory hot add is an additional option for DB tier

Hybrid DRS

  • Make vCloud-air a seamless extension of enterprise datacenter capacity through policy based scheduling



VMworld 2015: vSphere 6.1 Upgrade & Deployment Pt. 1

Session INF4944

Goal: Deliver an enhanced customer experience for deploying and upgrading vCenter environments.

vCenter server 6.0 platforms: Windows and VCSA support the same scale and performance

Enhanced Linked mode is brand new in 6.0 and supported on Windows or VCSA. Policies and tags are now supported in Linked Mode.

Deployment Models

  • PSC is no longer just SSO, but adds certificates and licensing
  • PSC supports data replication
  • Embedded deployment: PSC and vCenter running on single VM
  • External PSC: vCenter and PSCs on separate VMs
  • vCSA is the recommended deployment package

vCenter Server Install

  • Both Windows and VCSA have similar simplified installs.
  • Supports GUI or scripted installs
  • Simple

vCenter Best Practices

  • Sizing
  • Windows OS and DB compatibility
  • Use FQDN
  • vCSA install target will support vCenter and ESXi in 6.0 U1
  • Time sync is important
  • DNS forward and reverse lookups
  • If using VDS use ephemeral port group
  • Ensure routing works

vCenter Server Upgrade

  • Multi-stage process: SSO/PSC, vCenter, ESXi, VMs, VMFS/VDS
  • Order is important KB2109760
  • Don’t forget about plug-ins, add-ons, VMFS, VDS, etc.
  • Approach upgrades with a holistic view of your infrastructure
  • vCSA upgrade is migration-based and requires a temporary IP
  • Windows vCenter upgrade is in-place

Upgrade Paths

  • Windows vCenter – Upgrades from 5.0 onward are supported; 4.x environments must first be upgraded to 5.x.
  • vCSA – Upgrade from 5.1 or later only

Upgrade best Practices

  • Sizing – 6.0 is larger.
  • Windows OS and DB compatibility
  • VCSA Oracle DB deprecation (use embedded DB)
  • Backup DB and VM prior to upgrade
  • Stick to recommended topologies
  • Time sync is very important
  • DB password issues: don’t use dash, question mark, underscore, left paren, equal, exclamation

Repointing from embedded deployment to external PSC – In 6.0 U1

  • First upgrade to 6.0 U1
  • Then deploy external PSC and replicate with embedded PSC
  • Repoint VC to the external PSC

vCSA Management UI (U1)

  • https://&lt;VCSA IP&gt;:5480

PSC Management UI (U1)

  • https://&lt;PSC IP&gt;/psc


VMworld 2015: Future of SDS in 3 years

Session CTO6453

  • Speaker goes over various workloads that vary widely in terms of their requirements (big data, high IOPS, etc.)
  • What do Linux containers need from storage? The ability to copy and clone root images, isolated namespaces between containers, and QoS controls between containers.
  • Containers and fast clone using shared read-only images
  • Docker’s direction: leverage native copy-on-write filesystem
  • Shared data: containers can share filesystem within host
  • Container storage use cases: unshared volumes, shared volumes, persistent to external storage

Storage Platforms

  • Latencies: object storage – 1s, magnetic storage – 10ms, capacity SSD – 1ms, NVMe SSD – 10us, NVM – 500ns, DRAM 100ns
  • 3D flash is becoming so cheap that 15K drives are fading from the market
  • 3D NAND flash is the ‘capacity flash’
  • NVDIMMs are coming with various types
  • Intel 3D XPoint technology – non-volatile, 1000x higher write endurance than NAND, 1000x faster than NAND, 10x denser than DRAM
  • Flash is separated into two categories: IOPS and capacity
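Restating the latency ladder above as ratios makes the gaps obvious (figures as quoted in the session; real hardware varies):

```python
# The storage latency tiers quoted in the session, normalized against DRAM.
# All values in nanoseconds; these are rough order-of-magnitude figures.

latency_ns = {
    "object storage": 1_000_000_000,   # ~1 s
    "magnetic disk":  10_000_000,      # ~10 ms
    "capacity SSD":   1_000_000,       # ~1 ms
    "NVMe SSD":       10_000,          # ~10 us
    "NVM":            500,             # ~500 ns
    "DRAM":           100,             # ~100 ns
}

for tier, ns in latency_ns.items():
    print(f"{tier:>15}: {ns / latency_ns['DRAM']:>12,.0f}x DRAM latency")
```

Magnetic disk sits five orders of magnitude above DRAM, which is why the session treats NVM and NVDIMMs as such a disruptive middle tier.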

Speaker covers various topologies such as scale-out SDS, hyperconverged SDS (Nutanix), etc.

Storage fabrics: iSCSI (slow), FCoE, NVMe over Ethernet, and PCIe rack-scale compute

SDS hardware futures – Magnetic storage is being pushed out