VMworld 2014 Wrap up and Thoughts

So VMworld 2014 was my sixth VMworld, if I’m counting correctly, and was in many respects the best one to date. It had two firsts for me: my first VMworld as a VCDX, and my first time attending as a VMware partner/vendor employee. VMworld 2013 was big for me, as encouraging comments by people that I greatly admire, plus some sessions presented by VCDXs, gave me the fire to shoot for the moon and ultimately succeed. This year built on that, as I had great conversations with peers, customers, book authors, strangers, blog readers, and industry greats that I truly think the world of. I also got stopped more times than I can count by strangers who had kind words about my blogging efforts. It was also gratifying to hear that in at least two VMworld sessions I got mentioned for my vSphere SSL work and my Toolkit script.

This year, as with past years, I was live blogging nearly all of the sessions that I attended. I have mixed feelings about live blogging sessions, as it’s very hard to capture all of the content and get posts up right after the sessions. When I blog about a topic, I like to add some type of value to the content, be it links, commentary, additional tech info, etc. But based on Twitter feedback, the ‘bare bones’ session notes were helpful to those not attending VMworld. Hopefully this post will have a little value add. If you did attend VMworld, then the session recordings will be available in a few weeks for your listening pleasure.


This year VMware let out a slow trickle of announcements, and dropped some hints about new features in vSphere 6.0. The biggest announcement during the Day 1 keynote was EVO:RAIL and a tech preview of EVO:RACK. The EVO series, which stands for Evolution, is VMware’s recipe for a hyperconverged platform, which several OEMs will be shipping to customers later this year. For a great write-up on EVO:RAIL check out my good friend Julian Wood’s blog, here. Duncan also has a great overview here. I can honestly say that I’m glad VMware has gotten into this space, the same space as Nutanix, as competition in the industry leads to new innovations. It also validates what Nutanix has been saying for years: the old way of doing IT infrastructure needs to be turned on its head and become web-scale. Fast deployment times, ease of use, and high rack density are hallmarks of the hyperconverged platforms. Nutanix has been shipping since 2011, and as Gartner points out, we are leading the hyperconverged industry vision. For a good overview comparison of Nutanix to EVO:RAIL check out Dwayne’s post here.

Rebranding Abounds

vRealize is the new branding of the vCloud suite, and you can find a VMware blog post about the announcement here. This suite lets you manage your private and hybrid clouds with tighter integration than ever before. Capabilities include business analytics, intelligent operations, automation with control, performance and SLA management, hybrid cloud management, and third-party extensibility. Said another way, this suite includes vCloud Automation Center, vCenter Log Insight, the vCenter Operations Management suite, and the IT Business Management suite, Standard edition. The suite comes in two editions: Advanced and Enterprise.

Also rebranded is vCHS, the vCloud Hybrid Service, which is now known as vCloud Air. VMware is also enhancing the “Air” portfolio by adding vRealize Automation as a service in the cloud. Anyone who has installed vCAC will be thankful for a pre-installed cloud version of the product, given its complexity.

vSphere 6.0 Sneak Peek

For a few months now VMware has had a public (but still under NDA) beta of vSphere 6.0. No official release date has been announced, but word on the street at VMworld is sometime in 2015. So I’ll just summarize here the features that were disclosed at VMworld and by other bloggers this week. Don’t want to get into any NDA trouble!

  • Long distance vMotion – Increased from 10ms RTT, vSphere 6.0 will now support 100ms RTT. Likely just in the Enterprise Plus SKU. Transcontinental vMotion is now possible.
  • vMotion across switch types – Now you can vMotion between the standard switch, VDS, and any combination thereof. Metadata is copied during the migration process.
  • vMotion across vCenters – With a single click you can now migrate a running VM from one vCenter instance (and network switch) to another vCenter instance. Great for migrations or disaster avoidance combined with long distance vMotion.
  • Multi-vCPU Fault Tolerance – Will now support 4 vCPUs and up to 64GB of RAM
  • vVols – New storage abstraction paradigm and policy based storage management. See session videos below for a deep-dive video.
  • Virtual Datacenter – A new abstraction layer that lets you join multiple vSphere clusters together so you can use the same policies across clusters
  • Improved Web Client – Performance and usability improvements
  • Content Library – Centralized repository for VM templates, scripts, and ISOs
  • vCenter Platform Services Controller –  New vCenter core service that handles SSO, licensing, and SSL services
  • New maximum of 6TB of RAM per ESXi host

I was also told that vSphere 6.0 will be more friendly towards trusted SSL certificates and may even have (sit down) certificate wizards built in for vCenter and ESXi hosts. If they do this right, it may obsolete my vSphere Toolkit, which would be great news for customers. VMware said they will get me early bits of the wizard to provide them feedback on. I hope they are doing certificates “right” this time and look forward to seeing what they are working on.

One piece of disappointing news on the vCenter front is the continued lack of support for a highly available SQL configuration. The primary reason given for this lack of support is the resources required to do all of the internal testing to qualify the solution. VMware suggests you contact your account manager and present a business case for why you need high availability SQL support. The more customer pressure, the better the chance it will get supported.

End-User Computing

This was one of the most exciting parts of the day 2 keynote to me. Ben Fathi and Kit Colbert did a great job of announcing and showing off sneak peeks of great new products. This was reinforced by a session later in the week where VMware went into a good amount of technical detail. Under Kit’s leadership this past year VMware has made great strides in end-user computing (EUC). They acquired AirWatch, a mobile device management platform. And just last week they acquired CloudVolumes, a startup company which can provision hundreds of apps in real time to clients. They also talked about Project Fargo, which replaces View Composer as we know it and enables provisioning of VMs in less than 5 seconds. Combining Project Fargo with CloudVolumes lets you provision customized VDI VMs on demand. This will reshape how we think about VDI. Citrix, are you listening? VMware is taking EUC seriously, which is great for the industry as a whole. It will be interesting to see how Citrix responds over the next year.

If you want to check out my review of CloudVolumes from 2013 (which is wicked cool), go here. It *really is* that cool and easy to use. Other VDI layering companies should take notice and gear up for a battle. VMware swooped in and added an EUC crown jewel to its portfolio.

One area that I think Citrix is still ahead in is granular policy based control of the user experience and security settings. For example, you can define rules based on dozens of factors including subnet and access method to tailor the user experience. One such scenario that comes to mind is prohibiting USB devices when connecting remotely but enabling them when used on the LAN. Or easily configuring different LAN and WAN based policies which control the amount of bandwidth and session responsiveness. If VMware can build in a similar policy engine to View, then customers will have two serious EUC solutions on the market to choose from.

VMware NSX 6.1 for vSphere

  • NSX integrates with vSphere 5.5
  • Allows integration with external physical DHCP servers
  • Multiple DHCP servers can be configured
  • Two stage ECMP support
  • L2 VPN (including VLAN trunking) from two different NSX edges between two different (stretched) datacenters
  • Load balancing improvements: UDP & FTP
  • Seamless integration with F5 load balancers
  • Enhancements to the NSX distributed firewall, including a reject action and improved troubleshooting and monitoring


Keynote Videos

Keynote Video Day 1
Keynote Video Day 2

Top 10 Session Recordings

Looks like VMware has uploaded nearly 30 sessions in their “top 10” list. For the full list of sessions and links check this out.

Other Session Recordings

Derek’s Live Blogging Session Notes

VMworld 2014 vExpert Daily Videos


VMUnderGround Opening Acts

These are a great set of videos produced by the vBrownBag team. The panels consist of industry greats like Chris Wahl, Scott Lowe, Michael Webster, Matt Cowger, and many others. Hear what the experts have to say about a variety of topics. You can check out additional content on the vBrownBag YouTube channel here.

Cloud Panel

Networking and SDN

Storage and SDS

Architecture and Infrastructure


Social Media


This year VMware heavily pushed the Software Defined Datacenter (SDDC) concept. They are rapidly building up a strong portfolio of products reaching way beyond their core virtualization offerings. Between EVO:RAIL, NSX enhancements, major EUC developments, the vRealize suite, vCloud Air, and vSphere platform enhancements, there’s no question VMware is a force to be reckoned with. More competition in areas such as hyperconvergence and EUC will help push the industry forward as a whole, and customers will all be rewarded with more innovative products.

As was mentioned in several of the VCDX panels, do NOT run out and implement these new shiny widgets just because they are cool. Resist the non-technical managers that dictate you use a new product without proper business requirement justification. Yes, NSX, the vRealize Suite, CloudVolumes, and SDS solutions are “cool”, but what problem are you trying to solve? Yes, hybrid clouds are “in”, but what problem are you really trying to solve? Take a methodical approach to evaluating these new technologies against your real business drivers, to end up with a solution that benefits your end customers.

Thanks again to everyone that came up to me during the event and said HI. Don’t be afraid to approach your favorite community contributors at conferences or via blog feedback. Every comment really does make a difference. I really appreciate the positive feedback, and will endeavor to blog more this year. I know the last year has been a little light due to a variety of factors. Possibly up for multi-part series are SQL 2014 and Horizon View 6. I will of course do a long vSphere 6.0 install/upgrade series next year whenever the product GAs. It was great seeing everyone and I look forward to VMworld 2015.

VMworld 2014: SDDC VCDX Panel

Session: INF1736

Jon Kohler (Nutanix), Josh Odgers (Nutanix), Matt Cowger (EMC), Scott Lowe (VMware), Jason Nash (Varrow)

This was a very lively session with a panel of five VCDXs from a variety of companies. I was taking real-time notes, so I didn’t capture all of the comments, and some of the grammar and wording may be a bit awkward. If you attended VMworld, then you can listen to the session and get all of the comments and friendly banter among the panelists.

Q: If you are converging multiple datacenters (and multiple vCenters) with NSX in the future, how would you design your datacenter today? What do I do to avoid problems?

A: Scott – NSX manager has a one-to-one relationship with vCenter today. VMware is actively engaged in fixing this problem. The plan for converging multiple NSX domains into one hasn’t been finalized yet, so I can’t answer that. It would be ideal not to have overlapping VDIs.

Q: It used to be with DVS that you couldn’t migrate between vCenters. What is the story with NSX?

A: Scott – If I have a set of logical constructs how do you take that grouping and pick it up and put it into another domain? The answer is that you don’t right now. Not a product feature. There is no solution today. Too early to tell what the real solution will be. Stay tuned for future NSX enhancements.

Q: What are the panel’s thoughts on the datacenter in 5 years? What is the next challenge?

A: Jon – There are always customers that can’t overcome today’s challenges. Maybe extensibility? Federation?

Matt – I’m not confident that’s the right question to ask. I would hope in 5 years we aren’t talking about hypervisors or storage platforms. We should be talking about how to deploy applications. “I’m over the infrastructure”. I don’t care about OpenStack. I don’t want a VM from OpenStack, I want a VM that is used for my application.

Josh – We focus a lot on infrastructure. We should look towards the application layer. Storage, networking all enable apps. Infrastructure solves challenges that shouldn’t be there. Maybe we won’t need SRM or stretched clusters, with smart apps. The further we get away from infrastructure, the less constrained we will be.

Jason – What we are seeing is a big shift towards software as a service. We have a challenge coming ahead to simplify our ways. Shrinking the datacenters that my customers have today. We have fights ahead about software migrations (e.g. EPIC to something else if hosted in the cloud). It’s about where you host applications. Can you get your data out of the cloud?

Scott – I think we are going to see increasingly wide adoption of cloud services and hosted applications. The ability to migrate data between providers is a big problem. While the tools we use to provide the infrastructure will fade away, the reality is that someone somewhere will have to manage it. If you own some level of infrastructure, we will need tools that will do mapping and identification across the layers of application abstraction. Those points will be relevant regardless of the underlying infrastructure. We will have a large distribution of micro applications, and understanding the dependencies is a huge piece that the industry has not yet come to terms with.

Q: What’s the impact you are seeing about containers? A year ago people didn’t know about containers. People are talking a lot about containers today.

A: Scott – Great question. Right now we still have challenges managing a VM. It is a collection of services (e.g. SSH, web server, DB, etc.). In the Docker world that would be three different containers. Now you have hundreds of VMs; with containers you will mushroom to thousands. We have no tools today to manage them at this scale. Until you can do service discovery, you can’t wire the app into the rest of the business. How do you tell the containers where to go and who to talk to? If you do DNS, then that uses a lot of IP addresses. The challenge is that the vast majority of companies won’t be using containers; only large web-scale companies like Twitter will be using them. Today the maturity is just not there.

Jason – Customers are thinking about containers, but aren’t changing their app model today.

Jon – People dive in head first to containers, but they still haven’t gotten down pat managing VMs.

Josh – Don’t use technology just for the sake of technology. If what you are using today is working, then don’t change what you are doing.

Matt – Docker is not a container. Docker is an orchestration method for various kinds of containers. The reason why Docker matters is that they figured out how to solve problems like service binding, etc. Docker fixed a lot of that. I want to make that distinction. Containers are only now relevant because the tools to manage them at scale are now relevant.

Q: You shouldn’t jump into something new just because it sounds good. I have several IT managers that do just that. We get overruled every day. How can I prevent that?

A: Scott – We can all agree layer 8 has a lot of problems that need to be addressed. We as architects need to make sure business requirements map to technology. IT exists to serve the business. Are we decreasing time to market, increasing revenue, etc.? If we aren’t doing that, then why are we here?

Matt – Names a product that is stupidly cool but super expensive (Xsigo). Matt then tries to quantify the amount of time saved, and how much money it would save. They then bought the tool (which was later bought by Oracle). As a VCDX you need to match business requirements to technology, not the other way around.

Jason – I’m doing a lot of roadshows for NSX and all flash arrays because they are cool new widgets. But you find way higher attach success by defining requirements and doing ROI analysis.

Q: As you look at the IT landscape, what about the 20% of people running Solaris, HP-UX, and AS/400? Is this going to be a hurdle, and what’s the way forward?

A: Josh – This is the same process as virtualization 10 years ago, when tier-1 apps would not be virtualized. VMware can do more than 80% of the task. Today it’s more a political challenge than a technical one. Michael Webster gets on stage – Most issues are not technical in nature. You can virtualize VAX and Alpha today.

Scott – It’s all about a business requirement. If these new technologies don’t apply to your technology, then it’s not worth trying to fit a round peg in a square hole.

Matt – SDDC is not all about vSphere. You can implement SDDC without using vSphere.

Jon – If your biz requirement is people are 150 years old and you are using LPARS….ok that’s not funny.

Jason – Or do it for the 80% that VMware can do, and leave those other technologies alone.

Michael Webster – Many Unix platforms can be easily migrated to vSphere, even DB2 running from a mainframe.

Q: I lead a performance and management team. I’m afraid people will be pointing finger at me. What do you think is an approach that might work? App discovery, performance baselines, etc.

A: Matt – Your job should be identifying performance issues and pushing that down to the app owners. You should make sure the environment is up and meets SLAs. Give the data to the app owners to manage.

Scott – I agree. Manage the expectations via SLAs. Did we violate the SLAs? The app owner can then drill down into the problems.

Matt – Manage SLAs around latency, bandwidth, CPU utilization, etc.

Josh – The goal is to find the problem.

Scott – I agree. The app folks will say TPS are running low, and they are asking you why. You do need to write the SLAs over what you have control of, with a clear boundary. You need application metrics. Mutually agree on these SLAs.

Matt – Baselining is hugely important.

Jon – Baseline is super important. Get it in writing.

Josh – Manage the expectation so they don’t try and railroad you.

Q: I work in Federal, and we don’t yet have a public cloud that is approved. What can I do today so I’m ready when a public cloud is approved in the future?

A: Jason – Why do you want a hybrid cloud model? Will you be saving money, cloud bursting, etc?

Josh – There’s a perception that hybrid cloud is good. But the grass is not always greener on the other side. It’s about delivering a business requirement.

Matt – It’s not uncommon for people to say one thing because they think that’s what will get them what they want. But it could be because they want to go around IT.

Jason – Some people see shadow IT as an opportunity to improve. I get asked all the time what you need to do to move to a hybrid cloud platform. Often the answer is serving the customers you have now, better. This is better than just swiping your AWS credit card.

Jon – When collecting business requirements that look good, it can help to ask: do you really need it super fast? Maybe they are unhappy with your existing service catalog.

Matt – Make sure you run the numbers. One of them will be cheaper, but you need to find out which it is.

Jason – Choose your internal platforms carefully, so you can better move to a hybrid platform in the future.

Jon – Ask the customer what they expect from a hybrid cloud. ROI of build vs buy.

Q: There’s a lot of change in how we manage datacenters. What do you guys see as the changing role of the administrator?

A: Scott – We were talking about networking at Opening Acts on Sunday. One of the comments was that you have three tiers of people in IT. One tier racks and stacks. The middle tier is like middle management, where the sysadmins fall. The third tier is architects. The middle layer will get eliminated. To add value you have to look beyond managing the widgets in your silo. You will need to be aware of business costs, how to manage, etc., and that will keep you relevant much longer. Don’t focus on specific technologies.

Matt – If you are retiring in 5 years, do nothing; you are fine unless you want to. In 10 years you need to figure out things like hyperconverged, containers, and NSX. For the next 10–20 years, you need to learn to write code and embrace automation.

Scott – Not everyone will be a programmer. But you need to be able to use infrastructure-as-code tools.

Matt – I am not a networking guy. I can’t route myself out of a paper bag. But I can pull up Wireshark and know what’s happening, and I do know enough to poke and prod a little bit.

Jason – Try to get people out of the mindset of just delivering their own widgets. Projects that used to take a month now take two weeks, with solutions like Nutanix or other systems that are easy to deploy. Integration with other systems is important.

Jon – It’s not about whether you can read Wireshark. It’s about how you can apply technology to solve a problem. I can solve a business problem with ‘that’. Until people break out of the silos, they won’t understand what’s happening in the datacenter. You will need to look at the macro picture.

Josh – Break out of the silos.

VMworld 2014: Re-imaging VDI for Next-Gen Desktops

Session EUC2551

VDI 1.0 challenges: storage cost and performance; 3D graphics; application management; provisioning. It all adds up to a frustrating experience.

Broad Goals for Next Gen VDI

  • Efficient use of infrastructure – just in time desktops w/ zero copy architecture – Building at the point of demand
  • Improved flexibility – user-installed apps – Dynamic assembly of JIT desktops
  • Best user experience – Low latency I/O – advanced vSAN and RAM desktops
  • Simplified implementation – adaptive designs, easy deployment

Next-Gen Solutions

  • vSAN storage: simple, affordable
  • 3D Graphics: SVGA shared GPU and DGPU for power users
  • Application management: Layering for real-time desktops – fast & flexible
  • Provisioning: Rapid hot cloning of VMs – 10x improvements (Project Fargo)
  • Decision free VDI: Simpler path to success than ever before

New Desktop Opportunity

  • VMware is poised to disrupt assumptions about VDI
  • Great user experience
  • Easier to manage
  • Build VMs on demand
  • Layering with CloudVolumes


  • Reliable, scalable performance
  • Lowest cost in the industry
  • Planning is easy
  • Uptime
  • Autonomy

Next-Gen VDI: Leaner, greener and much faster

  • Protect only what matters
  • Consider file-sync service for user documents
  • Leveraging new flash technologies: UltraDIMM (flash on DIMM) for microsecond writes

VSAN Design Topology for VDI

  • Full PC in the datacenter – Inefficient design yet affordable with VSAN – Developers and some users
  • Non-persistent task worker – Composer based clones – replication optional. Limited flexibility
  • Non-persistent knowledge worker – Layering plus VSAN – Full user state preserved, no replication of OS VMDK

VSAN Design – Non-persistent designs – gives persistent experience with a non-persistent VDI

VSAN Futures

  • All flash VSAN designs – High grade enterprise flash for cache tier, but high capacity low cost TLC SSD for data tier
  • Cache tier: ~$4/GB; data tier flash in the 40¢/GB range
  • Key technologies: SanDisk UltraDIMM

Layering 101:

  • Types of layering: Offline composition (Mirage) vs. Real Time (CloudVolumes)
  • Real-time composition leverages vSphere VMDK mounts for near-instant application insertion
  • The OS doesn’t realize the CloudVolumes are even mounted. It is very transparent.

CloudVolumes Key benefits

  • Live delivery of applications
  • Image diversity solved
  • Exponential infrastructure efficiency

Project Fargo

  • Uses a VM hot cloning technology to create child VMs that share all memory pages w/ parent. Think linked clones for memory and disk.
  • A running VM is put into a zombie state and hot-cloned to create replicas. This avoids the whole boot cycle, including the associated I/O.
  • Pre-emptive memory sharing, lower CPU, I/O reduction
  • 30x improvement in provisioning
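
The “linked clones for memory and disk” idea above can be sketched with a simple copy-on-write analogy. This is purely an illustration (Python’s `ChainMap` standing in for shared memory pages), not how Fargo is actually implemented:

```python
from collections import ChainMap

# Parent VM's memory "pages" -- illustrative only; Fargo operates on real
# guest memory pages, not Python dicts.
parent_pages = {"page0": "kernel", "page1": "apps", "page2": "data"}

def hot_clone(parent):
    """Create a child that shares all of the parent's pages until it writes."""
    # The child's first map holds its private (written) pages; reads fall
    # through to the shared parent map, so cloning copies nothing up front.
    return ChainMap({}, parent)

child = hot_clone(parent_pages)
print(child["page1"])         # read is served from the shared parent: "apps"

child["page1"] = "patched"    # write lands only in the child's private layer
print(child["page1"])         # "patched"
print(parent_pages["page1"])  # parent is untouched: "apps"
```

The key property, as with Fargo, is that cloning is nearly free: nothing is copied at clone time, and only pages the child actually writes consume new space.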

Just in time desktop revolution

  • Build to order VDI
  • Very little CPU and modest disk IO to fork a new VM
  • Operational flexibility
  • Combine Project Fargo and CloudVolumes for near instant VDI provisioning

Zero Copy Architecture

  • Transfer bits on demand, no more tax before consumption
  • Examples are Project Fargo and CloudVolumes

Project Meteor

  • Bringing together Project Fargo and CloudVolumes
  • Provision new customized desktops in 5 seconds
  • This is VDI simplified

VMworld 2014: DISA STIG vSphere 5 Deep Dive

Session INF1273

This was a very technical session on how to implement the DISA STIGs (security lockdowns) for DoD/Government customers. Many of the slides contained script snippets that help automate the process, thus my session notes are very light. If you are a U.S. Government Federal customer that must comply with the STIGs, then look at the reference slide I have below. The speaker’s automated scripts and VIBs are located on a CAC-only web site for you to download. If you attended VMworld, then listen to this session and gain some insight into the issues the authors found and how to overcome them.

STIGs are broken up into three areas: hosts, VMs, and vCenter.

Checking VM settings with PowerCLI: Easiest report to create, since it relies mostly on VMX settings.
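
To illustrate why VMX-based checks are the easy case, here is a rough sketch that parses VMX-style `key = "value"` lines and flags deviations. The two isolation settings shown are just illustrative examples; a real STIG check (and the speaker’s PowerCLI scripts) covers many more controls:

```python
import re

# Hypothetical subset of STIG-required VM settings; an actual STIG has
# many more controls than these two examples.
REQUIRED = {
    "isolation.tools.copy.disable": "TRUE",
    "isolation.tools.paste.disable": "TRUE",
}

def parse_vmx(text):
    """Parse VMX 'key = "value"' lines into a dict."""
    settings = {}
    for line in text.splitlines():
        m = re.match(r'\s*([\w.]+)\s*=\s*"(.*)"\s*$', line)
        if m:
            settings[m.group(1)] = m.group(2)
    return settings

def check(settings, required=REQUIRED):
    """Return a list of (key, expected, actual) findings."""
    return [(k, v, settings.get(k))
            for k, v in required.items() if settings.get(k) != v]

vmx = '''displayName = "web01"
isolation.tools.copy.disable = "TRUE"
isolation.tools.paste.disable = "FALSE"
'''
for key, want, got in check(parse_vmx(vmx)):
    print(f"FINDING: {key} expected {want!r}, got {got!r}")
```

Because the VMX is just flat key/value text like this, a report only needs to read settings and compare them against the required values, with no deeper API access to the host.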

Checking ESXi settings with PowerCLI: Most host STIG controls cannot be queried via exposed APIs. Shows a script that uses Plink and PowerShell to query settings.

Checking vCenter controls with PowerCLI: Very manual process.

ESXi host hardening requires changing system files or adding new files. Those changes are non-persistent and disappear upon reboot.

ESXi5-CPT: Graphical tool to create VIBs that can replace files on ESXi hosts.

Use ‘esxcli software vib install -d <path> --no-sig-check’ to install the custom VIB, or use PowerCLI.

Additional tools: vCenter Configuration Manager (vCM), Nessus scanner, VMware Compliance Checker, DoD Forge.mil project

VMworld 2014: Ask the expert vBloggers v2

Panel: William Lam, Duncan Epping, Chad Sakac, Scott Lowe

Moderator: Rick Scherer

This was a pure question driven session, and I tried to capture the questions and response as best I could. Please forgive any grammatical errors or typos.

Q: Is there any capability to address any external storage?

A: Duncan – EVO RAIL will allow you to scale out to up to 16 nodes, so the capacity will go up. VSAN is included in the box. The RAIL appliance can connect to existing external storage, just as you would with any ESXi host.

Q: Can you add a non-EVO RAIL host to the cluster?

A: Duncan – It just uses VSAN, so technically you could. However, the big selling point is ease of management, and you can’t use the EVO GUI for non-EVO nodes. So keep cluster members homogeneous. Chad – It’s totally technically possible, but the EVO manager may “freak out” a bit, and you will also have OEM support agreement issues. The support system would break.

Q: As a former vBlock architect, what is the messaging regarding EVO RACK vs. vBlock?

A: Chad – If you are running SAP, Oracle, and replication with lots of features, then use vBlock. If you are deploying IT apps without expecting those vBlock services, then use EVO RACK. Duncan – EVO RACK is basically a reference architecture. VCE might be interested in taking the EVO logic and using their hardware.

Q: A customer changed a VM to use 30 ‘sockets’ instead of changing the cores setting. In the monster VM sizing preso, VMware said to change the socket count rather than the core count to fit in the NUMA node.

A: Duncan – The recommendations in that session are generic. There are many combinations of settings for VMs. There are a lot of licensing aspects to consider as well. Monitor esxtop to see what the %RDY time is. The generic recommendations don’t always apply.

Q: From a VM configuration perspective is there a scheduling difference between specifying cores and sockets?

A: Chad/Duncan – There are scheduler differences between cores and sockets. From a performance point of view if you cross NUMA nodes then you may get a memory performance impact. So best to size VMs to fit within a NUMA node. Chad – Crossing the NUMA node could reduce memory performance a lot.
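
The NUMA sizing advice above boils down to simple arithmetic; here is a quick sketch, with made-up example host figures:

```python
# Back-of-the-envelope check: does a VM's vCPU/memory configuration fit
# inside a single NUMA node? The host figures below are invented examples,
# not a recommendation for any particular server model.

def fits_in_numa_node(vm_vcpus, vm_mem_gb, cores_per_node, mem_per_node_gb):
    """True if the VM can be scheduled entirely within one NUMA node."""
    return vm_vcpus <= cores_per_node and vm_mem_gb <= mem_per_node_gb

# Example host: 2 sockets, 10 cores and 128 GB of RAM per NUMA node.
print(fits_in_numa_node(8, 96, 10, 128))   # True  -- stays node-local
print(fits_in_numa_node(12, 96, 10, 128))  # False -- becomes a "wide" VM
```

The second VM crosses the node boundary on vCPUs alone, so some of its memory accesses would be remote, which is exactly the memory performance impact the panel warns about.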

Q: Hyperconverged looks like a big threat to infrastructure vendors like switches and storage arrays.

A: Chad – You can never win playing a defensive game. People who view fundamental technology disruption, like SDS, need to realize that architectures change and some sales will be cannibalized. Duncan – It opens up a lot of opportunities, like networking. Some solutions don’t include networking today or even have NSX, so it opens more opportunities than it closes. EMC doesn’t worry about it, except for maybe the sales people. Scott – There are plenty of opportunities for EVO RAIL vendors to add value, so it’s not really that big of a threat to most vendors. Rick – From a customer perspective this drives innovation and pushes other hyperconverged companies to innovate even more. It’s a win-win for the customer. It’s the OEMs’ battle to lose.

Q: What do you guys use for your home labs?

A: Chad – Describes a huge home lab with like 300TB of storage; he had to put in a 30A circuit to power his equipment. Instead of a raise, EMC pays Chad’s power bill. Now his lab, based on SSDs, only draws 10A. He describes his Ivy Bridge lab specs. Chad now also uses vCloud Air a lot, to learn about OpenStack and other technologies. William – Uses a Mac Mini, which is light on power. Or use the hands-on labs, or vCloud Air. There are also good providers that let you stand up VMs or use bare metal. Duncan – Duncan acquired a bunch of EVO RAIL servers. He also uses VMware Workstation, which is great for grabbing screenshots or just learning. Scott – Describes an OpenStack suite that he uses. Storage is provided by Synology. Rick – He feels that work should stay at work. He also says mini-ITX cases can be really great for a home lab. A healthy work-life balance is key. Chad – The hosted cloud labs are awesome. If you are an EMC customer, talk to your SE about getting access. Portal.demoemc.com.

Q: As VMware Cloud Air tries to build out, how do you talk to partners that use VSPP. What happens to other service providers?

A: Scott – The vCloud Air team is bringing in partners, so that could expand the network. Chad – Says if he were a provider he would join the vCloud Air network. VSPPs usually do something special that adds additional value, like an industry vertical. Scott – vCloud Air is built to address a general, broad market. There’s always a space for VSPP partners to innovate, such as higher SLAs or unique software offerings. Chad – Once saw a service provider that specialized in clinical research. They are growing like crazy, and are an example of a specialized niche paying off. This is a use case that vCloud Air is not going to address.

Q: Seems like EVO RAIL is making virtualizing easy. But firmware management can be a pain.

A: Duncan – We do realize this is a problem, but the current version does not address it. All EVO partners handle firmware updates differently, so they are working on an abstraction layer that will enable customers to update firmware across the EVO line. EVO RACK is aiming to solve this problem too. Chad – Chimes in that EMC and VMware are working on the problem. William – Some vendors do have VIBs and other drivers to help automate the firmware deployment process. Chad – IPMI is great, but every vendor does it differently. An abstraction layer across IPMI, drivers, firmware, etc. is in the works.

Q: We use EMC storage with auto-tiering and the SAP team complains about latency. IOPS go up and latency goes up. Is the root cause the auto-tiering itself or something else?

A: Chad – No technology is a panacea. All ILMs will have a delayed response to load. vCOPS has great metrics that can help you find the root cause. This is where all flash arrays have an advantage.

Q: NSX question. Nexus core/edge with Cisco UCS. The value prop of NSX is compelling. How quickly will load balancing and firewall appliances like ASA and F5 be replaced by NSX?

A: Scott – This week we announced the availability of NSX 6.1, for pure vSphere environments. One included feature is integration with F5 load balancers. There is no work currently with Cisco to integrate with the ASAs. NSX addresses many common load balancer and firewall use cases, but it won't cover every feature, so displacing hardware appliances will depend on which features you are using. Chad – Often hardware appliances are used in generic applications and don't use proprietary features that NSX won't have. NSX will continue to innovate and add new features.

Q: I’m a storage admin, for about 10 months, and I now have 1,000 servers with NetApp, EMC, and Dell EqualLogic storage. Can you give me any ideas as a new storage administrator? It's a university environment. He describes an environment that is in a very unhealthy state.

A: Duncan – Go VSAN. Chad – There’s this thing called LinkedIn to find a new job. Just kidding. Keep firmware up to date. Automate. When you find yourself logging into the UIs, you are doing it wrong. Standardize. Standardize. Standardize. Stick with 2-3 core vendors. William – Use vCO to help automate the storage tasks. You can create workflows for different operational requirements. You can create runbooks. Chad – You could also use EMC VIPR, which is free with community support. VIPR is a harmonizer for all storage “Crap”. Rick – Seriously, VIPR controller is a good option.

Q: If you are looking to write to APIs, and don’t want to use VIPR controller, how would you do it? What kind of SDK platform should I use?

A: William – Depends on where you want to standardize. You could choose OpenStack or the vSphere APIs. There's no magic answer to bridge everything. Scott – There's no magic Rosetta Stone. Chad – VIPR controller makes EMC no money; it was written to help bridge the storage gaps. No complete answer to this problem exists today. William – This is a hard problem to solve. You also need to understand what maintaining the interface means over time, which is hard. Chad – 200 engineering folks worked on creating the VIPR controller.

Q: Can you please name one product or feature announced at VMworld that makes the biggest impact in the industry? Name some new product or vendor that you also like.

A: Scott – VMware Integrated OpenStack is huge. Pay close attention to Pivotal and Docker around containers. Duncan – EVO RAIL is the clear winner. Look at ThousandEyes for network monitoring; Duncan thinks this is a really cool solution. William – EVO RAIL, since this is a different consumption model. The Google partnership is also good. Chad – vCloud Air is a big one for Chad. The company to watch is Node Prime. Rick – Project Fargo is amazing.

Q: I’m looking for some clarification and need some help in setting the IOPS for round robin (VMAX).

A: Chad – Asks what array is in use. For VMAX, use IOPS set to 1 across the board.

Q: Question around vVols.

A: Chad – VMware has submitted the vVol spec to the T10 standards body, so other vendors may support it. Duncan – VMware is actively trying to help the open source community and drive industry standards, but it will take time. Scott – VMware is pushing for enablement across the board.


VMworld 2014: vSphere HA Best Practices and FT Preview

Session BCO2701. This was very fast paced, and I missed jotting down a lot of the slide content. If you attended VMworld then I recommend you listen to the recording to get all of the good bits of information.

vSphere HA – what’s new in 5.5

  • VSAN Support
  • AppHA integration

What is HA? It protects against three types of failures:

  • Host failures, VM crashes
  • Host network isolation and datastores incurring PDL (Permanent Device Loss)
  • Guest OS hangs/crashes and application hangs/crashes

Best Practices for Networking and Storage

  • Redundant HA network
  • Fewest possible hops
  • Consistent portgroup names and network labels
  • Route based on originating port ID
  • Failback policy = no
  • Enable PortFast
  • MTU size the same

Networking Recommendations

  • Disable host monitoring if network maintenance is going on
  • vmknics for vSphere HA on separate subnets
  • Specify additional network isolation addresses
  • Each host can communicate with all other hosts
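
For the "additional network isolation addresses" bullet, the knobs involved are HA advanced options on the cluster. A minimal example, with placeholder gateway IPs (substitute pingable addresses on your HA networks):

```
das.usedefaultisolationaddress = false
das.isolationaddress0 = 10.0.1.1
das.isolationaddress1 = 10.0.2.1
```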

Storage Recommendations

  • Storage Heartbeats – All hosts in the cluster should see the same datastores

Best Practices for HA and VSAN

  • Heartbeat datastores are not necessary in a VSAN cluster
  • Add a non-VSAN datastore to cluster hosts if VM MAC address collisions on the VM network are a significant concern
  • Choose a datastore that is fault isolated from VSAN network
  • Isolation address – use the default gateways for the VSAN networks
  • Each VSAN network should be on a unique subnet

vSphere HA Admission Control

  • Select the appropriate admission control policy
  • Enable DRS to maximize the likelihood that VM resource demands are met
  • Simulate failures to test and assess performance
  • Use the impact assessment fling
  • Percentage based is often the best choice but need to recalculate when hosts are added/removed
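
Since the percentage-based policy's recalculation caveat is easy to gloss over, here's a tiny sketch of the math (my own illustration of the general idea, not VMware's implementation) showing why the reservation must change when hosts are added or removed:

```python
# Illustrative math behind percentage-based admission control: the
# percentage of resources reserved for failover depends directly on
# cluster size, so it must be recalculated when the cluster changes.

def reserved_percentage(total_hosts: int, host_failures_to_tolerate: int = 1) -> float:
    """Percent of cluster resources to reserve so the cluster can
    absorb the configured number of host failures."""
    if host_failures_to_tolerate >= total_hosts:
        raise ValueError("cannot tolerate losing every host")
    return 100.0 * host_failures_to_tolerate / total_hosts

# A 4-host cluster tolerating 1 failure reserves 25%; grow the cluster
# to 8 hosts and the correct reservation drops to 12.5%. Leaving the
# old 25% in place silently strands capacity.
```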

Tech Preview of FT

  • FT will support up to 4 vCPUs and 64GB of RAM per VM
  • FT now uses separate storage for the primary and secondary VMs
  • The new FT method does not keep CPUs in lockstep, but relies on super-fast checkpointing

Tech Preview HA

  • VM Component protection for storage is a new feature
  • Problem: Detects APD and PDL situations
  • Solution: Restarts affected VMs on unaffected hosts
  • Shows a GUI with options for what you want to protect against

Tech Preview of Admission control fling

  • Assesses the impact of losing a host
  • Provides sample scenarios to simulate


VMworld 2014: vCenter Server Architecture Deep Dive

Session INF2311, Justin King

vCenter Server Configuration Options: 5.0 and later

Configuration Option for v5.5 #1:

  • Use simple installer
  • Multiple vCenters for different geographic locations
  • Single SSO authentication domain

Configuration for 5.5 #2:

  • Centralized SSO on dedicated VM
  • A datacenter with 3 or more solutions (e.g. vCAC, etc.)
  • Availability uses vSphere HA and network load balancer

Utilize A management cluster

  • Run multiple vCenter components together on the same virtual machine, minus the database
  • Recommendations: 3 vSphere Hosts, enable vSphere HA

vCenter SSO recommendations

  • Embedded vCenter SSO reduces complexity
  • Up to 8 instances
  • 12ms latency
  • Same vSphere.local domain
  • Centralized SSO-only VMs (3 or more solutions like vCenter, vCAC, etc.)
  • All configurations: Backup each instance

vSphere client in vSphere 5.5 Update 2 supports HW v10. Yippee!

vCenter Server Tech Preview

  • SSO is now part of a new “Platform Services Controller” (PSC). Certificates, licensing, etc. will register with the platform controller.
  • You can embed the PSC (platform service controller) with vCenter, just like you did with SSO
  • Think of the PSC as the new SSO, but with a lot more services
  • The existing SSO topologies (embedded or external) remain valid for the PSC
  • You can mix and match PSC embedded and external instances, all sharing the same SSO domain

vSphere 6.0 Tech Preview Install and Upgrade

  • One installer that allows you to choose the deployment type (embedded or external PSC)
  • Asks for all input up front, validates it, then deploys the software
  • Scripted install for advanced users
  • The Linux appliance install is completely new, with a guided installer and pre-checks
  • Full upgrade path from 5.0, 5.1 and 5.5


  • The appliance model is now on par with Windows in terms of the number of VMs and ESXi hosts
  • Linked mode drops ADAM and will be supported on the Linux appliance


VMworld 2014: What’s new in SRM & vSphere Replication

Session BCO2629

Software defined storage and availability

  • Bringing the efficient operational model of virtualization to storage
  • Common policy based model
  • VM centric data services
  • Abstraction and pooling of infrastructure

vCenter Site Recovery Manager

  • Industry-leading disaster recovery automation solution for vSphere environments
  • Uses array-based replication
  • Use cases include DR, disaster avoidance, planned migration
  • Recovery workflows: Failover automation, non-disruptive failover testing, planned migration, failback automation

What’s new in SRM 5.8?

vCAC Integration

  • vCAC workflow support – Self-service policy-based DR protection for Apps and other workflows
  • vCAC management across both sites. Integration with the vCO plugin for SRM
  • New PowerCLI APIs exposed
  • Automated protection mapping according to pre-defined tiers via vCAC
  • DR control delivered as a service to app tenants via vCAC
  • The vCO plug-in for SRM offers many workflows, like creating protection groups and adding VMs, finding protection groups by datastore, etc.


Scalability and Performance

  • 5,000 protected VMs, versus 1,500 in 5.5
  • 2,000 VMs concurrent recovery, versus 1,000 in 5.5
  • Many performance improvements across the board – up to 75% faster
  • IP customization is a huge time sink, so if you avoid it then recovery will be even faster

Ease of Use

  • Built-in vPostgres database, or you can still use an external DB
  • Integrated into the vSphere web client

IP Subnet Mapping

  • Can use the dr-ip-customizer – Old method is still there
  • Can now use rules to define customization as part of the network mapping, to preserve parts of the IP address while mapping to the new subnet
  • Based on subnet mask
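
To make the rule-based mapping concrete, here's a small sketch of the idea of preserving the host portion of an address while swapping in the recovery site's subnet. This is my own illustration of the concept, not SRM's actual engine:

```python
# Sketch: keep the host bits of a VM's IP (per the subnet mask) and
# graft them onto the recovery site's subnet, mimicking what SRM 5.8's
# rule-based IP customization does conceptually.
import ipaddress

def remap_ip(vm_ip: str, protected_subnet: str, recovery_subnet: str) -> str:
    old_net = ipaddress.ip_network(protected_subnet)
    new_net = ipaddress.ip_network(recovery_subnet)
    # Host portion = address bits not covered by the network mask.
    host_bits = int(ipaddress.ip_address(vm_ip)) & int(old_net.hostmask)
    return str(ipaddress.ip_address(int(new_net.network_address) | host_bits))

# remap_ip("192.168.10.57", "192.168.10.0/24", "10.20.30.0/24")
# -> "10.20.30.57" (the .57 host part survives the subnet change)
```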

VSAN + vSphere Replication and Site Recovery Manager: VSAN is compatible with vSphere Replication

vSphere Replication  (VR)

  • Included in vSphere essentials plus and higher
  • Protects up to 500 VMs
  • Replication happens at the host level
  • Easy virtual appliance deployment
  • Quick recovery for individual VMs
  • Replication engine for SRM
  • Replicate workloads to vCenter Server and vCloud Air
  • VR components: vSphere replication management server (VRMS), vSphere Replication Server (VRS), vSphere Replication Agent (VRA)
  • Supports VSS quiescing – Only use if you need to

What’s new in 5.8

  • Replicate to the cloud – vCloud Air
  • vSphere Replication reporting
  • vSphere replication calculator – Allows you to solve for network bandwidth, how many VMs you can protect, and other features
  • Capacity planning appliance fling
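
The replication calculator mentioned above solves for bandwidth, among other things. A back-of-the-envelope version of that calculation (my own simplified formula, not VMware's tool) looks like this:

```python
# Rough sketch of what a replication calculator solves for: the
# average bandwidth needed to keep up with a daily change rate.
# Real sizing also accounts for RPO, bursts, and compression.

def required_mbps(changed_gb_per_day: float, link_utilization: float = 1.0) -> float:
    """Average Mbit/s to replicate the daily changed data, optionally
    derated by the fraction of the link replication may consume."""
    mbits_per_day = changed_gb_per_day * 8 * 1024   # GB/day -> Mbit/day
    return mbits_per_day / (24 * 3600) / link_utilization

# 100 GB/day of changed data needs roughly 9.5 Mbit/s of sustained
# bandwidth; if replication may only use half the link, double that.
```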



VMworld 2014: Software Defined Storage the VCDX way, Part II

Session STO2480, Rawlinson Rivera, Wade Holmes

This was a great session by the duo, covering VMware storage technology the VCDX way. The session was a continuation of Part 1, which was at VMworld 2013. As a side note, the great Part 1 session was one of the factors figuring into my decision to go for my VCDX. I thought, “Hey, I can do that too.” So buckle up and see what these two world-class VCDXs have to say.

  • The VCDX Way: Methodology to enable efficient technology solution design, implementation and adoption, meeting YOUR business requirements.
  • Business requirements drive solution architecture: Business requirements -> Solution Architecture -> Engineering specifications
  • Areas: Availability, manageability, performance, recoverability, security

Increasing diversity of devices

  • Hot edge: CPU/memory bound, low latency, dominated by flash
  • Cold core: capacity-centric, increasing commodity hardware

Storage Policy Based Management Solution Impact

  • Availability
  • Manageability
  • Performance
  • Recoverability
  • Security
  • All these areas need policy-driven, vm-centric, virtualized data plane

Choice in building a virtual SAN based solution

  • Component based – Choose individual components
  • Virtual SAN ready node – 40 OEM validated server configurations
  • VMware EVO RAIL: Hyper-converged infrastructure

Build your own virtual SAN Node

  • Any server on the VMware HCL
  • At least one SSD/PCIe device and one HDD
  • 1Gb/10Gb NIC – 10Gb preferred
  • SAS/SATA controllers – Queue depth is key. RAID 0 and pass-through modes are supported, with pass-through preferred since it allows VSAN to handle hot-plug events.
  • 4GB to 16GB USB/SD flash device or HDD for booting ESXi
  • Each disk group has a maximum of 7 disks
  • Size working set to fit in the flash tier, if performance is key
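
To tie the build-your-own constraints together, here's a trivial sanity-check sketch. The limits mirror the session notes above (at least one flash device and one HDD, at most seven disks per disk group); it's illustrative, not an official validator:

```python
# Illustrative validation of a build-your-own Virtual SAN disk group,
# using the constraints from the session notes: >= 1 SSD/PCIe flash
# device, and 1 to 7 capacity HDDs per disk group.

def valid_disk_group(flash_count: int, hdd_count: int) -> bool:
    return flash_count >= 1 and 1 <= hdd_count <= 7

# valid_disk_group(1, 7) -> True; an eighth HDD means you need to
# create a second disk group rather than grow this one.
```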

Network design – The old design is core/aggregation/access; the new architecture is spine/leaf.

VSAN as a Platform: Ready nodes with T10 DIF support and encryption

Legacy Operational Model creates several challenges

  • SAN, NAS and all-flash. There’s a chain from the VM down to the storage array.
  • Storage controller challenges: lengthy provisioning cycles, difficult to make changes, etc.

Storage management with SDS: VASA 1.5 and higher enable VM agility and much faster provisioning

  • vVols (VASA 2.0)
  • VSAN (VASA 1.5)
  • System labels (VASA 1.0) in vSphere 5.0

VMware software-defined storage for external storage

  • Storage policy
  • Enables a method to consume storage via policy while making it more secure and agile
  • Policy driven, VM-centric control plane
  • Virtual volumes is the primary way to enable storage policy with third party arrays
  • No more LUNs/Volumes
  • Enables you to publish capabilities like snapshots, replication, deduplication and QoS
  • All about ease of management
  • vVols will change everything!! Simplicity, with a policy based framework

Virtual Volume Architecture

  • Control path uses the vendor provided VASA provider – Out of band
  • Data path remains the same as before (FC, iSCSI, etc.)
  • Storage admin can now define storage properties
  • There is no more VMFS!

Solving Storage Provider Challenges through integration with partner solutions

  • vSphere API for IO filtering (PernixData)
  • Software driven data services driven by third parties

Enabling self-service consumption

  • vCloud Automation center integration

Key takeaways

  • Start from the top down
  • VMware enables software defined storage
  • SDS can enable efficiencies
  • SDS can provide both CAPEX and OPEX savings as your datacenter grows



VMworld 2014: Next-Gen Storage Best Practices

Session STO2496 with Rawlinson Rivera (VMware), Chad Sakac (EMC), and Vaughn Stewart (Pure Storage)

Simplicity is the key to success:

  • Use large datastores
  • Limit use of RDMs
  • Use datastore clusters
  • Use array automated tiering storage
  • Avoid jumbo frames for iSCSI and NFS
  • KISS gets you out of jail a lot
  • Use VAAI
  • Use plug-able storage architecture

Key Storage Best Practice Documents – Use your vendor’s docs first. VMware docs are just general guidelines.

Hybrid arrays – Use some degree of flash and HDD. Examples are Nimble, Tintri, VNXe, etc.

Host caches such as PernixData, VFRC, SanDisk.

Converged infrastructure such as Nutanix and SimpliVity.

All-flash arrays such as SolidFire, XtremIO, and Pure Storage.

Data has active I/O bands – the working set size. Applications tend to overwrite the hottest ~15% of data.

Benchmark Principles

  • Don’t let vendors steer you too much – Best thing is to talk to different customers
  • Good benchmarking is NOT easy
  • You need to benchmark over time
  • Use mixed loads with lots of hosts and VMs
  • Can use SLOB or IOMETER, configured to use different IO sizes
  • Don’t use an old version of IOMETER. A new version was released about six weeks ago
  • Generating more than 20K IOPS out of one workload generator is hard. If you want 200K IOPS, you will likely need 20 workers
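
The last bullet is really just arithmetic with headroom built in. Here's a sketch of the rule of thumb (hedged: the ~20K IOPS per generator ceiling varies wildly with IO size and tooling):

```python
# The "one workload generator tops out around 20K IOPS" rule of thumb,
# as arithmetic. Per-worker throughput is an assumption that should be
# measured for your own tooling, not taken as fixed.
import math

def workers_needed(target_iops: int, per_worker_iops: int = 20_000) -> int:
    """Minimum worker count to hit a target aggregate IOPS."""
    return math.ceil(target_iops / per_worker_iops)

# By this math a 200K IOPS test needs at least 10 workers; the
# presenters' "20 workers" figure simply builds in generous headroom.
```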

Architecture Matters

  • Always benchmark under normal operations, near system capacity, during system failures
  • Always benchmark resiliency, availability and data management features
  • Recommend testing with actual data
  • Dedupe can actually increase performance by discarding duplicate data
  • Sometimes all flash array vendors will suggest increasing queue depth to 256, over the default of 32
  • Queues are everywhere

Storage Networking Guidance

  • VMFS and NFS provide similar performance
  • Always separate guest VM traffic from storage VMkernel network
  • Recommendation is to NOT use jumbo frames – 0 to 10% performance gain with it turned on
  • YMMV

Thin provisioning is not a data reduction technology

Data reduction technologies are the new norm: Dedupe block sizes change (512b Pure, 16KB 3PAR, 4KB NetApp)

  • There is a big operational difference between inline and post-process
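
To illustrate the inline flavor of dedupe, here's a toy sketch: fingerprint each fixed-size block as it is "written" and store only unique blocks. The block size is the knob the vendors above differ on (512B Pure, 16KB 3PAR, 4KB NetApp); this is not any vendor's actual design:

```python
# Toy inline dedupe: hash each fixed-size block on write; only blocks
# with an unseen fingerprint consume space. Post-process dedupe would
# instead land everything on disk and reclaim duplicates later.
import hashlib

class InlineDedupeStore:
    def __init__(self, block_size: int = 4096):
        self.block_size = block_size
        self.blocks: dict[str, bytes] = {}   # fingerprint -> block data

    def write(self, data: bytes) -> int:
        """Ingest data; return how many *new* blocks were stored."""
        new = 0
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.blocks:
                self.blocks[digest] = block
                new += 1
        return new

# Writing the same two-block buffer twice stores 2 unique blocks, not 4
# -- the duplicate pass is discarded inline, which is also why dedupe
# can *improve* performance by skipping redundant back-end writes.
```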

Data reduction in virtual disks (better in Hyper-V than vSphere 5.x): T10 UNMAP is still a manual process in vSphere

Quality of Service

  • In general QoS does not ensure performance during storage maintenance or failures
  • Many enterprise customers can’t operationalize QoS and do better with All flash arrays
  • QoS can be important capability in some storage processor use cases
  • With vVols there may be a huge uptick in QoS usage

VMware Virtual SAN Integration

  • Unparalleled level of integration with vSphere
  • Enterprise features: NIOC, vMotion, SvMotion, DRS, HA
  • Data protection: Linked clones, snapshots, VDP advanced
  • Policy based management driven architecture
  • VSAN best practices: 10Gb NICs, Use VDS, NIOC, queue depth of 256, don’t mix disk types in a cluster
  • Uses L2 multicast
  • 10% of the total capacity should be in the flash tier
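
The 10% flash-tier bullet is worth making concrete. A minimal sketch of the sizing (real VSAN sizing guidance has more nuance, e.g. anticipated consumed capacity and growth, so treat this as a starting point):

```python
# The "10% of capacity in the flash tier" rule of thumb from the
# session, as a simple calculation. The ratio is a guideline, not a
# hard limit -- read-heavy vs write-heavy working sets shift it.

def flash_tier_gb(anticipated_capacity_gb: float, ratio: float = 0.10) -> float:
    """Flash capacity suggested for a given amount of VSAN capacity."""
    return anticipated_capacity_gb * ratio

# 20 TB of anticipated capacity calls for roughly 2 TB of flash
# spread across the cluster's disk groups.
```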


Automation Advice

  • Automate everything that you can
  • Do not hard code to any vendor’s specific API
  • Do not get hung up on Puppet vs. Chef vs. Ruby, etc.