Ignite 2015: Platform Vision: Storage

Session: BRK2485

What is SDS?

  • Cloud-inspired infrastructure and design – Commodity HW, cost efficiencies
  • Evolving technologies – Flash, network performance, VMs and containers
  • Data explosion – Device proliferation, modern apps
  • Scale out with simplicity – Integrated solutions, rapid time to solution, policy-based management

Storage customer choice: private cloud with traditional storage (SAN/NAS), private cloud with Microsoft SDS (Azure Stack storage), hybrid cloud storage (StorSimple with Azure storage), and public cloud storage (Azure storage).

Where are we today in the SDS journey? The speaker goes into detail about WS2012 R2 solutions, including those from Dell.

What's in Technical Preview 2 of WS2016?

  • Reliability – Cross-site availability and DR, improved tolerance to transient failures
  • Scalability – Manage noisy neighbors and demand surges, deploy mixed workloads in shared environment
  • Manageability – Easier migration to the new OS version
  • Reduced cost – More cost-effective by using volume HW, use SATA and NVMe in addition to SAS

Storage QoS

  • Simple out of box behavior – Enabled by default on scale-out file server, automatic metrics per VHD, VM, host, volume; includes normalized IOPS and latency
  • Flexible and customizable policies – Policy per VM, VHD, service or tenant; fair distribution within policy
  • Management – System Center VMM and Ops Manager; PowerShell built-in for Hyper-V and SoFS
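
To make the policy model concrete, here's a minimal sketch using the WS2016 Storage QoS cmdlets (policy name, values, and VM name are hypothetical, and Technical Preview syntax may differ slightly):

  # Create a policy and bind it to every VHD of a VM (names hypothetical)
  $gold = New-StorageQosPolicy -Name "Gold" -MinimumIops 100 -MaximumIops 1000
  Get-VM -Name "SQL01" | Get-VMHardDiskDrive | Set-VMHardDiskDrive -QoSPolicyID $gold.PolicyId
  # Inspect per-VHD flows, including normalized IOPS
  Get-StorageQosFlow | Sort-Object InitiatorIOPS -Descending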

Rolling Upgrades

  • Simple – Rolling upgrade within cluster
  • Seamless – Zero downtime for Hyper-V and scale-out file server
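
A rough sketch of the per-node flow, assuming the cluster cmdlets as they appear in WS2016 (node name hypothetical):

  Suspend-ClusterNode -Name "Node1" -Drain      # drain roles off the node
  # ...clean-install the new OS on Node1 and rejoin it to the cluster...
  Resume-ClusterNode -Name "Node1" -Failback Immediate
  # Once every node runs the new OS, commit the new functional level (one-way)
  Update-ClusterFunctionalLevel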

VM Storage resiliency: reliability

  • Resiliency – Freezes VMs if storage path is lost
  • Visibility –

Storage Replica

  • Protection of key data and workloads
  • Synchronous replication – Storage agnostic mirroring with crash-consistent volumes
  • Increase resilience – Metro distance clustering
  • Complete solution – End-to-end for storage and clustering
  • Streamlined management – GUI management for nodes and clusters
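
Alongside the GUI, there is a PowerShell path; a hedged sketch of creating a server-to-server synchronous partnership with the WS2016 Storage Replica cmdlets (computer, group, and volume names hypothetical):

  # Source/destination names and volumes are hypothetical
  New-SRPartnership -SourceComputerName "SR-SRV01" -SourceRGName "RG01" `
      -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
      -DestinationComputerName "SR-SRV02" -DestinationRGName "RG02" `
      -DestinationVolumeName "D:" -DestinationLogVolumeName "L:"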

Storage Spaces Direct

  • Cloud Design points and management – Standard servers with LOCAL storage, supports NVMe
  • Reliability, scalability, flexibility – Fault tolerance to disk, enclosure and node failures
  • Use cases – Hyper-V IaaS storage, hyperconverged
  • Partners: HP Apollo 2000, Quanta D51PH, Lenovo x3650, Dell, Cisco
  • Uses ReFS as the underlying filesystem
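
A minimal sketch of standing up a Storage Spaces Direct cluster, using the cmdlet names from later WS2016 builds (node and volume names hypothetical; TP2 syntax may differ):

  New-Cluster -Name "S2DCluster" -Node "Node1","Node2","Node3","Node4" -NoStorage
  Enable-ClusterStorageSpacesDirect
  # ReFS-formatted CSV carved from the auto-created pool
  New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "VMStore" -FileSystem CSVFS_ReFS -Size 2TB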

Azure-consistent storage (future preview)

  • Consistent – Azure consistent blob, table, and account management for on prem
  • Integrated – Deployed as Microsoft Azure stack cloud services
  • Manageable – Azure cmdlets, APIs and templates
  • Scalable – PaaS services as front-end, high-performance scale-out IaaS via internal SMB

Demo shows a 100Gb NIC and NVMe direct storage with <1ms latency and 11 GB/s (gigabytes per second) of throughput over one network port. WOW! They literally saturated a 100Gb NIC with SMB traffic, at approximately 20% CPU utilization.

Nutanix and Veeam Hyper-V Best Practices

Earlier this year I had the distinct pleasure of working with Luca Dell’oca (@Dellock6) from Veeam on a Nutanix + Veeam Backup and Replication + VMware vSphere whitepaper. You can check out that post and whitepaper here. Now, just a few months later, we’ve collaborated on a Nutanix + Veeam + Hyper-V 2012 R2 backup whitepaper. The new whitepaper is available here.

The goal of these two joint whitepapers is to enable our mutual customers to deploy Veeam Backup and Replication 7 on Nutanix with the two leading virtualization platforms. Both whitepapers are approximately 20 pages and go into a lot of great detail. We tested both solutions in the lab to ensure what we are recommending works in the real world. This is not high-level marketing fluff, folks. No fluff zone. We detail the best practices for using Nutanix SMB 3.0 shares with Hyper-V 2012 R2 and Veeam Backup and Replication 7.0.

Veeam is a very popular backup solution, which now has in excess of 101,000 global customers. They are also a sponsor of my blog. The web-scale Nutanix solution and its support of the Hyper-V 2012 R2 VSS platform complement the Veeam Backup and Replication product to provide a robust backup and restore solution. This allows you to meet your RPO and RTO requirements in a fully supported and efficient manner. I've always been impressed with how easy Veeam is to configure compared to some of the competition in the market. One of Nutanix's hallmarks is also uncompromising simplicity, so both products can be quickly and easily deployed.

For those of you familiar with our joint solution for VMware, there we deployed a small Veeam backup proxy VM on each node, which locally backed up the VMs on that node. Hyper-V is a bit different, and actually more streamlined. Veeam installs a tiny backup agent on each Hyper-V parent partition, which handles the backup proxy functions. This means you don't need to deploy a new VM on each node, saving some physical resources. The model is essentially linear scale-out of your backup infrastructure, distributing the load across your Nutanix nodes. Great complementary technology in action.

Nutanix CVM

Since Nutanix fully supports multi-hypervisor deployments, it's great to see the ability to leverage Microsoft VSS snapshots as part of the backup process. Veeam can take application-consistent backups of enterprise applications like SQL, Exchange, and Active Directory by leveraging Nutanix-based SMB 3.0 VSS snapshots. You are not limited to crash-consistent backups, which may not meet your organization's requirements. Support is provided in Nutanix NOS 3.5.4 and later, including 4.0.

VSS

One of the great aspects of our joint whitepaper is the variety of deployment models that we cover. These range from an all-Nutanix solution, to a hybrid model using an existing physical Veeam backup server, to a dedicated backup appliance. Every customer is different, and this choice lets you pick the model that best fits your environment.

The full gamut of Veeam restore options is available to Nutanix customers, including the ability to do fast restores and directly test your backups. No restore modifications are needed if you are using the Nutanix platform.

Best Practice Checklist

As part of the whitepaper we provide a detailed best practices checklist, so you can quickly see what the joint solution recommends and make sure you are following the recommendations. I won't cover all 16+ here, but here are some highlights:

  • Use Hyper-V 2012 R2
  • Use a 64-bit operating system for the Veeam server(s)
  • Use Veeam Backup and Replication 7.0 patch 4 (or later)
  • Avoid active full backups and use reversed incrementals or forward incremental with synthetic full
  • Deploy a Veeam proxy agent on each Hyper-V parent partition
  • Configure backup jobs to use VSS for application consistency
  • Use Nutanix NOS 3.5.4 or 4.0 (or later)

Summary

A lot of collaboration went into the whitepaper, and it went well beyond just Luca and myself writing the paper and getting it out the door. We also tested the solution in the lab to verify the settings and software versions worked as advertised. The VMware version of the paper was very well received, and I hope this Hyper-V version is equally helpful to customers. You can download the full 23-page whitepaper here.

Nesting Hyper-V 2012 R2 on ESXi 5.5

Since joining Nutanix I've had the opportunity to get exposed to Microsoft Hyper-V 2012 R2, as our platform supports the three most common hypervisors: VMware vSphere, Hyper-V, and KVM. I'm now embarking on writing some Hyper-V guides for Nutanix, and wanted a way to leverage my existing ESXi 5.5 Nutanix block to learn about Hyper-V networking. While I'm very familiar with VMware networking, this project presented itself as a great learning opportunity for Hyper-V. This article will show you how to nest Hyper-V 2012 R2 on ESXi 5.5.

My first challenge in getting a proper Hyper-V test bed set up was to deploy Windows Server 2012 R2 on my ESXi 5.5 express patch 4 host, then get the Hyper-V role installed. Now, what I'm about to do is very unsupported, and I'm only doing it for my personal learning and to quickly deploy a Hyper-V "learning" lab. After some extensive Binging and trial and error, I've narrowed down the unsupported tweaks needed to successfully run Hyper-V 2012 R2 on VMware ESXi 5.5.

Let’s get to it!

1. Deploy your standard Windows Server 2012 R2 template. Mine happened to be fully patched, and included the spring "update" which gave us back a semi-functional start button. I also used customization specifications to automatically rename the VM, install the license key, change the SID, etc. Nothing earth shattering here. I also used vHW v8, versus the newer v10 VM.

2. Power off your freshly deployed WS2012 R2 VM, and unregister it from vCenter.

3. Download the corresponding .VMX file to your computer and open it in WordPad.

4. Somewhere in the VMX file, add the following two lines:

vhv.enable = "TRUE"

hypervisor.cpuid.v0 = "FALSE"

5. If you have upgraded your VM to vHW v10, you can follow William Lam's tip and set the guestOS to "windowsHyperVGuest". Since I was using vHW v8, I just left it at the default "windows8srv-64".

6. Save the VMX file and re-upload it to the datastore, overwriting the old file.

7. Right click on the VMX file and register the VM.

8. Now I didn't need to do this, but I saw that some other users had to configure this setting. In vCenter, open the properties of the VM and change the CPU/MMU virtualization option to the bottom option (use hardware support for both CPU and MMU virtualization).

9. Power on your VM, then login to Windows.

10. Install the Hyper-V role, and you shouldn't get any warnings. Reboot after the role is installed, and now you are ready to rock and roll with Hyper-V 2012 R2.
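
If you prefer PowerShell over Server Manager for step 10, this one-liner should do the same thing:

  Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart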

TechEd 2014: Mark and Mark on the Cloud

Session DCIM-B386: Mark Russinovich and Mark Minasi on Cloud Computing. Mark and Mark are easily the top two speakers each year at TechEd. Between their delivery style and technical content, you can't beat them. This session had zero slides, and was more of a Q&A format. Minasi asked Russinovich a variety of questions. I've captured some of the highlights in the session notes below. For the full effect, and lots of jokes, watch the video on Channel 9 whenever it gets posted.

  • Azure will double capacity this year, and then double again next year. They have over one million servers today and buy 17% of the servers worldwide.
  • Parts of Azure update from daily to every three weeks. Different components have different release cadences.
  • The Azure Hyper-V team branches the code base with new features, then the Azure features are rolled back into the general public release in the future. The merging and branching happens continuously.
  • Boxed products like Windows Server have a much longer test cycle than Azure releases. Different risk mentality.
  • Azure now runs on stock Hyper-V 2012 R2. Previously it was running a branched WS2012 hypervisor.
  • Building Azure is speeding up the pace at which features are added to Windows Server and other MS products.
  • The cloud is becoming cheaper and cheaper. Automation drives the cost of computing down. You must force yourself to automate.
  • Azure buys a zillion servers, custom white boxes, and intense automation drives down the prices.
  • Mark R. states there will be on-prem “forever”. For example, you still see mainframe today.
  • We are at the beginning of the hockey stick and haven’t hit the inflection point for cloud migrations.
  • On-prem will still be growing for the next several years. But the cloud will be growing much, much faster than on-prem.
  • As the cloud scales up, that’s where all the innovation and investments will go.
  • One common path to the cloud is dev/test. Developers are in a hurry and can easily spin up VMs without waiting for IT. They are off and running with no need to wait for on-prem resources, and there are fewer security concerns.
  • Another common scenario is using the cloud for DR. Maybe companies will just leave it in the cloud after a failure.
  • Three major cloud players: Azure, Amazon, Google. The others in the short term will still exist, but over the years will fall away.
  • Cloud providers need a global presence and footprint, and takes 3 years and $1b per datacenter to build out. MS is building out 20 concurrent datacenters right now. Small cloud providers just can’t compete on that scale.
  • Microsoft thinks they are the best cloud player because customers already have MS software on-prem and know it well. MS has a good connection with customers/products. Azure has Active Directory, which lets you use on-prem credentials for the cloud. Same role based access controls.
  • Active Directory is the center of gravity for cloud identity.
  • Office + Active directory worked extremely well for on-prem, and Azure is duplicating that in the cloud.
  • Over the next two years MS will increase the ‘same experience’ between on-prem and Azure, first starting with developers. The second priority is production workload similarity. Application and management consistency between on-prem and Azure.
  • IP addresses in Azure are not static. If you power cycle (not reboot) a VM it may/will get a different IP address.
  • This week MS announced true static IPs in Azure. You get 5 static IPs for free with every subscription.
  • Multiple NICs are coming to Azure VMs “soon”
  • Azure storage can be geo-replicated at an additional cost
  • Azure offers “site recovery” feature. Symantec is offering Azure backup targets.
  • Microsoft says a bug that would expose customer data to other customers would be “catastrophic” and may be end of the cloud.
  • Microsoft is very concerned about data security
  • Microsoft does not datamine from VMs in Azure
  • MS is working on encryption technology where you can do compute on encrypted data but MS will not have access to the data.

Beyond informative, the session was very entertaining. I definitely recommend watching the video for the full effect.

TechEd 2014: Software defined storage in WS2012 R2

Session: DCIM-B349. Software defined storage with Windows Server 2012 R2 and System Center 2012 R2. This was a jam-packed session with tons of content on each slide. Great in-depth talk about what's new in the 2012 R2 wave, which came out last year. I only captured 25% of the slide content below, so be sure to check out the Channel 9 video and slide deck when they get posted, for all the goodies.

Storage Enhancements

  • New approach to storage: file-based storage (SMB3) over Ethernet networks. Cost effective storage.
  • Faster enumeration of SMI-S storage providers
  • Virtual Fibre Channel integration in SC 2012 R2
  • SC can now leverage ODX for fast VM copy operations
  • Investments in Fibre Channel switch discovery and pulling that into VMM. Shows a demo of creating a FC zone in VMM. Also shows provisioning a LUN from a Fibre Channel array from within VMM. You can configure a LUN in a service template, so all VMs get access to the LUN.

Focused Scenarios for 2012 R2 Wave

  • Reducing CAPEX and OPEX

Infrastructure-as-a-Service Storage Vision

  • Dramatically lowering the costs and efforts of delivering IaaS storage services
  • Disaggregated compute and storage – Independent management and scale at each layer
  • Industry standard servers, networking and storage – Inexpensive networks, inexpensive shared JBOD storage
  • Microsoft is heavily investing in the SMB protocol and will use this going forward as the basis of storage
  • Overall objective is to reduce cost. The cheapest storage is the storage you already own.
  • Ability to use “Spaces” with low cost JBOD
  • Ability to manage the full solution within System Center

Storage Management in System Center 2012 R2

  • Insight, Flexibility, Automation
  • Storage Management API (SM-API)
  • New architecture for 10x faster enumerations
  • Capacity management, scale-out-file-server, and a lot more

Guest Clustering with shared virtual disks

  • Guest clustering with commodity storage
  • Sharing VHDX files
  • VMs are presented a shared virtual SAS disk
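
A minimal sketch of attaching a shared VHDX to a guest cluster node in WS2012 R2 (VM name and path are hypothetical); run the same command against the second node:

  # Shared VHDX must sit on a SCSI controller; names/paths hypothetical
  Add-VMHardDiskDrive -VMName "SQLNode1" -ControllerType SCSI `
      -Path "C:\ClusterStorage\Volume1\shared.vhdx" -SupportPersistentReservations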

iSCSI Target Server

  • VHDX support
  • Support up to 64TB LUNs
  • Dynamically grow LUNs
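
A hedged sketch of the iSCSI Target Server cmdlets covering these bullets (target, initiator, and path names are hypothetical):

  New-IscsiServerTarget -TargetName "SQLTarget" -InitiatorIds "IQN:iqn.1991-05.com.microsoft:sql01"
  New-IscsiVirtualDisk -Path "D:\iSCSI\sql-data.vhdx" -SizeBytes 2TB
  Add-IscsiVirtualDiskTargetMapping -TargetName "SQLTarget" -Path "D:\iSCSI\sql-data.vhdx"
  # Grow the LUN later
  Resize-IscsiVirtualDisk -Path "D:\iSCSI\sql-data.vhdx" -SizeBytes 4TB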

File Based Storage

  • SMB Direct support (uses RDMA)
  • 50% improvement for small IO workloads

Scale out File Server

  • SMB session management for back-end IO distribution

Live Migration

  • SMB as a transport for live migration
  • Delivers performance using RDMA – so no CPU hit on the host
  • Adds compression (75% faster)

SMB Bandwidth Management

  • Restrict bandwidth for different workloads (e.g. file copy, live migration, storage access)

Data Deduplication

  • Can dedupe open files – VDI is a good use case
  • Good for high reads, low write VHDXs
  • Added support for CSV
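
A quick sketch of enabling dedup for a VDI volume (drive letter hypothetical); the HyperV usage type is what permits deduplication of open VHDX files:

  # Assumes the Data Deduplication role service is installed; drive letter hypothetical
  Enable-DedupVolume -Volume "D:" -UsageType HyperV
  Start-DedupJob -Volume "D:" -Type Optimization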

Storage Spaces

  • Optimized data placement – Pool consists of both HDDs and SSDs with automated tiering
  • Write-back cache – Smooths out workload IOPS
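
A sketch of creating a tiered space with a write-back cache using the WS2012 R2 Storage Spaces cmdlets (pool, tier, and sizes hypothetical):

  # Pool and tier names are hypothetical
  $ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
  $hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD
  New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredSpace" `
      -StorageTiers $ssd,$hdd -StorageTierSizes 100GB,900GB `
      -ResiliencySettingName Mirror -WriteCacheSize 1GB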

TechEd 2014: Network tuning for specific workloads

Session: DCIM-B344, Network Tuning for Specific Workloads. This was a great session, with a ton of Q&A during and after the main presentation was over. I'd highly encourage you to watch the full video on Channel 9 when it is uploaded to get all of the goodies. The session notes below are a small fraction of the gold nuggets that were discussed in the session. Confused about VMQ, RSS, vRSS, SMB multi-channel performance, virtual switches, NIC teaming and when to use what feature? Be confused no more after watching the video.

Terminology:

  • A socket is a NUMA node, and within the node are cores. On each core you have logical processors (with hyper-threading), on which virtual processors for VMs are scheduled.

Scenarios

Problem 1: Enterprise physical web server and file server. Large volume of incoming packets, but one core is highly utilized.

Solution: Enable RSS on the server. RSS is for physical servers only. NIC spreads the network traffic by TCP/UDP flows across different cores to enhance performance and balance processor utilization.
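
For reference, checking and enabling RSS looks roughly like this (adapter name hypothetical):

  # Adapter name is hypothetical
  Get-NetAdapterRss -Name "NIC1"
  Enable-NetAdapterRss -Name "NIC1"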

Problem 2: A VM is deployed and the incoming packet processing is saturating a limited set of cores.

Solution: Virtual machine queue. VMQ spreads traffic per vNIC. RSS is disabled on the pNIC when a virtual switch is defined. A single core is bottlenecked at 4-5 Gbps of traffic, depending on processor speed. VMQ is enabled by default, so no manual configuration is needed. Number of queues depends on the physical NIC properties. New NICs have more queues (64+ not uncommon).
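
To see what your hardware offers, something like this on the host should work with the in-box cmdlet:

  # Queue counts vary by NIC model
  Get-NetAdapterVmq | Format-Table Name, Enabled, NumberOfReceiveQueues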

Problem 3: A VM has a large number of incoming packets, such as a web server. The workload is limited to using one vCPU. This is only for VMs with >3 Gbps of traffic. Less traffic can be serviced by a single core without any additional configuration.

Solution: vRSS can be used on WS2012 R2 VMs. This spreads traffic across multiple vCPUs. Flows are moved if a CPU has 90% or higher utilization. MS states they have seen line rate up to 40Gbps to a VM using vRSS with a 40 Gbps NIC. vRSS must be manually enabled inside of the VM.
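
Since vRSS must be switched on in the guest, a sketch of doing so from inside the VM (adapter name hypothetical):

  # Run inside the WS2012 R2 guest; adapter name hypothetical
  Get-NetAdapterRss
  Enable-NetAdapterRss -Name "Ethernet"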

Problem 4: A highly latency sensitive application, such as high speed financial trading.

Solution: Use SR-IOV. Bypasses the virtual switch, and directly connects the VM to the hardware NIC. Only for use with trusted VMs, since switch security is bypassed. Rarely used, but available for these very limited cases.

NIC Teaming

Windows Server 2012 R2 has a new dynamic NIC teaming mode. Continuously monitors traffic distribution. Actively adjusts traffic based on observed load. Download the Windows Server 2012 R2 NIC teaming guide here.
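
The recommended configuration translates to roughly this one-liner (team and NIC names hypothetical):

  # Team and member names are hypothetical
  New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
      -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic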

TechEd 2014: Effortless migration from VMware to Hyper-V

Breakout Session DCIM-B412: Effortless Migration from VMware to Windows Server 2012 R2 Hyper-V

If there was any session at TechEd that could initiate the self-destruction sequence for my VCDX certification, I think this session would be at the top of the list. That being said, it was a good session for folks looking to move VMs from VMware to Hyper-V. The session covered six tools, some free, some paid, that can make the conversion process fairly painless. Some require more downtime than others, or require scripting for mass migrations.

Quick look at Hyper-V 2012 R2

  • Consistent platform between Windows Azure, customer, and service providers

Microsoft Assessment and Planning toolkit (MAP)

  • Agentless inventory and assessment tool
  • Reporting, free, and now at version 9.0
  • Securely assesses IT environments on various platforms including physical and VMware
  • You can specify an inventory scenario, and it will directly connect to the VMware SDK (ESXi or vCenter) to do the inventory
  • Server consolidation report, VMware discovery report, Microsoft workload discovery (SQL, Exchange, etc.)

Six Migration Approaches

  • Microsoft Virtual Machine Converter, VMM, NetApp SHIFT, NetIQ PlateSpin, Vision Solutions Double-Take Move, Migration Automation Toolkit

Microsoft VMM 2012 R2

  • Supports vCenter 4.1, 5.0, 5.1, ESXi 4.1, 5.0, 5.1
  • VMM can connect directly to vCenter or ESXi hosts via a simple wizard
  • VMM enables the direct migration from VMware to Hyper-V via a migration wizard
  • VM must be turned off during the migration (cold migration)
  • The wizard migrates the disk controller, allows you to select the SMB share to store the VM on, VLAN/port assignment, availability settings, and can start VM after the migration is complete.
  • 50GB VM takes about 15 minutes to migrate
  • VMM is not the best tool for mass migrations, but there are other tools for that

Microsoft Virtual Machine Converter 2.0 (MVMC)

  • Fully supported by Microsoft support
  • Free download from Microsoft.com
  • Enables VMware to Hyper-V or Azure migrations
  • Fully scriptable via PowerShell (see the sketch after this list)
  • Supports a wide ranges of OSes
  • Inventories VMware and enables
  • Windows Server 2003 through 2012 R2, and vSphere 4.1, 5.0, 5.1, 5.5 support
  • Does not depend on VMM (fully standalone tool)
  • Runs on a management computer
  • Requires a cold migration
  • Simple GUI migration wizard
  • Automatically de-installs VMware Tools, and installs Hyper-V integration pack
  • For Azure migrations it will just upload the VHDX to a storage container but will not create the VM (need extra steps for that)
  • Tool uses certificate authentication with Azure
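
As flagged in the "fully scriptable" bullet above, a hedged sketch of a scripted disk conversion with the MVMC 2.0 PowerShell module (paths hypothetical):

  # Source and destination paths are hypothetical
  Import-Module "C:\Program Files\Microsoft Virtual Machine Converter\MvmcCmdlet.psd1"
  ConvertTo-MvmcVirtualHardDisk -SourceLiteralPath "D:\vmdk\web01.vmdk" `
      -DestinationLiteralPath "D:\vhdx" -VhdType DynamicHardDisk -VhdFormat Vhdx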

Migration Automation Toolkit (MAT)

  • Allows to script and scale MVMC conversions
  • Free download from TechNet Gallery
  • Still uses MVMC under the covers
  • Limited to three concurrent migrations per management computer (can use multiple computers)
  • Driven by PowerShell, uses SQL Express, extensible and customizable
  • Provides simple reporting and management in the solution
  • Fully supported by Microsoft

NetApp SHIFT

  • Based on Microsoft MAT, but uses Data ONTAP 8.2 to convert the VMDK to VHDX at lightning speed
  • Migrations take seconds per VM

Vision Solutions – Doubletake MOVE

  • Migrate physical to virtual, virtual to virtual
  • Not a free tool
  • Uses an agent in the source VM and agents on the target
  • Performs a full copy while the VM is running
  • Block level changes are replicated during the migration process
  • Preserves write-order consistency
  • Performs a live failover of the VM
  • Performs a test failover (minus the network adapter)
  • Supports migrating to Azure as well
  • Can enable compression or bandwidth limits if replicating over the WAN

NetIQ Platespin Migrate

  • Supports Windows and Linux workloads
  • Multi-OS support
  • Supports hardware migration (Vendor A to Vendor B)
  • Virtual capacity planning and analysis tools
  • Updates hypervisor tools and drivers automatically
  • Minimal downtime

Summary

There are a variety of free and paid tools to enable your migration from VMware to Hyper-V. Some are more automated than others, and required downtime also varies. The bottom line is that migrations can be fairly easy, and you can even migrate VMs to Azure if you wish.

TechEd 2014: Converged Networking for Hyper-V

Breakout session: DCIM-B378

This was a great session which covered a multitude of NIC features (VMQ, RSS, SR-IOV, etc.), when to use them (or not), which features are compatible with each other, and other Hyper-V networking topics.

The historical topology for Hyper-V was discrete NICs for different traffic types (management, storage, migration, cluster, VMs, etc.). This resulted in a lot of physical NICs, and it got out of control. Now, we assume two 10Gb NICs with virtual networks all over the same physical interfaces. You can also have a converged topology using RDMA for high-speed, low-latency requirements. Also, SR-IOV can be used for specific VMs for fast, low-latency guest networking.

Demands on the network: throughput, latency, inbound, outbound, north/south, east/west, availability

NIC Teaming (In the host OS)

  • Grouping of 1 or more NICs to aggregate traffic and enable failover
  • Why use it? Better bandwidth utilization
  • Doesn’t get along with SR-IOV or RDMA
  • Recommended mode: switch independent teaming with dynamic load distribution
  • Managed via PowerShell (netlbfo cmdlets)
  • *-netadapter
  • You can create a NIC team in VMM, and there’s a wizard to create the NIC team

NIC Teaming (In the guest OS)

  • Why use it? Better bandwidth utilization
  • Loss of NIC or NIC cable doesn’t cut off communications in the guest
  • Provides failure protection in a guest for SR-IOV NICs
  • set-vmnetworkadapter
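
Guest teaming also has to be permitted on the vmNIC from the host side; a minimal sketch (VM name hypothetical):

  # VM name is hypothetical
  Set-VMNetworkAdapter -VMName "VM1" -AllowTeaming On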

VMQ, RSS and vRSS

  • What is it? Different ways to spread traffic processing across multiple processors
  • RSS for host NICs (vNICs) and SR-IOV Virtual Functions (VFs)
  • VMQ for guest NICs (vmNICs)
  • Why use it? Multiple processors are better than one processor. vRSS provides near line rate to a VM on existing hardware.
  • RSS and VMQ work with all other NIC features but are mutually exclusive
  • Get-NetAdapterVmq to see how many hardware queues your hardware has
  • VMQ should always be left on (it is by default)
  • Get-NetAdapterRss from the guest

Large Send Offload

  • Allows a NIC to segment a packet for you and saves host CPU
  • LSO gets along with all Windows features and is enabled by default

Jumbo Frames

  • Way to send a large packet on the wire
  • Must be aware of end to end MTU
  • Reduces packet processing overhead
  • Gets along with all other Windows networking features
  • Use it for SMB, live migration, and iSCSI traffic; they will all benefit
  • Test with ping -l 9014 and see if it succeeds or fails (use the do-not-fragment flag, -f, as well)
  • Must set the size at both the Hyper-V host level and within the guest (see the sketch after this list)
  • Virtual switch will detect jumbo frames and doesn’t need manual configuration
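
A sketch of setting the jumbo packet size on an adapter, assuming a driver that exposes the standard *JumboPacket keyword (adapter name hypothetical); repeat inside the guest on the vmNIC:

  # Adapter name is hypothetical; keyword support varies by driver
  Set-NetAdapterAdvancedProperty -Name "NIC1" -RegistryKeyword "*JumboPacket" -RegistryValue 9014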

SR-IOV

  • Highly efficient and low latency networking
  • Can see 39 Gbps performance over a single 40 Gbps NIC
  • Doesn’t play with NIC teaming (host), but does work with guest NIC teaming
  • ACLs, VM-QoS will prevent SR-IOV from being used
  • Should only be used in trusted VMs
  • Can’t have more VMs than NIC VFs (virtual function)/vPorts
  • The NIC can only support a single VLAN and MAC address

Demands on the Network

  • Bandwidth management – Live migration can saturate a 10Gb/40Gb/80Gb NIC

Quality of Service

  • Hardware QoS and software QoS cannot be used at the same time on the same NIC
  • Software: to manage bandwidth allocation per VM or vNIC
  • Hardware: To ensure storage and data traffic play well together
  • QoS can’t be used with SR-IOV
  • Once a Hyper-V switch is configured for QoS you can’t change the mode (weight, absolute bandwidth). Weights are better than absolute.
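
A sketch of the weight-based software QoS described above (switch, NIC, and VM names hypothetical):

  # Mode must be chosen at switch creation time; names are hypothetical
  New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "Team1" -MinimumBandwidthMode Weight
  Set-VMSwitch -Name "ConvergedSwitch" -DefaultFlowMinimumBandwidthWeight 50
  Set-VMNetworkAdapter -VMName "SQL01" -MinimumBandwidthWeight 30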

Live Migration

  • Microsoft’s vMotion
  • Three transport options: TCP, compression, SMB
  • SMB enables multiple interfaces (SMB multi-channel) and reduced CPU with SMB direct
  • Gets along with all Windows networking features but can be a bandwidth hog
  • 4-8 is a good number for concurrent migrations (default is 2)
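
A sketch of the host-side knobs mentioned above, with values per the session guidance:

  # Use SMB as the live migration transport and raise concurrency
  Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
  Set-VMHost -MaximumVirtualMachineMigrations 4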

Storage Migration

  • Microsoft’s storage vMotion
  • Traffic flows through the Hyper-V host
  • 4 concurrent migrations is the default and recommended number

SMB Bandwidth Limits

  • Quality of service for SMB
  • Enables management of the three SMB traffic types: Live migration, provisioning, VM disk traffic
  • Works with SMB multi-channel, SMB direct, RDMA
  • Able to specify bandwidth independently for each of the three types via PowerShell (see the sketch below)
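
A hedged sketch, assuming the SMB bandwidth limit feature (FS-SMBBW) is installed; LiveMigration is one of the category names the cmdlet exposes:

  Add-WindowsFeature FS-SMBBW
  Set-SmbBandwidthLimit -Category LiveMigration -BytesPerSecond 2GB
  Get-SmbBandwidthLimit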

Summary

Converged networking falls apart if you don’t manage the bandwidth. Implement QoS! Don’t just throw bits on the wire and “hope” that everything will be fine, as it probably won’t be when you start having network contention.

TechEd 2014: Deploying the Azure Pack

Session: DCIM-B317

The Cloud OS is transforming IT to address new questions: mobility, apps, big data, cloud. Provide a cloud platform regardless of the datacenter it is deployed on (Azure, partner, on-prem). Cloud OS enables modern business apps, empowers people-centric IT, unlocks insights on any data, and transforms the datacenter. The cloud platform includes Outlook.com, Xbox Live, Bing, Office 365, MSN, and Dynamics CRM Online. It includes high-performance storage, multi-tenancy with isolation, software-defined networking, policy-based automation, and application elasticity.

Enterprises want: flexible cloud, no vendor lock-in, multi-tenant clouds, chargeback, simple, automated, tenant choice, dynamic control, integration with LOB systems, effective utilization of existing hardware assets.

Service providers want: Win more enterprise business, usage billing, extreme automation, opportunities to upsell, customized offerings, portal integration and branding.

Common requirements: Enterprise friendly, multi-tenant IaaS, usage billing, automation, maximize hardware utilization, tenant choice, offer management, portal integration.

Windows Azure Pack

In your datacenter MS is offering a tenant portal & API that layers on top of your existing infrastructure. It also adds an admin portal & API featuring automation, tenant management, hosting plans, and billing. This all sits on top of System Center + Windows Server. It delivers a customer-ready self-service experience for a private cloud environment.

Presenter shows a diagram that has many components including: firewall, web app proxy, WAP tenant, RD gateway, WAP admin, ADFS, VMM, SQL, DC, hyper-V hosts, and tenant workloads.

Windows Azure Pack comprises 13 components/installers. This includes the admin site, tenant site, admin auth site, tenant auth site, admin API, tenant public API, tenant API, PowerShell API, BPA, and Portal & API express.

Authentication options include: out of the box, ADFS, web application proxy, Azure AD, multi-factor authentication.

At this point the presenter did several configuration demos. Those are best seen via the video, and would be hard to describe in a coherent manner otherwise.

Service Provider Foundation

  • Requires four groups in the management AD instance
  • Two service accounts, one in AD and one local on the SPF server
  • Must have admin rights in VMM and in SQL server

Service Management Automation

  • Key: Start with good use cases and layer on the complexity
  • Remember that SPF must trust the SMA certificate

Summary

For those customers wishing to deploy the Windows Azure pack, this was a good session.  If you want to deploy the Azure pack, then download the session video and get some good configuration pointers. Do keep in mind the configuration is not for the faint of heart. I hope in the next version of the pack/Windows (2015?) that it will be greatly simplified.

TechEd: Comparing Microsoft and VMware Private Clouds (MDC-B352)

This was Part 2 of a two-part series comparing VMware and Microsoft virtualization/cloud offerings. Part 1 was focused on the hypervisor and how Hyper-V and ESXi compare. I had a schedule conflict with Part 1, so I didn't attend it. This is Part 2, focusing on the private cloud offerings. I thought Microsoft did a decent job in the 75 minutes provided. VMware has a leg up in some areas, while in other areas Microsoft has a leg up or a longer track record (such as with Operations Manager and Configuration Manager).

A lot of differences in both products were not discussed, and would take a lot more time than 75 minutes. But it’s clear with Windows Server 2012 R2 and System Center 2012 R2 that they are making rapid and big strides in the private cloud and virtualization arena. Now that VMware and Microsoft appear to be on a yearly release cadence, I see the “Cloud OS” battle really heating up. MS has a lot of ground to make up, and they clearly knew it.

Private Cloud Technologies

The speaker acknowledges this is not a perfect comparison, as some products from each vendor package up features differently. For example, vCloud Director does a lot more than just self-service, but MS VMM has vCloud Director-like functionality not found in vCenter. So you can't exactly line up products and say they are the same. Instead, combine the entire stack from each vendor to really see how they shape up, rather than doing per-product comparisons.

  • Hypervisor: Microsoft – Hyper-V; VMware – vSphere Hypervisor
  • VM Management – Microsoft – VMM; VMware – vCenter Server
  • Self-Service – Microsoft – App Controller; VMware – vCloud Director.
  • Monitoring – Microsoft – Operations Manager; VMware – vCenter Operations Management Suite
  • Protection – Microsoft – Data Protection Manager; VMware – vSphere Data Protection
  • Service Management – Microsoft – Service Manager ; VMware – vCloud Automation Center
  • Automation – Microsoft – Orchestrator; VMware – vCenter Orchestrator

Private Cloud Software Licensing

Both vendors license their suites on a per-socket basis. You can buy some VMware products a la carte, and some lesser-known products aren't included in the vCloud Suite. So depending on what features you need, you may need a different set of products.

  • Microsoft – System Center 2012 SP1 (per socket) & Hyper-V
  • VMware – vCloud Suite & vCenter

Key Focus Area for this Session

  • Granular App & Service Deployment
  • Deeper insight and remediation
  • Protection for key apps and workloads
  • Hybrid Infrastructure
  • Costs

Granular App & Service Deployment

  • On VMware you use templates to deploy standardized VMs. Templates are simple, but static.
  • In VMM you also have a dedicated library for VM templates (like VMware) and service templates
  • In VMM you can have lots of templates all pointing to the same VHDX image (templates can have different features/etc.). Or small, medium, large, etc. templates all pointing to the same OS image.
  • In VMM you can add roles/features to the guest VM template and capture them in the template
  • You can have a separate guest OS profile, and can marry it up with a hardware profile and a VHDX image without using any extra disk space
  • In VMM you can add applications, such as SQL, and easily create a template
  • VMM can directly configure App-V server packages and inject them into the VM template
  • VMM 2012 has a concept of service templates. Service template allows you to build and model multi-tier services. Ability to configure scale out rules, for example. Drag and drop VM templates onto a canvas and you can customize the VM properties.
  • Anything you can do in VMM you can do in PowerShell (see the sketch after this list)
  • VMM is more about delivering services to the business unit, not just deploying individual VMs
  • “Create Cloud” button in VMM. Defines resources, networks, load balancers, VIP templates, Port classifications (NIC), Storage, library, define capacity quotas (vCPUs, memory, storage, VMs, etc.). Ability to select hypervisor (Hyper-V, VMware, XenServer).
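
Per the PowerShell bullet above, a heavily hedged sketch of a template-based deployment with the VMM cmdlets (template, VM, and cloud names are all hypothetical):

  # All names below are hypothetical
  Import-Module virtualmachinemanager
  $tpl = Get-SCVMTemplate -Name "Win2012R2-Medium"
  $cfg = New-SCVMConfiguration -VMTemplate $tpl -Name "web01"
  New-SCVirtualMachine -Name "web01" -VMConfiguration $cfg -Cloud (Get-SCCloud -Name "DevCloud")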

Service Manager

  • IT self-service management portal, built on SharePoint (also a full helpdesk ticketing system)
  • ITaaS offering
  • Plugs into VMM, Orchestrator
  • BI is built into service manager for deep reporting
  • Download “Cloud Service Process Pack” which pre-configures VMM, Service Manager and Orchestrator for a self-service VM portal

Orchestrator

  • Custom automation with minimal scripting needed
  • MS Orchestrator has a lot of plug-ins for third party products and hardware (integration packs)

Operations Manager

  • Extensible with MS and third-party management packs. Veeam MP can do deep monitoring of VMware environments.
  • Veeam MP is not free, so if you want to monitor VMware with SCOM you will have to license the excellent MP
  • OpsMgr can also monitor network infrastructure (switch CPU usage, memory, port-level stats, etc.)
  • Maintains the relationship between VMs and physical hardware such as switch ports, etc.
  • Server-side, client-side and synthetic transactions for application monitoring
  • Global Service Monitor (GSM) – MS Azure-based global service that will test your private cloud app

Visual Studio Integration

  • VMM Library is accessible from Visual Studio
  • Team Foundation Server can use the “Test & Lab Manager” which will spin up VMs for automated dev testing via VMM

System Center Advisor

  • Provides configuration guidance around specific workloads (SQL, etc.) for troubleshooting. Free from MS.

Data Protection Manager

  • Supports Windows server, SQL server, SharePoint, Exchange, Dynamics
  • Differential backups as often as every 15 minutes
  • DPM can backup to Azure and tape
  • Changed block tracking for VM backups
  • Cluster aware – integrates with CSV
  • Item-level restore
  • DPM has no inline dedupe, but VMware data protection does

Heterogeneous Environments

  • VMM can connect to and provide basic management of vCenter
  • Can use VMM service templates on VMware hosts
  • Many integration and management packs for third party software and hardware (HP, NetApp, Cisco, etc.)

Hybrid Infrastructure

  • Private cloud (VMM can manage XenServer, vSphere, Hyper-V)
  • System Center can link to Service Provider and Azure
  • Single Sign on with AD (Azure)
  • Integrated with DEV (Team Foundation)

Cost Scenario

Cost scenarios can be extremely tricky and misleading. Plus large enterprises will likely get big discounts from both VMware and Microsoft. So take the numbers below with a grain of salt. Not in the cost calculation is the cost of the guest operating systems, since it was assumed both used the same OSes so the cost was a wash. The costs were only for the hypervisor and cloud stack.

The speaker didn’t mention the Microsoft ECI license (enrollment for core infrastructure). This combines the operating system and system center stack licenses into a single SKU, licensed by the socket. The datacenter edition of ECI allows unlimited VM deployment and management using all cloud features. Even if you are a 100% VMware shop for the hypervisor,  you may still have the ECI license if you use system center components (such as SCCM or SCOM). So you may already be fully licensed from the MS perspective and incur no additional software costs for the MS cloud stack.

  • Example: 500 VM Private cloud; 15:1 VM to host ratio; 34 hosts, 2 sockets with 16 cores; Windows Server licensing additional; comprehensive management; 68 licenses of Windows server datacenter
  • 68 CPUs Hyper-V: $0; 68 CPUs of System Center $122K
  • 68 CPUs vCloud Enterprise Suite $781K, vCenter $5K