This session is the first installment of a three-part series on how to configure your private cloud with VMM 2012. This installment covers networking and storage, two fundamental pieces of your cloud fabric. I was impressed with the depth of the networking and storage integration. More amazing, it’s hypervisor agnostic for almost all features, so every vendor, including arch-rival VMware, is treated fairly.
Tidbits from the session include:
- Ability to define logical networks using VLANs and subnets per datacenter location. For example, you can configure separate pools for Boston, LA, DC, and London. When you deploy a VM to a location, it automatically uses the right pool and presents only the logical pools you can use. You can’t accidentally assign a VM in Boston a London IP.
- Address management for static IPs, load balancer VIPs, and MAC addresses (both a VMware MAC address range and a general MAC address range). VMM uses a check-out/check-in mechanism for static IPs, so no more spreadsheets to keep track of your IPs. Select the proper pool, and it uses the next unused address. Delete the VM? That IP goes back into the pool. 100% automated static IP assignment. Sweet! Same goes for MAC addresses and hardware load balancer (HLB) VIPs.
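The check-out/check-in mechanism can be pictured as a simple pool allocator. This is a conceptual sketch only, not VMM’s actual API or object model; the class and method names are mine:

```python
# Conceptual sketch of VMM-style static IP management: a per-site pool
# with check-out/check-in semantics. Names are illustrative, not VMM's API.
import ipaddress


class StaticIPPool:
    def __init__(self, cidr, site):
        self.site = site
        # All usable host addresses in the subnet, in order.
        self._free = [str(ip) for ip in ipaddress.ip_network(cidr).hosts()]
        self._in_use = {}  # VM name -> checked-out IP

    def check_out(self, vm_name):
        """Hand the next unused address to a VM."""
        ip = self._free.pop(0)
        self._in_use[vm_name] = ip
        return ip

    def check_in(self, vm_name):
        """Deleting the VM returns its IP to the pool."""
        self._free.append(self._in_use.pop(vm_name))


pool = StaticIPPool("10.1.1.0/29", site="Boston")
ip = pool.check_out("web01")  # next unused address: "10.1.1.1"
pool.check_in("web01")        # IP goes back into the pool
```

The same pattern applies to MAC address ranges and HLB VIP pools: one authoritative pool per resource type, with automatic reclamation on VM deletion.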
- Automated provisioning of F5, Citrix NetScaler and Brocade (at RTM) hardware load balancers. F5 and Citrix both now have virtual LB appliances, BTW. The lack of Cisco support is a bit surprising.
- You can define HLB VIP templates that specify properties such as protocol, load-balancing method, persistence, and health monitors. You assign an HLB to a site, so when you deploy an application it automatically uses the proper physical HLB and checks out the proper VIPs.
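To make the template-plus-site idea concrete, here is a hypothetical sketch. The field names, device names, and `deploy` function are all made up for illustration and do not reflect VMM’s actual object model:

```python
# Hypothetical sketch: a VIP template plus per-site HLB assignment.
# All names here are invented, not VMM's actual schema.
from dataclasses import dataclass


@dataclass(frozen=True)
class VIPTemplate:
    protocol: str        # e.g. "HTTPS"
    lb_method: str       # e.g. "LeastConnections"
    persistence: str     # e.g. "SourceIP"
    health_monitor: str  # e.g. "HTTP GET /"


# One physical HLB assigned per site (illustrative device names).
site_hlb = {"Boston": "f5-bos-01", "London": "netscaler-lon-01"}

web_tier = VIPTemplate("HTTPS", "LeastConnections", "SourceIP", "HTTP GET /")


def deploy(app, site, template):
    """Deploying to a site picks that site's HLB and applies the template."""
    return {"app": app, "hlb": site_hlb[site], "vip_template": template}


svc = deploy("web-app", "Boston", web_tier)  # picks f5-bos-01 automatically
```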
- The storage component discovers storage arrays and storage pools, lets you classify storage based on capabilities you dictate (throughput, availability, etc.), and discovers and configures LUNs and assigns them to Hyper-V hosts and clusters. You could have platinum, gold, silver, and bronze storage tiers (or whatever names you want).
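Classification boils down to mapping discovered pool capabilities onto tier names you define. A minimal sketch, where the tier names and thresholds are invented for illustration (VMM lets you pick your own):

```python
# Sketch only: thresholds and tier names below are made up to show the
# idea of capability-based classification, not VMM's actual logic.
def classify(pool):
    """Map a discovered storage pool's capabilities to a tier name."""
    if pool["iops"] >= 50_000 and pool["redundancy"] == "RAID10":
        return "Platinum"
    if pool["iops"] >= 20_000:
        return "Gold"
    if pool["iops"] >= 5_000:
        return "Silver"
    return "Bronze"


# Classify a couple of discovered pools (illustrative capability data).
tiers = {name: classify(caps) for name, caps in {
    "array-a-pool1": {"iops": 60_000, "redundancy": "RAID10"},
    "array-b-pool3": {"iops": 8_000, "redundancy": "RAID5"},
}.items()}
```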
- Storage capabilities include end-to-end storage device mapping, allocation and assignment of storage, provisioning a VM using SAN array hardware copy capabilities, and storage migration of a VM (e.g., the equivalent of Storage vMotion).
- End-to-end mapping is truly end-to-end: service instance, to VMs, to logical disks in the guest, to guest volumes, to the host logical disk, to the LUN, to the array disk pool, to the disk array, to the array provider. This information is fed to SCOM for event/performance correlation (very sweet).
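The mapping chain described above can be sketched as a simple ordered structure. Every object name here is hypothetical; only the layer names come from the session:

```python
# Illustrative only: the end-to-end mapping chain as (layer, object) pairs.
# Object names are invented; the layers are the ones listed above.
mapping_chain = [
    ("service instance", "WebService01"),
    ("VM", "web01"),
    ("guest logical disk", "Disk 0"),
    ("guest volume", "C:"),
    ("host logical disk", r"\\.\PHYSICALDRIVE2"),
    ("LUN", "LUN 42"),
    ("array disk pool", "Pool-Gold"),
    ("disk array", "Array-A"),
    ("array provider", "SMI-S provider @ 10.0.0.5"),
]


def trace(chain):
    """Walk the chain top-down, the way SCOM could for correlation."""
    return " -> ".join(layer for layer, _ in chain)
```

Having the whole chain in one place is what makes event/performance correlation possible: a fault at any layer can be walked up to the affected service instance, or down to the affected array.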
- Uses a standards-based approach for discovery: SMI-S v1.4. Many vendors are working on providers if they don’t already have them.
- Supported storage types include Fibre Channel, iSCSI, and local storage. (Not sure about FCoE but I think it is supported.)
- Supports configuring iSCSI masking/unmasking, and initiator logon/logoff parameters.
- Supports Fibre Channel masking/unmasking, and NPIV vPort deletion/creation.
- This is NOT a storage management tool, so you won’t use VMM to create an entirely new LUN on your storage array. You will continue to use your array’s tools. Likewise for the network, this will not create VLANs in your network, but will consume them. (Although you could use Orchestrator 2012 to automate the creation in the array/switch, then have VMM discover it.)
The speaker went through many demos, showing how VMM 2012 discovers networking and storage assets and brings them into the fabric. For VMware it supports both standard vSwitches and virtual distributed switches (I didn’t get clarification on the Nexus 1000v, but I’d hope that’s covered by the vDS support). I was very impressed by the capabilities and the attention to detail. As the speaker mentioned, MS really should have renamed VMM to something else, like Cloud Manager. It’s easy to think VMM is just for managing VMs, and with 2012 that is so not the case.