VIR401: RDP, RemoteFX, ICA/HDX, EOP and PCoIP: VDI Remoting Turned Inside Out

This session was presented by two non-Microsoft speakers: Benny Tritsch from AppSense and Shawn Bass from Syn-Net. After seeing their extensive testing methodology and the presentation of the results, I feel confident that the results are accurate and about as neutral as one could get. The whole session presented the results of a battery of tests against the RDP, RemoteFX, ICA/HDX, PCoIP, EOP, Blade and RGS remoting protocols. The battery covered 2D graphics, video and animation, 3D graphics, LAN performance, and two WAN performance scenarios using a WAN simulator. They clearly went to great lengths to test the protocols fairly and compare the results.

One of their comments was that these remoting protocols have undergone significant changes in the last three years, and they would not have predicted the rapid advances and very good WAN performance. So for those organizations looking to implement VDI across the WAN, this is very good news. While all is not perfect for VDI WAN usage, the industry is pouring a lot of money and resources into advancing the technology.

Given that nearly all of the session consisted of video clips of the tests, it’s really hard to summarize the results in a meaningful way. Thankfully, Microsoft has links to the audio/video recordings of the session so you can see the results for yourself. You can check out my TechEd 2011 trip report here, which has links to all of the sessions I put on my schedule. Just search that page for VIR401.

In case you don’t want to listen to the entire 75-minute session, here’s a tiny snippet of the results:

  1. When using the Microsoft RDP client (mstsc), it is VERY important to select the appropriate ‘experience’ if you are using it over a WAN. Selecting the Satellite setting can greatly improve performance over WANs by using latency ‘hints.’ (See the sketch after this list.)
  2. VMware View does not support any type of multi-media redirection with Windows 7. If you want MMR, look at Citrix HDX (or EOP).
  3. None of the protocols can redirect Silverlight or WPF content.
  4. Most of the protocols are very comparable on the LAN. Bandwidth utilization does vary somewhat by protocol. For example, watching a Silverlight video over the LAN you can see bandwidth spikes to 45Mbps or more with RemoteFX.
  5. At 50ms WAN latency, you start getting a degraded user experience. Sometimes even at 20ms the experience can suffer.
  6. Packet loss is a huge factor in perceived WAN performance. Their tests used a 0.01% packet loss rate, which is typical of an MPLS circuit but lower than a regular internet connection.
  7. Newer protocols like RemoteFX require additional hardware, be it GPUs or ASICs. Hardware assist generally does provide better results, so expect to see more hardware dependencies in the future.
  8. In the WAN scenarios there’s not a huge user experience difference between Citrix HDX and VMware View. In the charts shown, VMware View did use more bandwidth than HDX.
  9. Not all protocols support all versions of DirectX and OpenGL. So you really need to look at the applications you are using and what graphics subsystem they require. OpenGL 1.1 is pretty broadly supported. However, RemoteFX supports only OpenGL 1.1 (100% of its features); at 1.2 and higher it’s basically unsupported. On the other hand, Citrix HDX 3D supports all features through OpenGL 3.3 and much of OpenGL 4.1.
  10. For an optimal user experience, 2Mbps of bandwidth is best.
  11. If you are just using Office type applications for text manipulation, over the WAN most of the protocols do a really good job. Multi-media and graphics manipulation is where the differences really start to show up.
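To put point 1 into practice without clicking through the mstsc GUI every time, you can bake the experience setting into an .rdp file. Below is a minimal sketch; the ‘connection type’ values in the comments are my recollection of the RDP 7.x mapping (the speakers didn’t cover the file format), and the server name is a placeholder, so verify against your RDP client’s documentation.

```powershell
# Sketch: save an .rdp file that pre-selects a high-latency 'experience' profile,
# then launch the Microsoft RDP client against it. My recollection is that
# 'connection type' 1=Modem ... 3=Satellite (high latency) ... 6=LAN; verify this
# mapping before relying on it.
$rdpFile = "$env:TEMP\wan-desktop.rdp"

@"
full address:s:vdi-broker.example.com
connection type:i:3
disable wallpaper:i:1
disable full window drag:i:1
disable menu anims:i:1
"@ | Set-Content -Path $rdpFile -Encoding Ascii

Start-Process mstsc.exe -ArgumentList "`"$rdpFile`""
```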

I’d really encourage you to watch the videos at my link above, so you can see the visual differences between the protocols. No one protocol stood out as horrible in every test, and no one protocol wiped the floor with the others, either. You need to look at the applications you want to use, performance requirements, hypervisor requirements, and other factors to determine which VDI protocol to select.

In reality your primary choices are Microsoft RemoteFX, Citrix HDX, and VMware PCoIP. VMware View is limited to the ESX/i hypervisor, and RemoteFX is limited to Hyper-V. So that leaves HDX as the primary hypervisor-independent protocol.

SIM210: Sneak Peek at Service Manager 2012

One of the objectives for SM 2012 is to deliver IT as a Service (ITaaS). ITaaS aims to reduce costs, increase service levels, speed time to delivery, provide more data and transparency, and increase compliance. Implementation components include automation, standardization, self-service, and compliance. The deployment of these components is guided by process frameworks such as MOF, ITIL and COBIT.

Highlights of this session include:

  1. Integration with Operations Manager 2012, Configuration Manager 2012, Virtual Machine Manager 2012 (new to 2012), and Orchestrator 2012 (new to 2012).
  2. Service Manager enables self-service through a portal, reports and dashboards, Excel, and email.
  3. Business processes can be defined in templates, and the Configuration Management DB (CMDB) provides a common model and reconciliation of data.
  4. A new compliance library maps legalese to actionable IT control activities. Compliance is continuously and automatically evaluated in real time through SCCM integration.
  5. Improvements from SM 2010 include tracking of incident SLAs, parent/child work items, AD connector improvements, PowerShell integration, parallel activities, and performance improvements.
  6. A new Service Catalog and self-service portal deeply integrate with Orchestrator and VMM to enable ITaaS.
  7. SM imports cloud objects, VMM templates, and runbooks. An admin then creates templates to capture business processes and the role of runbooks within the processes. The admin creates default values to standardize offerings. Roles are then mapped to offerings to limit access. The result is a self-service portal.
  8. Request processes drive automation. By using a request template, runbook activity, service requests, and connectors, business processes become automated.
  9. The SM portal is completely new and based on SharePoint Foundation 2010 and a Silverlight interface. You can customize web parts using SharePoint admin tools, and it’s very extensible. It features a service catalog scoped to users, and customizable dynamic forms.
  10. The System Center Data Warehouse replaces the not-often-used System Center Reporting Manager. This enables self-service report and dashboard authoring with OLAP cubes, plus report authoring with Office integration for knowledge workers. Although you may still need some SQL reporting expertise for super-custom reports, this new OLAP model combined with Excel really enables powerful data slicing and dicing with limited skills.
  11. There’s a new Exchange connector for enhanced email integration.
  12. You can more easily report on KPIs (Key Performance Indicators).

I was pretty impressed with the number of enhancements in SM 2012. Given that the shipping product is just over a year old, MS clearly invested a lot of development resources into SM. It feels like a new rev of the product, not just an R2 version with minor tweaks. One of the demos that really blew me away was the ability to slice and dice the OLAP cubes in Excel to create custom reports, then upload those reports to SharePoint to create a live dashboard. With just a few clicks of the mouse, Excel was able to instantly drill down into complex data sets, visualize the data, and present meaningful information to the end user.

Even for organizations that have an existing ticketing system, like Remedy, Service Manager should be seriously considered for your environment. The integration with the entire System Center suite, SharePoint, the self-service portal, and reporting is amazing. If you are serious about automating your processes and enhancing compliance, SM 2012 should be at the top of your list for consideration.

VIR313: RemoteFX GPU Virtualization Deep Dive

This session went into great gory depth on how RemoteFX works, and the hardware requirements. What is RemoteFX? RemoteFX is a new technology in Windows Server 2008 R2 SP1 and Windows 7 SP1 that when combined with a VDI environment allows efficient host-side rendering of graphics and multimedia. It requires the use of Hyper-V, and thus does not work on other hypervisors.

It also enables USB device redirection, something not available with previous versions of RDP. Host side rendering has several advantages, including rendering of any content including WPF, Flash, SilverLight, Quicktime, WMP, 3D applications, etc. RemoteFX bumps up the RDP version to 7.1.

Why is RemoteFX important? Well, with RemoteFX you invest in server hardware with GPUs, which allows you to buy very cheap, disposable thin or zero clients that don’t require much computing or GPU power. This is in contrast to some technologies like Citrix HDX that purposefully offload some CPU/GPU processing to the client devices. There’s no one right or wrong way to handle rich graphics, so both architectures have their place.

Enough background, so here are some of the gory technical details covered:

  1. Virtualized GPU. A single GPU can be utilized by multiple Hyper-V guest operating systems. It uses intelligent screen capture and hardware-based encoders. It can utilize hardware-based decode, but this is optional.
  2. The CODEC is designed for text and image-based content. A single CODEC works for VDI, RDS, and WMS (Windows MultiPoint Server) sessions.
  3. USB redirection supports nearly all USB devices, and no client side drivers are required. Admins control what devices are or are not allowed to be connected. It’s integrated with PnP/Windows update so applications do not know the device is redirected.
  4. The virtualized GPU supports DirectX 9 and GDI. There is no support for DX10 or higher in this release. Most applications can use DX9, so this is not a big limitation.
  5. The physical video card must support DX10, since some new APIs are used for more efficient encoding.
  6. Although the GPU is used for much of the encoding work, the server CPU still has some processing load. Hardware manufacturers are working on a RemoteFX ASIC which offloads this processing. The ASIC is totally optional, and some should ship in mid-2011.
  7. Previous incarnations of RDS used a kernel-mode architecture. With RemoteFX and RDS, there are now user-mode and kernel-mode components.
  8. For VDI, the hardware requirements include a CPU with SLAT (second-level address translation), GPU installed in the server, and a Windows 7 SP1 VM. With this the benefits include GPU virtualization, USB redirection, full Aero glass support, RemoteFX compression using CPU and GPU, and you can offload client hardware decompression to an ASIC.
  9. RemoteFX is NOT another protocol; it is built into RDP 7.1 and does not require additional ports to be opened up, or protocols to be allowed. It’s simply an RDP extension. As such it supports all RDP features such as security (SSL, Kerberos, etc.) and virtual channel multiplexing.
  10. RemoteFX enables a new class of ultra-lightweight, low-power devices. Operating systems could include Windows CE, Linux and custom OSes like Wyse ThinOS. The devices can draw less than 5W of power and could have a dedicated ASIC for CODEC acceleration.
  11. The limitation of how many RemoteFX enabled virtual desktops per server comes down to video card memory and screen resolution. For example, a 2GB ATI V7800 card could support up to 11 VMs running a single 1600×1200 screen. All of the VDI VMs on a server do not need to be RemoteFX enabled, so you can assign power users to RemoteFX VMs, and max out the server with non-RemoteFX VMs to increase VDI density.

Now one of the ‘tricks’ to implementing RemoteFX is putting a very beefy video card into your VDI servers. If you are using blade servers, this really restricts your options. For example, with HP you can only use their “workstation” blade PC with a side-car PCIe expansion unit, which cuts the server density in a 10U chassis from 16 servers down to 8. Given the high heat load and the number of PCIe lanes required, blade servers are at a distinct disadvantage for the time being. One solution would be a traditional 1U rack-mount server, a heavy-duty graphics card with 4GB of RAM, and a single 10Gb CNA to reduce your cabling requirements.

Since the primary scalability factor for RemoteFX is the amount of video RAM on the graphics card, I suspect AMD and nVidia will come out with RemoteFX-oriented cards that increase the memory to 8GB or more. For example, an 8GB video card should be able to support over 35 RemoteFX sessions using 1920×1200 screens (typical 24″ monitors).
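Here’s the back-of-envelope math behind that 35-session guess, scaled purely from the 2GB/11-VM data point mentioned in the session and assuming video memory consumption grows roughly linearly with pixel count (my assumption, not an official Microsoft sizing formula):

```powershell
# Rough RemoteFX density estimate, scaled from the session's data point:
# a 2GB card drives ~11 VMs with a single 1600x1200 screen each.
$vramPerSessionMB      = (2 * 1024) / 11                    # ~186 MB per 1600x1200 session
$pixelRatio            = (1920 * 1200) / (1600 * 1200)      # 1.2x more pixels per screen
$estimatedPerSessionMB = $vramPerSessionMB * $pixelRatio    # ~223 MB per 1920x1200 session

[math]::Floor((8 * 1024) / $estimatedPerSessionMB)          # ~36 sessions on an 8GB card
```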

In most cases not all users need RemoteFX. Task workers such as those using Office products or basic web surfing really won’t benefit that much. But for users that require 3D applications, video playback, or USB redirection, RemoteFX is something to consider. Citrix has stated they are working on incorporating RemoteFX into XenDesktop, and that integration should be completed later this year.

Finally, not all graphics cards are certified for RemoteFX. Microsoft said the initial drivers AMD and nVidia provided were pretty buggy. So be sure to check the Microsoft HCL for graphics cards and ensure yours has been certified for RemoteFX. Also, it was stressed that RemoteFX was designed mostly for LAN-based environments. If you have remote offices/branch offices, then RemoteFX may not be a good fit. Future versions may address bandwidth/latency issues, which are the primary reasons why this first version is intended for LAN usage.

VIR315: Modeling and Maintaining Virtualized Services in VMM 2012

This session focused on the brand new service templates module in VMM 2012. Service templates are a method to rapidly and consistently deploy and maintain applications regardless of which hypervisor (Hyper-V, ESX or XenServer) you use. So this entire session applies even if you are a VMware shop or run a mixed environment.

Session highlights include:

  1. A service template is the starting point for services and the source of truth. It specifies machine and connectivity requirements. Deployed services are always linked to their templates. This enables servicing of instances, not just individual VMs.
  2. An instance is a group of VMs working together; it includes machine-specific definitions as well as applications. There is native application support for web applications (WebDeploy), virtual applications (Server App-V packages), and database applications (SQL DAC). Future versions of VMM will likely support more types of applications (e.g. Exchange).
  3. Why use service templates? You can manage multi-tier applications across multiple servers as a single unit. This allows you to scale out on demand. It also allows you to manage fewer OS images, since you can customize the OS deployment at provisioning time.
  4. The lifecycle includes creating a template, customizing the deployment, deploying a service, then updating the template and applying it to the service.
  5. The service templates are authored in a new Service Designer. This defines the tiers, VM hardware, logical networks, OS, applications, load balancer configuration, etc.
  6. The Service Designer is a really slick application with a ribbon interface that lets you graphically construct your application, link to networks, and define instance counts, deployment order, upgrade domains, and servicing order.
  7. A customized deployment of a template lets you define OS settings (computer name, admin password, etc.), application settings (SQL connection string, service account names, passwords, etc.), and lets you use the same template in various environments (dev, staging, production, etc.).
  8. The deployment has several integration points where you can inject scripts, which run in the guest OS, to further customize the deployment.
  9. There are two update types. The first is a regular update where the template changes are applied without replacing the OS image. Changes can include increasing memory, application updates, etc. The second method is an image based update. This replaces an old OS image with a new OS image. This re-installs the application, while preserving state. For example, you could migrate your webapp from Server 2008 to Server 2008 R2, with little to no downtime.
  10. Regular updating and image updating both have extensive integration points where you can inject custom scripts to change the update process.
  11. Service templates can be imported and exported as an XML file (sketched just below). This lets you share templates between different environments, back up templates, or synchronize them in a multi-VMM environment. Like a GPO import, you can map resources to the template during the import process. For example, you can map network names or storage tier levels, even if they don’t have the same name in the different environments. Very slick!
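Here is a minimal sketch of what that export/import round trip might look like from the VMM 2012 PowerShell module. The session only demonstrated the GUI, and I’m writing the cmdlet and parameter names below from memory, so treat them as assumptions and confirm them with Get-Command *-SCTemplate* in your own VMM shell before using this.

```powershell
# Sketch only: cmdlet and parameter names are my assumptions, not from the session.
# Export a service template to an XML-based package on the source VMM server...
$template = Get-SCServiceTemplate -Name "ThreeTierWebApp"
Export-SCTemplate -ServiceTemplate $template -Path "C:\TemplateExports"

# ...then import it on the target VMM server. Like a GPO import, resources such as
# logical networks or storage classifications can be re-mapped during the import.
Import-SCTemplate -TemplatePackagePath "C:\TemplateExports\ThreeTierWebApp.xml"
```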

What was really slick about this session was the demos. While all of the information above is great, you could easily gloss over the potential impact of VMM. One compelling demo was updating a web app.

In the demo the original template was defined with v1.0 of the app, and used a scale-out approach with a hardware load balancer. This created multiple instances of the web app in an HA (high-availability) configuration (let’s say four web servers). Now let’s say you have v2.0 of the application that you want to roll out while maintaining HA. You open the service template and update the application to v2.0. Nothing has happened in production, since you’ve only updated the template. You now click a button to deploy v2.0 of the template. The template has defined service domains; in this case, the update takes down two of the four web servers, pulls them out of the HLB, updates the web app, then adds them back into the HLB. After the first two are done, it takes down the next two, using the same process. 100% automated and no downtime!

Now let’s say v2.0 of the web app has some bugs, and you need to revert back to v1.0 until you can fix the issues. No problem! Click on the service template, and you see the full version history. With a single click you revert to v1.0, and it repeats the same deployment process of removing the VMs from the HLB, down-revving the app, and adding them back to the HLB. Completely automated roll-back, without service interruption.

And remember…service templates are hypervisor agnostic, so VMware shops get all of these service template features. Since this is an automated process, it minimizes human error, limits configuration drift, and orchestrates the updates in an HA manner. What will be really exciting is when more products are supported, such as Exchange, so you can roll out service updates in a similarly automated fashion. Very slick indeed.

Service templates support Windows Server 2003 R2 SP2 and later VMs, and SQL 2008 R2 (not SQL 2008, since it isn’t sysprep-aware). As Microsoft gets onboard with Server App-V, those applications will be fully supported as well.

SIM209: System Center Service Manager: Automating ITIL and MOF

This session covered how System Center Service Manager 2010 really integrates many of the System Center components to allow an organization to follow industry best practices, such as ITIL and MOF. Without an integrated solution, it’s really hard to automate these processes to reduce operational expenses and meet defined SLAs or customer expectations. Later in the week there was a Service Manager 2012 sneak peek session, which showed even more integration points such as VMM 2012 and Orchestrator 2012. I’ll cover that session in another blog post.

Highlights of this session include:

  1. Service Manager is all about the power of integration. It brings together Operations Manager, Configuration Manager, Active Directory, and Opalis (renamed Orchestrator). On top of these products it layers a single configuration management DB (CMDB), workflows, a portal, a data warehouse, and forms.
  2. SM supports many processes, but not the entire stack found in ITIL or MOF. The processes it does support include risk management, compliance management, service asset & configuration management, change management, knowledge management, incident management, problem management, event management, request fulfillment, and service level management.
  3. First up is the Configuration Management process. The SM connector framework for AD, SCCM, and SCOM allows automatic identification of configuration items (CIs) in the environment. Regular synchronization makes sure the data (CMDB) is up to date. An audit trail is maintained for each CI.
  4. Second up is Incident Management. Incidents are ‘daily fires’ that the service desk puts out each day. Incidents are not part of standard operation and need to be addressed as quickly as possible. Incidents can be opened automatically by SCOM, from DCM (Desired Configuration Management) non-compliance, email from an end user, a phone call, or the web portal. You can categorize incidents and assign an impact and urgency. Impact + urgency = priority (see the sketch after this list). You can configure standard templates. Built-in links in the CMDB show related items. It also supports knowledge articles (provided by MS or custom entries), incident tasks, and capture of resolution information.
  5. Third up is Problem Management. Problem management deals with resolving the underlying cause of one or more incidents. The focus is to resolve root cause errors and find permanent solutions. SM can log problems against related CIs, and you can manually create problem records, or they can be automatically created. You can categorize and set priorities through various data fields, and see the impacted CIs. You can define and relate known errors, and define and relate knowledge base articles within SM. Integration with the incident management engine can close incidents when they are resolved, and integration with the change management module ensures that proper change processes are followed.
  6. Fourth up is Change Management. Change management records changes in the environment, affected services and computers, authorization to proceed, planning work, coordination of change implementation, and review of change completion. Details such as title and description are captured, including related CI items, and you can define change templates. Change records contain activities, and you can review their current state. Planning details are captured as well, and fields can capture scheduling details of the change.
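Circling back to the incident management bullet: ‘impact + urgency = priority’ is really a configurable lookup matrix in Service Manager rather than literal addition. A tiny sketch of that idea is below; the matrix values are made up for illustration and are not the SM defaults.

```powershell
# Illustration of an impact/urgency priority matrix (sample values, not SM defaults).
$priorityMatrix = @{
    "High,High"     = 1
    "High,Medium"   = 2
    "Medium,High"   = 2
    "High,Low"      = 3
    "Medium,Medium" = 3
    "Low,High"      = 3
    "Medium,Low"    = 4
    "Low,Medium"    = 4
    "Low,Low"       = 5
}

function Get-IncidentPriority ([string]$Impact, [string]$Urgency) {
    $priorityMatrix["$Impact,$Urgency"]
}

Get-IncidentPriority -Impact "High" -Urgency "Medium"   # returns 2
```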

The speaker went through a number of demonstrations of SM and all of the integration modules. I thought it did a good job of showing the power of the integrated CMDB and how it can help you streamline and automate your processes. However, SM won’t magically create your processes. So if your organization has poor processes to start with, you won’t get as much out of this tool. You really need to clearly define processes, socialize them with all staff, then enforce them. That’s when the power of SM can really be seen. For a v1.0 product, I think MS took a good stab at the problem set. The enhancements in SM 2012 close some of the gaps in 2010 and make the product even better.

SIM307: Securing your Windows Platform

This session covered a number of free Microsoft tools that can be used to secure the Windows operating system, and to a lesser extent, applications. For the most part they presented tools that I was familiar with, and even have written a blog about (such as EMET). However, I did learn about a new tool that I think is pretty slick that your IA/security guys might really like.

The session started off with a brief background on security, then went into the specific tools and a few demos. Highlights of this session included:

  1. Active Directory compromise is BAD. 100% cleanup assurance is extremely difficult, if not nearly impossible. Rebuild is expensive and embarrassing.
  2. Malware is a profit-driven industry; assume attackers are well funded and highly motivated. These are not just script kiddies trying to compromise a PC for fun.
  3. Attack techniques are getting more sophisticated and efficient, obfuscation techniques are constantly evolving, and the number of malware variants exceeded 286 million in 2010.
  4. Attackers want to gain a beachhead, install malware, escalate their privileges, introduce redundant access into your environment, and exfiltrate data or other nefarious actions.
  5. They then presented a graphic showing the cost of defending a network versus the security benefit returned. For most organizations the optimal point is ‘commercial reasonability,’ after which costs dramatically increase with diminishing security returns.
  6. If you aren’t even doing due diligence, then you are really up the creek. Due diligence includes limiting domain admin privileges, limiting local administrator access, not allowing internet browsing from administrative workstations, running 64-bit clients, patching, anti-virus software, and a firewall. In addition you should require two-factor authentication for administrators.
  7. A concept they introduced is the “trusted virtual machine client.” This is a highly hardened client VM that admins use to administer the domain. Goals of this VM include lowering risk, preventing malware infections, limiting damage should the VM become compromised, and remaining easy to use.
  8. This trusted admin VM should run Windows 7 x64, be joined to the domain, member of a hardened workstation OU, use the SSLF security profile, and NOT have browser access to the internet. Normal users should never login to these admin workstations. Only regular users should login to regular workstations. No server or domain admins allowed on regular workstations.
  9. You should have a concept of server admins, which is NOT a domain admin, and does NOT login to any clients. The account can only logon to authorized SERVERS.
  10. Next up is the first security tool, Security Compliance Manager (SCM). SCM lets you configure a security GPO baseline, maintain version control, then export it to a GPO to use in your domain. Microsoft provides many baselines that you can copy and modify to fit your security requirements. It also has a lot of built-in knowledge to help you understand what the settings do. It can also work with SCCM’s DCM (Desired Configuration Management).
  11. Second up is EMET, the Enhanced Mitigation Experience Toolkit. A new version of EMET (v2.1) was just released a few days ago, which you can download here. EMET protects against unknown vulnerabilities, blocks entire classes of exploits, and is easy to install. In just the last year EMET mitigated several zero-day Adobe and IE vulnerabilities, all before Adobe and Microsoft released patches. Unfortunately for enterprises there is no centralized control or native reporting. Enterprise enhancements are in the works, but no ETA.
  12. AppLocker is a new feature in Windows 7 and Server 2008 R2 that lets you easily create whitelists and blacklists of applications. Unlike SRP (software restriction policies) in previous generations of OSes, AppLocker is easy to configure, can automatically create rules (see the sketch after this list), and is far more flexible.
  13. The last tool, which was new to me, is Attack Surface Analyzer (ASA). ASA identifies the changes in system state, runtime parameters, and securable objects in Windows. It’s part of Microsoft’s internal SDL (secure development lifecycle) process. Basically you execute the tool on a computer, and it will report any insecure findings such as weak ACLs on objects. You can also schedule snapshots of systems and do a historical comparison. You can download a beta here.
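Before getting to more on ASA, here’s a quick taste of the AppLocker point above (automatically creating rules) using the AppLocker cmdlets that ship with Windows 7 / Server 2008 R2. The paths are just examples, and in production you would import the generated policy into a GPO after testing it rather than applying it locally as shown.

```powershell
Import-Module AppLocker

# Generate publisher rules (hash rules for unsigned files) from the software
# installed on a reference machine.
$policyXml = Get-ChildItem "C:\Program Files" -Recurse -Include *.exe -ErrorAction SilentlyContinue |
    Get-AppLockerFileInformation -ErrorAction SilentlyContinue |
    New-AppLockerPolicy -RuleType Publisher, Hash -User Everyone -Optimize -Xml

$policyXml | Out-File "C:\Policies\AppLockerBaseline.xml"

# Test-apply locally (merging with any existing rules); use a GPO for domain-wide rollout.
Set-AppLockerPolicy -XmlPolicy "C:\Policies\AppLockerBaseline.xml" -Merge
```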

ASA is a great tool for analyzing golden images before you deploy them, or hosts in high-risk environments like DMZs. It analyzes far more than just filesystem or registry ACLs: COM+ objects, named pipes, GAC assemblies, network shares, threads, handles, ports, and other deeply buried Windows features that you can’t possibly analyze manually. Microsoft uses it on 100% of shipping products, and any severity 1 finding prevents a product from shipping unless a senior VP within MS grants a waiver.

A typical use scenario would be to run the tool on a virgin OS image (without any apps), then install all of your apps, then re-run the tool and look for any insecure settings that your applications created. You will probably see some false positives for weak ACLs involving the TrustedInstaller account; you can ignore those. Microsoft wanted to be transparent and not hide these findings, although I do think a check box to suppress them would be useful.

Fully supported platforms include Windows 7 and Server 2008 R2. You can do command line analysis and collection of Windows Vista and Server 2008 systems. Windows 8 and Server 2012 will require a new version of the tool.

Security is not just a matter of applying the right GPOs, installing anti-virus software, running a few tools, and calling it a day. Threats are constantly evolving, and policies and procedures are extremely important. User education is also critical, and often overlooked. However, the tools covered in this session are a great start for hardening your base operating system.

SIM354: System Center Operations Manager 2012 Network Monitoring

This session focused on the new (and pretty robust, IMHO) network monitoring enhancements in OpsMgr 2012. Previously, network monitoring in OpsMgr was very, very basic. So basic that I suspect not many people really used OpsMgr to monitor network devices. That all changes in OpsMgr 2012, where the OpsMgr team developed their own SNMP (v1, v2, v3) discovery engine and is working with network vendors to build an extensive list of certified devices. The most in-depth monitoring is for Cisco devices.

Highlights of this session include:

  1. OpsMgr 2012 supports network discovery, network monitoring,  visualization, and reporting.
  2. Key takeaway is that IT operations can now gain visibility into the network to reduce mean time to resolution.
  3. Out of the box it will include multi-vendor support (Cisco, Foundry, etc.), supports IPv4 and IPv6, and partners can also build on the platform to further extend the feature set.
  4. Network discovery finds things such as connectivity, VLAN membership, HSRP groups, server NICs, port/interface details, processor details, and memory.
  5. Network discovery can be explicit, or recursive (using ARP, IP topology, MIB).
  6. Discovery can run on demand, or on a scheduled basis. Some discoveries can be initiated by device traps.
  7. Network monitoring stats include up/down, volume of inbound/outbound traffic, % utilization, drop and broadcast rates, processor % utilization, in-depth memory counters for Cisco (including fragmentation), and free memory.
  8. You can monitor connection health (looks at both ends of the connection), VLAN health (based on switch status), and HSRP groups.
  9. Built-in are a number of dashboards including network summary, network node details, network interface details, and vicinity views.
  10. Built-in reports include memory utilization, processor utilization, port traffic volume, port error analysis, and port packet analysis.
  11. OpsMgr network monitoring is not meant to replace network engineers’ monitoring tools. However, I think it is ideally suited for service desk level 1/2 staff to monitor the network. Combined with SharePoint or Visio dashboards, you could build some really nifty real-time dashboards for a variety of groups in your organization.
  12. No MIB import support. Microsoft will certify devices and release device updates in cumulative updates and service packs. CUs are typically on a quarterly basis. They are working on a process for customers to request specific devices to get certified.
  13. No current support for NetFlow stats. It relies on snmp gets for performance counters. Netflow may be added in the future.
  14. One OpsMgr network management server can probably support approximately 500 devices. More testing will be performed and MS will come out with performance details closer to RTM.
  15. For 2,500 devices you can expect 15GB of additional storage required in your OpsMgr operational database, and about 100GB in your data warehouse for one year of data (see the quick math after this list).
  16. No Fibre Channel or SAN switch support. May be added in future releases.
  17. OpsMgr only needs read-only SNMP access, NOT read/write.
  18. Currently there is no event correlation between a switch going down and the servers connected to it. All objects will alert in OpsMgr. Future releases may have a correlation engine to suppress alerts.
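On the sizing numbers in point 15, the per-device math is easy to scale to your own environment, assuming growth is roughly linear (my assumption; the presenters only gave the 2,500-device figures):

```powershell
# Per-device storage footprint derived from the session's 2,500-device figures.
$devices     = 2500
$opsDbGB     = 15      # OpsMgr operational database
$dwGBPerYear = 100     # data warehouse, one year of data

$opsDbMBPerDevice  = ($opsDbGB * 1024) / $devices        # ~6 MB per device
$dwMBPerDeviceYear = ($dwGBPerYear * 1024) / $devices    # ~41 MB per device per year

# Example: scale to a hypothetical 800-device network
$myDevices = 800
"{0:N0} MB operational DB, {1:N0} MB/year data warehouse" -f ($opsDbMBPerDevice * $myDevices), ($dwMBPerDeviceYear * $myDevices)
```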

In short, even though Microsoft claims to do just basic network monitoring and this is a v1.0 module, I think it’s pretty darn full featured. Given their commitment to supporting additional devices in their quarterly CUs, that should help keep the module relevant and supporting the latest and greatest hardware.

For service desks, the out-of-the-box functionality with a few custom dashboards has the potential to eliminate the need for tools such as WhatsUp Gold or SolarWinds. Of course network engineers still need their heavy-duty tools, but seriously consider the cool Visio/SharePoint dashboard features of OpsMgr combined with network monitoring.

VIR305: Creating App-V Packages with App-V 4.6 SP1

Application virtualization is one of the “new” areas of virtualization I’m very excited about. App-V has been around for several years in various forms, formerly known as SoftGrid. Server App-V is just around the corner, and I think it is more exciting than client virtualization, but that’s a topic for another blog post. App-V basically wraps up an application into a self-contained package, which you can deploy via various means.

Unlike a traditional software package, the application is not directly installed onto the client. There’s a level of abstraction between the software and the underlying operating system. This lets you, for example, have multiple versions of Microsoft Office or Java on your computer without conflicts. In effect, App-V turns application delivery into a service. Other benefits include centralized servicing, centralized patching, tight version control, and better software metering. You may have also heard of VMware ThinApp, which is similar in concept.

This session covers the enhancements made in SP1 of App-V 4.6. These enhancements include:

  1. No 1990s era 8.3 file name restriction for directory paths. Yipppee!
  2. New sequencer diagnostics now proactively warn you of potential problems with a software package before you get to the end, start testing, and find out something is broken. Issues include having anti-virus running, software that includes device drivers, or other things that make application virtualization harder.
  3. An XML report is generated with each package showing all of the diagnostic data, so if you do run into issues, you have documentation about the packaging process to help you troubleshoot the issue. Report includes excluded files, drivers, COM+ objects, system differences, SxS conflicts and shell extensions.
  4. Diagnostic alerts include pending reboots, a VM not reverted, services enabled like Defender or SMS, etc.
  5. Dynamic Suite Composition (DSC) allows you to more flexibly package several applications or components (such as plug-ins or middleware) so the suite of software works properly. Office plug-ins are very common.
  6. The major news in SP1 is package accelerators. Previously, everyone that wanted to package up an application, say Office 2010, had to go through a somewhat lengthy and tedious process. No more! With package accelerators you point App-V to the install files and the accelerator files, click a few times, and voilà, out the other end comes a sequenced application.
  7. Depending on the application, there may be a little more work required to use the accelerator, but it still eliminates a vast majority of the trial and error associated with sequencing. Microsoft, third party vendors, and the community can create and release accelerators. Ones for Office 2010 and some Adobe products already exist.
  8. Project templates let you pre-populate sequencer GUI settings, or access settings that you can’t via command line automation.
  9. New CLI package optimization features let you launch all short cuts, control timeouts, and other features.

App-V 4.6 SP1 is already out, released back in March 2011. So you don’t have to wait to use the new time-saving features such as package accelerators. You can download a lot of them from here. If you haven’t looked at application virtualization, you really should. Physically installing applications on clients these days is so yesterday. LOL. Now not all apps can be virtualized, but many, many can.

FYI, SCCM 2012 is tightly coupled with App-V and has great in-depth support. So if you are a Microsoft shop and have SCCM, then you really need to look at App-V. Other application virtualization platforms just won’t have the level of integration that you may want. See my SCCM 2012 posts for more details. Even if you don’t use SCCM, App-V can still provide you some significant benefits. VDI environments almost require App-V if you want to follow best practices and maintain very clean/minimal base images.

SIM361: System Center VMM 2012: OSD, OOB and Agent Management

So this is the second part of SIM361, which actually covers the content in the official session title. The last half of the session focused on the bare metal deployment and automatic cluster creation process for Hyper-V. So if you are a VMware or XenServer shop, you can skip the rest of the content. vSphere 5.0 will have a bare metal deployment appliance, so go read up on that. 🙂 But for you Hyper-V users, keep reading.

Hyper-V host lifecycle management features include:

  1. Full control of the bare metal using a baseboard management controller (BMC) (e.g. DRAC, iLO, etc.). Supports discovery of basic hardware inventory (SMBIOS GUID, model, asset tag, serial number, etc.). It can also control power states such as power on and power off.
  2. Supports IPMI, DCMI, and WS-MAN interfaces to BMC devices. This interface is extensible.
  3. Provision Hyper-V onto a bare metal machine.
  4. Fully automated Hyper-V cluster creation.
  5. Leverages a VMM server, WDS server, and a library server. Co-exists very well with an existing WDS server (but requires Server 2008 R2 WDS). Dynamic driver injection support as well.
  6. Deploys a VHD to the bare metal, meaning the server permanently boots off the VHD, not a traditional disk partition.
  7. Automates IP addressing, domain join, role/feature installation, computer naming, etc.

So there you have it…..deploy Hyper-V hosts directly from VMM 2012 with just a few clicks of a mouse.

SIM361: System Center VMM 2012: VMware and XenServer Support Features

I’ve broken this SCVMM 2012 session into two blog posts, since half of the content wasn’t directly related to Microsoft’s session title and you might otherwise overlook this great info about VMware and XenServer support in VMM 2012.

It covered the enhanced integration of VMware and XenServer within VMM 2012 and the changes from the previous version (SCVMM 2008 R2 SP1). New to SCVMM 2012 is full support for ESX 4.1 and XenServer. The previous version had so little VMware support that, IMHO, it was practically useless. In fact the speaker asked the audience who uses VMM to manage their VMware environment and no one raised their hand. Ouch! There was no previous XenServer support at all.

VMM 2012 has a virtualization abstraction layer that allows VMM to present a common interface, yet talk to various hypervisors. For example, the same PowerShell command to live migrate a VM will work on Hyper-V, ESX 4.1, or XenServer. It is very likely datacenters will have more than one hypervisor, so this is a nice common point of administration for supported operations. VMM 2012 now has over 460 PowerShell cmdlets, up from 160 in the previous release.
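To make that concrete, here is a minimal sketch of the kind of call the speaker was describing. The server, VM, and host names are placeholders, and while Move-SCVirtualMachine is the cmdlet I believe handles migrations in the VMM 2012 module, verify the exact parameters in your own environment before relying on this.

```powershell
# Sketch: one cmdlet drives a live migration whether the destination host runs
# Hyper-V, ESX 4.1, or XenServer; VMM translates it into live migration,
# vMotion, or XenMotion as appropriate. Names below are placeholders.
Import-Module virtualmachinemanager
Get-SCVMMServer -ComputerName "vmm01.example.com" | Out-Null

$vm         = Get-SCVirtualMachine -Name "web-frontend-01"
$targetHost = Get-SCVMHost -ComputerName "esx02.example.com"

Move-SCVirtualMachine -VM $vm -VMHost $targetHost -RunAsynchronously
```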

VMware support enhancements include:

  1. Import ESX hosts and clusters and put them into any folder structure in VMM 2012, unlike previous versions that did a one-time static import of your datacenter object tree.
  2. Discovers standard and distributed port groups, and virtual switches.
  3. VM templates are also discovered, and it imports the metadata about the VM template but does not touch nor ever delete the VM template (unlike previous versions).
  4. VM workflow now uses vCenter to do the VM copy (which means it could leverage VAAI).
  5. Thin provisioned VM templates are supported.
  6. You can’t create FT VMs (but who really uses those anyway given all of the limitations).
  7. vMotion and Storage vMotion are supported.
  8. Utilizes the native VMware HTTPS interface to vCenter (seems like a no brainer but the previous version did NOT.)
  9. No requirement to enable root SSH on ESX servers (seriously, what was MS thinking when they required this?)

XenServer support includes:

  1. No dependency on XenCenter. VMM 2012 directly talks to each XenServer.
  2. Like VMware, you configure the XenServer host outside of VMM. When it’s fully configured, then you add it to VMM.
  3. It supports standalone and pooled hosts.
  4. Can enable maintenance mode (like it can with VMware), and shutdown, power on, and restart the server.
  5. Supports iSCSI, NFS, HBA, and StorageLink disk types, both shared and local.
  6. Supports ISO repositories, although they must be read-write (not read-only).
  7. Due to differences in how vSwitches work, VMM wraps a single vSwitch around all of the XenServer vSwitches that get created when you use multiple VLANs.
  8. Full VM support, both paravirtual and hardware VMs. It does leverage checkpoints (snapshots) and guest console access.
  9. No dynamic memory support.

It appears to me there are very few caveats regarding features not supported on both hypervisors. That’s not to say VMM will replace vCenter. It was clearly stated it will not, and vCenter will still be used to manage your ESX servers. However, many of the very common daily tasks that you do in vCenter can now be done in VMM 2012. There’s no tie-in to VUM, for example, so any host patching and maintenance will still be vCenter only. But for the purposes of building and managing a private cloud, the level of support goes very deep. Bravo VMM team (just please rename it to something like Cloud Manager).

Now the million dollar question is, how soon will MS support vSphere 5.0 when it’s GA?