Archives for July 2015

Nutanix Acropolis 101: Creating VMs

Now that Nutanix Acropolis is shipping with NOS 4.1.3, I wanted to cover a few basics in a series of blog posts. Most of you won’t be familiar with Acropolis, which is based on KVM. If you’ve used KVM in the past, you will know how difficult it can be. Acropolis removes much of the complexity and headache. So for my first installment, I wanted to do something very basic: upload an ISO to the Nutanix cluster, then create a VM and install the OS from that ISO.

Please take note that the Acropolis road map is quite detailed, with many major enhancements in terms of both features and ease of use. So in a few months this procedure will probably change, particularly around ISO management. But 4.1.3 is shipping today, so here’s what you need to do. This assumes you have a running Nutanix cluster imaged with Acropolis/KVM.

1. From your Windows machine launch WinSCP. Enter the hostname/IP of your Acropolis cluster, change the port to 2222, then use a username of ‘admin’ and your cluster password. WinSCP should then connect to NDFS.

2. Once WinSCP connects, in the right pane you will see a list of containers. If no containers exist, then you have a brand new cluster and need to go into PRISM and create a storage pool and container. Refreshing WinSCP may not reveal the new container(s), so you may need to reconnect to see them. I went into my first container and created a directory called “ISO”.

3. Next, in the left pane of WinSCP navigate to your ISO(s) and drag them into the right pane. Wait for the upload process to complete.
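If you prefer the command line over WinSCP, the same upload can be done with any SFTP client, since NDFS exposes SFTP on port 2222. A minimal sketch, assuming a cluster IP of 10.1.1.50 and a container named container1 (both placeholders):

```shell
# Hypothetical example: the cluster IP, container name, and ISO filename
# are placeholders. Authenticate as the Prism 'admin' user on port 2222.
sftp -oPort=2222 admin@10.1.1.50 <<'EOF'
mkdir /container1/ISO
put en_windows_server_2012_r2.iso /container1/ISO/
EOF
```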

Note: In NOS 4.5 we will be adding a PRISM ISO upload option, so you won’t need to resort to WinSCP for uploading your ISOs. There are also some cool features around ISO libraries in the works.

4. Log into PRISM (the Nutanix GUI, not the NSA spy tool). From the left pull-down menu select “VM”.


5.  From the left-side menu, select Create VM:


6. Next up we need to define some VM parameters, such as memory and CPU. Fill in the form as appropriate for your VM.


7. Next, click on New Disk. The dialog box below opens up. Assuming you’ve already created a container, just enter the size of the disk in GB. All disks are thin provisioned, so no need to select various format types.

8. After the disk is created, we now need to add a NIC. If this is the first VM you have created and you haven’t yet defined any networks, you need to define a network to attach the VM to. You can of course have different networks, each associated with a different VLAN. Click on “New NIC”, and you will see the box below.


Because we have no networks, click on “Add Network”. Add the relevant VLAN ID and close the dialog box. Note, in NOS 4.5 you will be able to define network names, so you won’t have to rely on remembering long UUIDs.
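The same VM shell can also be built from the Acropolis CLI (acli) on a CVM, if you’d rather script it than click through PRISM. This is a rough sketch only: the VM name, sizes, and VLAN ID are placeholders, and the exact flag names may differ between NOS versions (network naming, as noted above, only arrives in 4.5):

```shell
# Hypothetical acli sketch, run from a CVM. Names, sizes, and the VLAN ID
# are placeholders; verify flag names against your NOS version.
acli vm.create win2012r2 num_vcpus=2 memory=4G
acli vm.disk_create win2012r2 create_size=60G container=container1
acli net.create vlan.100 vlan=100
acli vm.nic_create win2012r2 network=vlan.100
```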

9. Now, if you followed my article to inject the VirtIO drivers into your Windows ISO, then configure the CD-ROM to mount your ISO image from the NDFS datastore. If you have a virgin Windows ISO image and didn’t inject the drivers, then you can mount a second CD-ROM to the VM. In this second CD-ROM mount your VirtIO ISO image.
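Attaching the ISO can likewise be sketched in acli. Again, the NDFS path and the clone flag name are assumptions and may vary by NOS version:

```shell
# Hypothetical sketch: attach the uploaded ISO as a CD-ROM, then power on.
# The container path and flag names are assumptions for illustration.
acli vm.disk_create win2012r2 cdrom=true clone_nfs_file=/container1/ISO/en_windows_server_2012_r2.iso
acli vm.on win2012r2
```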

10. Power on the VM, and wait for Windows to boot. When you get to the “Where do you want to install Windows?” screen you have two options. First, if you created your custom Windows ISO then your boot disk should be listed and you can proceed as normal. If you are relying on the second CD-ROM, then click on Load Driver and browse to the vioscsi driver for the appropriate operating system. Your boot disk should now appear.


11. Proceed with the Windows installation as normal. Once Windows is fully installed, login as administrator.

12. Open the Device Manager, and you will see a NIC under Other Devices with a yellow exclamation point. Right-click it and select Update Driver Software. On the VirtIO CD navigate to the NetKVM folder and the appropriate OS. Your NIC will now be detected.
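If you’d rather script the driver install than click through Device Manager, pnputil can stage and install the INF from an elevated command prompt inside the guest. The drive letter and folder names below are assumptions based on the Fedora VirtIO ISO layout:

```shell
:: Hypothetical alternative to Device Manager (elevated command prompt).
:: The CD-ROM drive letter and OS folder name are placeholders.
pnputil -i -a Z:\NetKVM\2k12R2\amd64\netkvm.inf
```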


13. Congratulations! You now have a fully functional Acropolis Windows VM. Unlike other hypervisors, you don’t need to install any additional drivers, just SCSI and NIC. Keyboard, video and mouse are all native and should work as expected.

Nutanix NOS 4.1.4 Features

If you are a Nutanix customer, you know that we release new versions of our NOS platform on a very frequent basis. Release timing varies, but every 2-3 months you will see releases pop up. Some are major with a boatload of new features, and some are mostly bug fixes with a few minor features. I’m proud to announce that NOS 4.1.4 is now shipping! This is a minor update, but it does have the following features.

In NOS 4.1.x, metro availability provides synchronous replication between two sites with no more than 5ms of latency between them. NOS 4.1.4 brings some enhancements to metro availability:

  • Ability to take snapshots of a metro protection domain
  • Snapshot creation is driven from primary and performed on both primary and secondary
  • Only protection domain metadata replication is performed to complete the snapshot (minimal data transfer)
  • Interoperability with a conventional async tertiary site
  • User can create a schedule to replicate to async site from GUI
  • Metro configuration is a starting point for 3-site DR, with tertiary replication as an add-on

Over the last year Nutanix has released a number of new models to satisfy certain requirements, such as compute heavy, cold storage, IOPS heavy, all SSD (AFA), etc. We don’t stand still, and we constantly listen to customers. So NOS 4.1.4 debuts support for the Nutanix NX-1065s:

  • Replacement/Upgrade from NX-1020
  • Single CPU socket (E5-2630 v2 or E5-2680 v2)
  • Ivy Bridge CPU
  • 3.5″ 2TB, 4TB, 6TB drives
  • SSDs can be 480GB, 800GB or 1.2TB
  • Supports self-encrypting drives
  • Up to 256GB DDR3 RAM
  • 2x 10Gb NICs (or 2x 1Gbps)
  • SATADOM upgraded to 6Gbps

There is also an enhancement to our Acropolis hypervisor. When an Acropolis host enters maintenance mode, VMs are moved to a temporary host. After the host exits maintenance mode, the VMs are automatically returned to the original host, eliminating the need to manually move them. Restore VM Locality also occurs when a failed node is restored on a cluster that was configured with Best Effort High Availability (HA).

This version also includes a number of security updates, to address several CVEs, such as TLS issues, DoS, and memory corruption.

NOS 4.1.3 and later support the NX-6035c platform mixed with other blocks in a cluster running the Acropolis, Hyper-V, and ESXi hypervisors.

Nutanix Engineering has significantly improved the performance of the disk firmware upgrade process, so you might observe better results when upgrading.

And there you have it! Enhanced metro availability, a new low-end block, enhancements to Acropolis and other new features. You can download 4.1.4 directly from the support portal. Not too far around the corner is a MAJOR NOS upgrade, with a list as long as my arm of new features. I won’t spill the beans, but stay tuned for some really cool enhancements. Also expect another minor release in August, with yet more features. And for those of you wanting full ‘legacy’ Microsoft clusters on Nutanix, stay tuned for good news on that front.

Injecting KVM VirtIO Drivers into Windows

When dealing with hypervisors it is not uncommon to be required to supply specific drivers in order to recognize the virtual hardware, such as NICs and SCSI controllers. A while back I wrote blog articles on how to inject VMware drivers (PVSCSI and VMXNET3) into Windows Server 2008 R2/Windows 7 and Windows Server 2012 images. You can check out those articles here and here. But if you use the Nutanix Acropolis hypervisor (based on KVM), you will need a different set of drivers.

This article will show you how to inject the VirtIO drivers into your Windows Server 2012 R2 ISO, so that it will recognize the virtual KVM hardware. Do keep in mind that in the future Nutanix will be redistributing the Fedora VirtIO drivers, after we get them WHQL signed by Microsoft. So while this article uses unsigned Fedora drivers, in the future you can use fully signed and supported drivers. Stay tuned for that release!

The process below injects the required drivers into the Windows Server 2012 R2 installation boot files, and into the actual Windows Server operating system, for a fully KVM-aware image. The drivers include Balloon, NetKVM, serial, rng, SCSI, and stor.

1. Download the Windows 8 ADK (Assessment and Deployment Kit) from here. Never mind that it says Windows 8, as it will work with Windows Server 2012 R2.

2. Start the installation process, and after a long download select the two options below: Deployment Tools and Windows Preinstallation Environment (Windows PE). WinPE is optional, but in case you need it in the future, I’d install it anyway. If you are in a hurry and won’t ever use WinPE, just select Deployment Tools.

3. Mount the Windows Server 2012 R2 ISO. Navigate to the Sources directory and copy boot.wim and install.wim to your computer, say on the C: drive under C:\WIM.

4. Download the Fedora VirtIO Drivers from here, or when they are released, the Microsoft-signed Nutanix Acropolis driver bundle. Fedora packages the drivers as an ISO, so mount that ISO to your VM. I’m using the Z drive for my CD-ROM.

5. Create a folder on the C: drive called Mount. This will be the WIM mounting target.

6. Depending on your Windows Server 2012 R2 ISO image, it may contain a varying number of images in the WIM. The VL ISO I have contains four indexes, or images. You can list the indexes with the following command:

dism /get-wiminfo /wimfile:C:\WIM\install.wim

Decide which index you want to inject the drivers into. Open the provided batch file found here and modify the IDX variable as needed. You can run the script multiple times to do all indexed images, if you wish.
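The batch file itself isn’t reproduced here, but the core of the injection amounts to mounting each WIM, adding the drivers, and committing. A minimal sketch of those DISM steps, assuming the C:\WIM and C:\Mount paths from earlier and the VirtIO ISO mounted at Z: (the IDX value and /ForceUnsigned for the unsigned Fedora drivers are assumptions):

```shell
:: Minimal sketch of the driver-injection steps; the IDX value, paths, and
:: driver layout are assumptions based on the steps above.
set IDX=4
:: boot.wim index 2 is the Windows Setup image (index 1 is bare WinPE).
dism /Mount-Wim /WimFile:C:\WIM\boot.wim /Index:2 /MountDir:C:\Mount
dism /Image:C:\Mount /Add-Driver /Driver:Z:\ /Recurse /ForceUnsigned
dism /Unmount-Wim /MountDir:C:\Mount /Commit
:: Repeat for the chosen install.wim image index.
dism /Mount-Wim /WimFile:C:\WIM\install.wim /Index:%IDX% /MountDir:C:\Mount
dism /Image:C:\Mount /Add-Driver /Driver:Z:\ /Recurse /ForceUnsigned
dism /Unmount-Wim /MountDir:C:\Mount /Commit
```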

7. Run the batch file and wait for it to complete. It should take a few minutes, depending on the speed of your disks. Make sure you monitor the output for any errors, in case you messed up the paths to the files.


8. Now you can re-build the Windows ISO with the updated WIM files, and you are set. Just create a VM shell on Nutanix Acropolis, then mount the updated ISO, and it will be all set for a smooth installation. If you don’t have an ISO building tool, I recommend UltraISO. It’s not free, but I’ve had exceptionally good luck with it for many years.

Download the batch file here.

vSphere 6 Hardening Guide GA

During much of my career, I’ve been in the Government space and had to implement DISA STIGs for a variety of products including hypervisors. If you are a VMware customer and plan on using vSphere 6.0, you will be pleased to know that the vSphere 6.0 hardening guide is now GA. Some big changes were made in this version versus previous versions, so it should be more usable. You can find the full VMware blog post here.

I never saw this before, but VMware has a great landing page for security guides. From this page you can download a variety of guides and spreadsheets, very easily. That landing page is here.

What I’d really like to see from VMware is the majority of the security settings baked into the hypervisor with automated reporting. It can take weeks or months of STIG testing to get all of the settings right, run reports, etc. I hope that VMware will make hardening the hypervisor even easier, and take away much of the pain.

VMTurbo in the Cloud is here

The SaaS market is becoming very popular, and software that was once only on-prem is now migrating to the cloud. I’m excited to announce that with VMTurbo 5.2, it is now offered as a SaaS deployment through AWS. This means you can now control your on-prem environment with VMTurbo in the cloud. That sounds like a great combination to me. VMTurbo claims deployment takes less than 3 minutes in AWS.

And better than deploying it in 3 minutes: for a limited time it is completely free. AWS will still charge you for running the VM, but the VMTurbo license is free. You can check out their full blog post about it here.

I also have it on good authority that an Azure SaaS option is in the works, but not quite ready for GA. So if you are an Azure customer and love VMTurbo, just hold on a bit longer and you will also have a solid deployment option.

On a side note, VMTurbo is also a strong partner with Nutanix. And in fact, a version of VMTurbo that has deep Nutanix support is in early adopter (EA) phase. GA of the Nutanix-aware version is due out in August 2015. So if you are a Nutanix customer and use AWS, shortly you can control your Nutanix clusters from the cloud! Read all about it here.