Archives for October 2009

New Microsoft Security Guides for Windows 7 and IE8!

Earlier this year I was honored to be asked by Microsoft to join a private alpha of their security solution accelerators for Windows 7, IE8, and Server 2008 R2. I provided a lot of feedback on the guides, and as a result several tweaks and changes were made. That feedback earned me a spot among the contributors and reviewers on page 6 of the Windows 7 and IE 8 security guides. Not sure why Server 2008 R2 isn’t up yet, but I’d expect it soon.

At my previous job at PointBridge I was asked to provide feedback on the Windows Server 2008 security guide, so my name found its way into that guide as well. Microsoft really does listen to feedback and takes comments seriously. Hopefully I can participate in future security guides.

You can download the brand new Windows 7 and IE 8 compliance management toolkits from here. If you are affiliated with the DoD, note that these security guides received input from DISA, NIST, and NSA. I don’t know when official FDCC documents will be published, but I’m in contact with the lead program manager on that effort and will post any ETAs as I find them out.

More 3PAR announcements!

Wow, 3PAR is really on a roll with new announcements for upcoming versions of the InServ OS. The official 3PAR announcement can be seen here.

Highlights include:
– Federate multiple InServ arrays so they appear as one large storage cluster. The various arrays could be miles apart.

– Persistent cache, where if your disk array has four controllers and the cache memory fails in one controller, performance is maintained in the other controllers.

– Multi-site synchronous data replication

– Support for RAID-MP (multi-parity). Essentially RAID-6, with a less than 15% performance hit over RAID-10. Support for triple and quadruple parity in the future.

VMware Workstation 7 released!!

If you are a VMware Workstation 6.x user, Workstation 7 is a must-have upgrade. You can read more about it here. In a blog post earlier this month I listed many of the new features. Check it out! Aero Glass for Windows 7 VMs is really killer.

Virtualize ESX inside VMware Workstation

If you want to experiment with ESX(i) on your home computer and have VMware Workstation 6.5, it’s pretty easy to do. Two key steps are needed in order to be successful.

1. When creating a new VM inside Workstation for ESX(i), choose Red Hat Enterprise Linux 4 64-bit as the guest operating system.

2. After you have created your new VM in which ESX(i) will live, two additions are needed to the associated VMX file:

monitor.virtual_exec = "hardware"
monitor_control.restrict_backdoor = "TRUE"

After you make these changes, install ESX(i) from the VMware installation media. Please note you can only power on 32-bit guest VMs inside of ESX(i), since Workstation 6.5 does not virtualize the CPU’s hardware virtualization extensions (Intel VT-x/AMD-V). Depending on what you want to do inside ESX(i), the inability to run 64-bit VMs could be a showstopper. Of course you can run 64-bit VMs directly in a Workstation VM, but that’s not the same environment as ESX(i).

SharePoint 2010 rocks!

I’ll be the first to admit, my experience with SharePoint has been from the end-user side. But if you are a SharePoint developer, check out the great blogs at PointBridge. This week in particular, their blogs have been smoking with wazoo (technical term) goodies for developers. 90% of it is Greek to me, but I’m sure SharePoint geeks will appreciate all their great information.

In case you missed it, this week was the Microsoft SharePoint Conference in Las Vegas. Over 4,000 people attended! PointBridge took top honors in the Multi-Capability Solution Award category of the SharePoint Partner Awards at the Microsoft SharePoint Conference 2009. You can check out the press release here.

Are your fabrics converging to FCoE?

If you are planning a major datacenter upgrade in 2010 or beyond, you won’t be able to escape vendors pushing their converged fabric products. FCoE is being adopted by all major server and network manufacturers at a rapid clip.

I think vendors like Brocade will have a tough time in the market, even though they have their own converged solutions. In fact, there are rumors Brocade has put itself up for sale and HP or Juniper might be in the market for an acquisition. That could help counter Cisco, the 100-ton Goliath of the networking world, and its recent entrance into the server market.

2010 will be a very interesting year, as the economy wakes up, virtualization increases, blade server manufacturers compete tooth and nail, and the converged fabric starts to gain traction. If you are a datacenter architect, you will have your plate full over the next 12 – 18 months. If you haven’t yet thought about FCoE and converging your fabrics, start paying attention now.

If you are currently a Cisco networking shop, then it’s very likely your network engineers will recommend adopting the Cisco Nexus line of switches which support FCoE. If your storage team is using Brocade or other SAN switches, some training will be in order to migrate to the Nexus switches.

Start planning today for 10Gb Ethernet and get your cable plant ready for the next generation of interconnects. This may be a bumpy ride, so buckle up!

Who’s your blade server daddy?

A couple of weeks ago Gartner released their October 2009 Magic Quadrant report for blade servers. If you have been tracking blade servers, you know the market has been rapidly adopting new technology due to the tidal wave of server virtualization. 10Gb Ethernet, 8Gb Fibre Channel, 10Gb FCoE, and 4x InfiniBand all came to market in 2009 in a strong way.

This summer Cisco entered the server market for the first time with its UCS portfolio. So I was interested in seeing what Gartner had to say about the existing market and what impact Cisco will have.

First, a few factoids from the Gartner report:

  • HP is the market leader, by a long shot. They ship more blade servers than all other vendors combined.
  • Only three vendors are in the upper right quadrant (most complete vision, best ability to execute): HP (leader), followed by IBM then Dell.
  • Two vendors (HP and IBM) control 70% of the blade server market.
  • Cisco’s entrance to the market is ‘potentially highly disruptive and represents the transition toward fabric-based technology convergence.’
  • IBM ships more than twice as many blade servers as Dell.
  • IBM saw a sharp reduction in market share in 2007 and 2008 and is focusing on recovery.
  • Survival prospects for Sun blades are ‘good’ but market uncertainty with the Oracle take-over creates doubt.

Bottom line is that the market is highly competitive. Both IBM and Cisco are rapidly advancing towards a converged fabric. HP is the clear leader, but their competition is not sitting idle. In fact, HP is lagging their competition in shipping FCoE-enabled products. However, various standards required for FCoE were only recently ratified. I see 2010 as the year that businesses start to seriously look at converged storage and network fabrics. By then all major blade server manufacturers will have FCoE products, including HP.

In short, if you are seriously looking at blade servers then your three best bets today are HP, IBM, and either Cisco or Dell. If I had to bet money, I would guess that once the economy recovers, Cisco blades will see strong market acceptance. However, Cisco doesn’t have the decades of server experience that HP and IBM have, and it relies on partners for a complete solution.

It will be interesting to see the blade server market share and Magic Quadrant in the second half of 2010. Will HP still be leading? Will Cisco UCS take off or fall flat? Will Sun even be around? How strongly is Dell committed to blades? Let’s check back in a year… all of us may be in for a surprise.

SAN Zoning best practices – Be prepared for bunnies!

Today in my vSphere class a question came up about zoning a Fibre Channel SAN. Best practices from most storage vendors and security specialists recommend WWN single initiator/single target zoning. What the heck does that mean? First, it means a lot of zones – they multiply like bunnies!

An initiator is your HBA in a server. It always reaches out to your array and asks for data. It may have one or two ports, or more. The target is the storage processor (host port) on your disk array. It is the target of SCSI requests and executes commands it receives. In a typical SAN your server will have at least two HBA ports, and your storage array will have four or more. Most mid-range arrays will have four ports, like the HP EVA. High-end arrays like EMC Symmetrix, HDS USP and 3PAR T series support many more host facing ports.

Let’s picture zoning this way. Your server HBA has ports H1 and H2. Your storage array has ports S1, S2, S3 and S4. S1 and S2 are on one controller, and S3 and S4 are on the second controller. Ports H1, S1 and S3 are in one physical fabric and ports H2, S2 and S4 are in a second independent fabric.

For this situation we would have four zones:

H1 and S1
H1 and S3
H2 and S2
H2 and S4
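
To make this concrete, here is roughly what creating the two fabric A zones might look like on a Brocade switch (just a sketch; the WWNs, zone names, and config name are made up, so substitute your own, and this assumes an existing zone configuration called cfg_fabricA – use cfgcreate if you are starting from scratch):

zonecreate "z_srvA_H1__array_S1", "21:00:00:e0:8b:aa:bb:01; 50:00:11:22:33:44:55:01"
zonecreate "z_srvA_H1__array_S3", "21:00:00:e0:8b:aa:bb:01; 50:00:11:22:33:44:55:03"
cfgadd "cfg_fabricA", "z_srvA_H1__array_S1; z_srvA_H1__array_S3"
cfgsave
cfgenable "cfg_fabricA"

The H2/S2 and H2/S4 zones get created the same way on the fabric B switch.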

Each zone has a single initiator and a single target WWN. If your storage array has eight ports, then you’d end up with eight zones if all LUNs were presented through all eight ports. Why is this better than one big zone? First, there’s a Fibre Channel mechanism called an RSCN. Registered State Change Notifications are disruptive events sent to the fabric when changes happen.

Changes can be a new HBA logging in to the fabric, a device removed from the fabric, or other scenarios. RSCNs are disruptive and can interrupt in-flight I/Os. RSCNs should always be minimized and isolated. Period. In recent years switch vendors, like Brocade, have been very good at minimizing the number of devices which receive RSCNs. In the early days of Fibre Channel there was a potential to see a lot of RSCNs in a dynamic fabric, sometimes called RSCN storms.

If you have a single zone for all your storage devices, all hosts are impacted by RSCNs which can lead to fabric instability, poor performance, or other mysterious problems. Single initiator/single target zoning confines the RSCNs to only the devices that need to know about the state change.

Second, on the security front, you don’t want devices communicating with anything on the fabric they aren’t supposed to talk to. Does server A need to send traffic to server B? Absolutely not. Servers A and B only talk to the storage array. What if server A were compromised? You really don’t want it to have any communications path to other devices in the fabric. It could launch a DoS attack, or other nastiness. Think of zones as individual DMZs, or VLANs, where you tightly control what can talk to what.

A SAN-attached tape library requires even more zoning. Continuing the example, if server A needs to communicate with a tape library that has two FC ports, that will require two more zones.

Don’t get zoning confused with LUN masking/presentation. Once a server is zoned to storage array ports, you can present any number of LUNs without altering the zoning. Adding a new LUN? No problem. No zoning changes are needed since the server can already communicate with the storage array.

Yes, you will need to update your LUN masking to allow hosts to access the new LUN, but that’s completely independent of any zoning. The primary exception I can think of is if your storage array has a boatload of host-facing ports and you only present specific LUNs through specific storage ports. If your server isn’t zoned to all ports, then you may need to add zones if you want to present a LUN through ‘new’ ports the server can’t yet see.
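
As an illustration, on a 3PAR array like ours, exporting a brand-new virtual volume to an already-zoned host is a single array-side command and touches nothing on the fabric (a rough sketch with hypothetical volume, LUN, and host names):

createvlun vv_esx_datastore2 10 esx01

Rescan the HBAs on the host and the new LUN shows up, with the zoning untouched.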

Personally, in most situations I’d zone all server HBA ports to all disk array ports in their respective fabrics, using the single initiator/single target model. Exceptions would be very large SANs where your storage administrator reserves specific ports on the storage array for certain kinds of hosts. Maybe some storage ports only support ESX hosts, and others are for physical Unix hosts. In this case, zone a server to all storage ports that are appropriate, say all ESX ports if it will be an ESX host. I would limit zoning a server to no more than eight storage array ports, with four being much more common. Do you really need more than eight paths to your disk array? Highly doubtful.

Our 3PAR T400 array has eight host-facing ports, so each of our servers has eight zones just for the disk array LUNs. Add to that additional zones for our Fibre Channel physical tape libraries and our FalconStor VTL. Zones can multiply like rabbits if you have a lot of devices. Having a good naming standard for zones is key, plus a complete set of documentation.
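
One convention that works well (purely an example, adapt it to your environment) is to encode the initiator and target in the zone name:

z_<hostname>_<hba-port>__<array>_<node:slot:port>
z_esx01_hba1__t400_0_2_1

Whatever format you pick, be consistent and keep the documentation current.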

Switches have a maximum number of zones they support, so consult your switch vendor. Generally these numbers are in the thousands, so all but the largest environments won’t run out of zone database space in the switches. Brocade Fabric OS 6.0 supports 11,866 single initiator/single target zones using WWNs.

Brocade has a good whitepaper on secure SAN zoning best practices, which you can see here. Brocade calls this model of zoning “Single Initiator Zoning (SIZ)”.

Don’t partition your ESX LUNs!

Under some circumstances you may feel the need to partition an ESX LUN, so you can create two or more datastores. Why? Let’s say you have a small branch office with a single ESX host and it’s only using internal storage from a RAID controller. And for some reason you need two datastores. Maybe it’s to meet security requirements, or data separation requirements. Maybe you can’t store your VMs and templates on the same datastore because it’s against security policy.

Given these circumstances, you may be tempted to use traditional disk partitions and then format each with VMFS to get your two or more datastores. Bzzzt! Under no circumstances does VMware support a single LUN which is partitioned into two or more datastores. Why? It’s pretty simple actually. ESX utilizes SCSI reservations to lock a LUN and update disk metadata. During this lock, VMs can’t write to the disk. If you have two or more datastores on a single LUN, the reservations can step on each other. Expect problems!

Don’t try to be creative and pre-partition a LUN prior to installing ESX. Also, don’t create a new datastore without allocating all of the capacity in the hopes that you can add another datastore in the free space later. Impossible.

Solutions? Well, if your server has multiple drives then configure independent RAID arrays. For example, if your server has four drives, create two RAID-1 mirrors. This gives you two LUNs, and two datastores. Alternatively, use external shared disk storage which supports multiple datastores like NFS or iSCSI.
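
If you end up creating those datastores from the service console instead of the vSphere client, the idea is one VMFS volume per LUN, something like this (a sketch assuming ESX 4 with VMFS3; the device IDs below are made up and each LUN already has a single partition spanning the whole disk):

vmkfstools -C vmfs3 -b 1m -S Datastore1 /vmfs/devices/disks/naa.600508b1001c000000000000000000a1:1
vmkfstools -C vmfs3 -b 1m -S Datastore2 /vmfs/devices/disks/naa.600508b1001c000000000000000000b2:1

Each command formats an entire LUN as its own datastore, with no sub-partitioning involved.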

If you have two hosts, then you could use something like the HP LeftHand iSCSI virtual storage appliance, which mirrors storage between the two nodes and presents multiple iSCSI datastores to each ESX host. This could have other uses as well, such as disaster recovery, since the LeftHand VSA supports VMware Site Recovery Manager (SRM) and remote data replication.

Stay thin with 3PAR! An easy diet for your storage array.

One of the problems with modern disk arrays that support thin provisioning is that volumes tend to get fat over time, just like us Americans! Instead of putting yourself on a diet, you can put your storage array on a diet. Think you will need to haul your array to the gym and shove it on the treadmill? Nope! 3PAR announced today a set of technologies that help keep thin provisioned volumes thin, even as data is written and deleted over time.

In a traditional thin-provisioning-enabled array, if you have a 2TB LUN and write 1TB of data, then delete 800GB, that 800GB is still allocated to that volume…stranded. Over time that LUN can get fatter, depending on how data is written and deleted. If you have snapshots of those volumes, they can get fat too. Fat everywhere!

You can check out the announcement at 3PAR regarding their thin conversion, thin persistence, and thin copy reclamation. They have an interesting partnership with Symantec and their Veritas Storage Foundation product. As you delete files, the volume manager communicates with the storage array, telling it which blocks have been deleted and can go back into the shared pool. Now you can get your 800GB of space back!

This is, in principle, similar to the Windows 7/Server 2008 R2 TRIM command, which tells your SSD which blocks have been deleted so it can do housekeeping and keep the drive optimized. Maybe 3PAR can tie their thin engine into the TRIM command and bypass the Veritas volume manager?

For customers with a highly virtualized infrastructure using ESX, I hope 3PAR and VMware can integrate their APIs so that thin reclamation works for VMs stored on VMFS volumes. ESX 4.0 supports changed block tracking, so I don’t see it as a huge stretch to integrate ESX with the 3PAR thin reclamation technology just like they did with the Veritas volume manager. *Fingers crossed*

Network World has a good two-part write-up that explains some of the details very well. Part 2 is here.