Archives for February 2009

Storage Questions Part 1

The project I’m supporting is looking at ‘mid-range’ enterprise Fibre Channel storage arrays. As part of the investigation, I perform market surveys of the leading vendors. After we narrow down the field, I submit a long list of technical questions. While vendors’ web sites make their arrays sound like the best thing since sliced bread, the devil is in the details. Often you will uncover technical limitations that knock a vendor out of the running.

Since my list of questions is pretty lengthy, I’ll break it up into several installments. Of course, depending on your requirements, you will likely have additional questions. I grouped the questions into technical areas to help organize them.
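As an aside, since the answers come back from several vendors, something as simple as the following makes it easy to compare them side by side. This is just a minimal Python sketch: the categories mirror the lists below, only a couple of questions are shown, and the vendor name and answer are placeholders, not real data.

    # Minimal sketch for tracking vendor answers to the survey questions.
    # Categories mirror the question lists below; "VendorA" and its answer
    # are placeholders only.
    questions = {
        "Controller Details": ["What is the controller architecture?",
                               "Maximum storage capacity?"],
        "LUN Management":     ["Minimum/maximum LUN size?",
                               "Maximum number of LUNs?"],
        "Connectivity":       ["What are current connectivity options?"],
    }

    answers = {}  # keyed by (vendor, category, question)
    answers[("VendorA", "LUN Management", "Maximum number of LUNs?")] = "placeholder answer"

    # Print a simple side-by-side view of whatever has been filled in so far.
    for (vendor, category, question), answer in sorted(answers.items()):
        print("%-8s | %-16s | %s -> %s" % (vendor, category, question, answer))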

1. Controller Details
a. What is the controller architecture? (Active/active concurrent, Active/active non-concurrent, Active/passive, etc.)
b. What is your controller-to-controller connectivity? (Full crossbar switch, full mesh, point-to-point, other)
c. Min/max controllers?
d. Describe the controller hardware (standard server with custom software, custom hardware, proprietary ASICs, etc.).
e. Maximum number of hosts?
f. What is the cache size, and is it mirrored and ECC-protected?
g. Minimum/maximum number of disks?
h. Maximum storage capacity?
i. Supported disk capacities, interface speeds, spindle speeds, and disk types (FC, SATA, SAS, FATA)?
j. Is there a built-in UPS, and what is its run time?
k. If FC-AL is used on the back end, are both loops concurrently transferring data, or is one loop active and the other standby?

2. LUN Management
a. Minimum/maximum LUN size?
b. Maximum number of LUNs?
c. LUN size increments?
d. If a LUN is presented through four storage ports, how does that count against the overall LUN limit (does it decrement by one or by four)? A quick sketch of why this matters follows this list.
e. Dynamic LUN expansion/shrink?
f. Is LUN concatenation required for LUN expansion or large LUNs?
g. Maximum LUNs per host-facing storage controller port?
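To show why question (d) above is worth asking, here is a trivial Python sketch. The LUN limit, LUN count, and counting models are made-up numbers purely for illustration; the point is how quickly multipathed presentations consume the limit if every port presentation counts separately.

    # Hypothetical numbers, purely for illustration.
    array_lun_limit = 2048   # vendor's stated maximum LUNs per array
    luns_needed     = 400    # distinct LUNs we actually want to present
    ports_per_lun   = 4      # each LUN presented through four storage ports

    # Model 1: a LUN counts once no matter how many ports present it.
    per_lun_count  = luns_needed
    # Model 2: every port presentation counts against the limit.
    per_path_count = luns_needed * ports_per_lun

    print("Per-LUN counting:  %4d of %d consumed" % (per_lun_count, array_lun_limit))
    print("Per-path counting: %4d of %d consumed" % (per_path_count, array_lun_limit))

Same 400 LUNs either way, but under the second counting model you have already burned through more than three quarters of the array’s stated limit.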

3. Connectivity
a. What are current connectivity options? (4Gb/8Gb FC, 1Gb/10Gb iSCSI, FCoE, NAS, etc.)
b. Minimum and maximum number of host-facing ports, and of what type?
c. Can connectivity be easily upgraded to future technology?
d. Any near-term plans for more connectivity options or speeds?
e. Do host ports need to be dedicated to remote replication?

Virtualized Networks – HP or Cisco?

I’m working on a server consolidation and technology refresh project, which is pretty exciting. We are taking dozens of legacy rack-mount servers and consolidating them onto blade servers running VMware ESX Server. Features like VMotion and Site Recovery Manager will likely make it into the architecture as well.

This week I was doing a lot of research into various blade components and how best to integrate them with VMware and the SAN/network infrastructure. While I haven’t settled on any particular architecture, I thought I’d share some thoughts and links on the matter. The leading blade server is the HP BladeSystem, IMHO. IBM BladeCenter comes in a close second, but given our good experience with ProLiant servers and HP’s push into 10-Gigabit Ethernet, staying with HP wasn’t a hard decision.

The major question I’ve been struggling with this week is how best to manage the Ethernet network in an ESX environment. The network team is complaining that with the current architecture they have little to no visibility into the VMs and their network configuration. It’s up to the server and VMware guys to do the configuration, which may or may not be what the network team is expecting. In addition, VMotion throws a wrench in the works, since you want network settings such as QoS and VLANs to move with the VM between physical hosts with no manual intervention.
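To make that VMotion point concrete, here is a minimal Python sketch of the kind of consistency check involved. The host names, port group names, and VLAN IDs are made up, and in practice you would collect this data from vCenter or from each host rather than hard-coding it; the idea is simply that every host a VM can move to has to present the same port groups with the same VLAN IDs.

    # Hypothetical port-group/VLAN inventory per ESX host. In a real environment
    # this would be collected from vCenter or from each host, not hard-coded.
    host_port_groups = {
        "esx01": {"VM Network": 0, "Prod-Web": 100, "Prod-DB": 200},
        "esx02": {"VM Network": 0, "Prod-Web": 100, "Prod-DB": 201},  # VLAN mismatch
        "esx03": {"VM Network": 0, "Prod-Web": 100},                  # port group missing
    }

    def audit(hosts):
        # Flag any port group that is missing on a host or carries a different VLAN ID.
        all_port_groups = set()
        for port_groups in hosts.values():
            all_port_groups.update(port_groups)
        for pg in sorted(all_port_groups):
            vlans = dict((host, port_groups.get(pg)) for host, port_groups in hosts.items())
            if len(set(vlans.values())) > 1:
                print("Port group %r is inconsistent across hosts: %s" % (pg, vlans))

    audit(host_port_groups)

If a check like this fails, a VM that VMotions to the wrong host either loses connectivity or lands on the wrong VLAN, which is exactly the kind of drift the network team worries about when the server side owns the vSwitch configuration.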

HP offers its own solution to this problem with its FlexConnect architecture, which fully supports end-to-end 10-Gigabit Ethernet. You can read more about it here. HP also has a FlexConnect vs. Nexus whitepaper here (requires free HP Blade Connect membership). In addition, there’s an HP FlexConnect cookbook that walks through many scenarios for configuring your Cisco switches to support various setups here (requires free HP Blade Connect membership). The complexity I saw in the cookbook was a bit of a turn-off, so I’m continuing to delve deeper into the Cisco Nexus architecture. For a blog that’s more Cisco-oriented and has great technical information, check out this link.

As I dive deeper into the pros and cons of each approach, I’ll post follow-up entries.