Archives for September 2009

HP BladeSystem Power Options – How to choose?

If you haven’t purchased the HP BladeSystem before, the power options may be a bit confusing. In particular, if you want to buy the large C-7000 chassis and want to minimize the number of outlets, you have a variety of options.

Each C-7000 chassis, when fully populated with power supplies, needs SIX power inputs, which would typically mean SIX outlets. Six?!? Yes, it’s power hungry. But to help reduce the cable mess and the expense of installing six high-current outlets, you can purchase one or more PDUs. PDUs (power distribution units, AKA honking big power strips) are also great for racks with two or three C-7000 chassis in them, so you don’t need 12 or 18 outlets.

What are your options? First, I’d plan for the worst case, which is two or three C-7000 chassis in a single rack. To know approximately how much power each chassis will require, use the HP BladeSystem power sizer. Second, check out the available PDUs on HP’s web site. For their honking large PDUs, see this link. For the BladeSystem you only need the ‘core’ PDUs, since the chassis plugs directly into the core PDU devices.

For redundancy, you should double the number of PDUs needed to support your given rack load. You can then connect three of the chassis power inputs to one PDU and three to the other (or two and two, depending on your load and the PDU selected).

Each of the HP PDUs has a kVA rating; consult the results of the HP BladeSystem sizer tool for the total kVA you need. Armed with that number, contact your facilities person and ask what kind of power is available. Single phase? High voltage? Three phase? How many amps per circuit? Does your facility have dual power feeds or dual UPSes? Their answers will likely determine which PDU model you need to purchase.

For redundant PDUs, here are some quick rules of thumb:

— Two 24A three-phase PDUs can support one chassis
— Two 40A three-phase PDUs can support two chassis
— Two 60A three-phase PDUs can support three chassis
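
To make the sizing arithmetic concrete, here is a minimal Python sketch of the N+N math behind those rules of thumb. Every number in it is a placeholder; substitute the kVA figure reported by the HP BladeSystem power sizer and the derated rating on the spec sheet for the PDU you actually intend to buy.

```python
import math

# Placeholder inputs -- replace with your sizer output and PDU spec sheet values.
kva_per_chassis = 5.5    # worst-case draw for one fully loaded C-7000 (example only)
chassis_per_rack = 3     # plan for the worst case
pdu_rating_kva = 17.3    # e.g. 208 V x 60 A x sqrt(3) x 0.8 derating / 1000 (example only)

total_kva = kva_per_chassis * chassis_per_rack

# PDUs needed to carry the whole rack on their own...
pdus_for_load = math.ceil(total_kva / pdu_rating_kva)

# ...then doubled so either power feed can carry the rack alone (N+N redundancy).
redundant_pdus = pdus_for_load * 2

print(f"Rack load: {total_kva:.1f} kVA")
print(f"PDUs required with N+N redundancy: {redundant_pdus}")
```

With these example numbers the math lands on two 60A three-phase PDUs for a three-chassis rack, which matches the last rule of thumb above, but the sizer output for your actual blade and mezzanine configuration is the figure that matters.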

One critical fact to be aware of is that the three-phase PDUs have single-phase outputs. So when you purchase your chassis, order the single-phase C-7000 power option to go with your three-phase PDUs. I’d stay away from the three-phase chassis power option; going with three-phase PDUs and a single-phase chassis power module minimizes the number of wall outlets.

Cisco Nexus 1000v Documentation

Cisco has a good set of documentation for the Nexus 1000v. I’ve provided the direct links below. The compatibility guide seems to be frequently updated, so always be sure to download the latest version prior to deployment. For a good overview of the 1000v and all of the components, review the deployment guide.

General Information
Cisco Nexus 1000V Release Notes, Release 4.0

Compatibility
Cisco Nexus 1000V and VMware Compatibility Information, Release 4.0

Deployment
Cisco Nexus 1000V Series Switches Deployment Guide

Install and Upgrade
Cisco Nexus 1000V Software Installation Guide, Release 4.0
Cisco Nexus 1000V Virtual Ethernet Module Software Installation Guide, Release 4.0

Configuration Guides
Cisco Nexus 1000V License Configuration Guide, Release 4.0
Cisco Nexus 1000V Getting Started Guide, Release 4.0
Cisco Nexus 1000V Interface Configuration Guide, Release 4.0
Cisco Nexus 1000V Layer 2 Switching Configuration Guide, Release 4.0
Cisco Nexus 1000V Port Profile Configuration Guide, Release 4.0
Cisco Nexus 1000V Quality of Service Configuration Guide, Release 4.0
Cisco Nexus 1000V Security Configuration Guide, Release 4.0
Cisco Nexus 1000V System Management Configuration Guide, Release 4.0
Cisco Nexus 1000V High Availability and Redundancy Reference, Release 4.0

Reference Guides
Cisco Nexus 1000V Command Reference, Release 4.0
Cisco Nexus 1000V MIB Quick Reference

Troubleshooting and Alerts
Cisco Nexus 1000V Troubleshooting Guide, Release 4.0
Cisco Nexus 1000V Password Recovery Guide
Cisco NX-OS System Messages Reference

If you are starting out on your Nexus 1000v adventure, these documents should be required reading. At first the whole concept of a virtual switch running on a hypervisor may seem complex and a bit daunting. However, once you wrap your head around the various pieces, it actually makes a lot of sense and may not be as complicated as you think.

Purple Screen of death with ESXi 4.0

Today I burned the HP version of ESXi 4.0 to DVD, since I wanted to test the exact same version at home that we will be using at work. My home computer is a generic Asus computer, but I figured the extra HP CIM providers wouldn’t hurt. I got as far as fully installing ESXi 4.0 and booting into the environment.

After a couple of minutes, my computer would crash with a purple screen of death. One of the key strings I found on the screen was smbios_get_64bit_cru_info. A little Googling showed this was an HP watchdog timer tied to the iLO2 ASIC.

Apparently the HP version of ESXi relies on a heartbeat from the iLO2 ASIC to ensure the server is still alive. Since my system doesn’t have an iLO2, it would crash when the watchdog timer expired.

I don’t know if the Dell and IBM versions have similar watchdog timers, but unless you have the matching vendor hardware, stick with the generic ESXi installation. Seems pretty obvious, but I was trying to save DVD media and a download.

Making vCenter or vSphere client work on Windows 7 and Server 2008 R2

Unfortunately, at this time VMware does not support the vSphere client or vCenter on Windows 7 or Server 2008 R2. If you try to launch the client you will get:

Error parsing the server “SERVER IP” “clients.xml” file. Login will continue, contact your system administrator

There are many threads over on the VMware communities about this problem, and there’s a workaround which I’ve been using successfully. More recently, a member wrote a PowerShell script that automates the ‘hack’.

Check out this blog post to download the script. I have no idea when VMware will release an update to fix the problem, but I hope it’s soon. I would guess it will come with official support for Windows 7 and Server 2008 R2, and since MS will officially launch them next month, official support is probably not too far off.

Cost effective VDI for remote offices

During VMworld I attended a good session on VDI (virtual desktop infrastructure) for remote users, presented by HP. To support ROBOs (remote offices/branch offices), there are two basic architectures.

The first is a centralized approach, where one or two datacenters host all the required servers and storage to support all VDI users. This is great for centralized administration, protecting data inside the datacenter, and minimizing costs by pooling servers and storage. However, users are then dependent on the WAN, and bandwidth/latency/jitter can be big factors. If the ROBO has POS (point of sale) or other critical users, it’s totally unacceptable to be dependent on the WAN. In addition, existing VDI technology is generally not as robust across the WAN and has limited support for 3D graphics, video, CAD/CAM, etc.

The second approach is a distributed model where each ROBO has its own self-sufficient set of servers and storage, which is not dependent on the WAN. One major barrier to this deployment is the cost of shared storage if you want to use any of the availability features of ESX such as vMotion, HA, FT, DRS, DPM, etc. Even a small 10TB iSCSI array can cost $35K or more, which is not cost effective to support a handful of users.

To address the shared storage cost, the speaker presented a novel idea. HP’s LeftHand iSCSI array can be purchased as a virtual appliance, and the list price for the virtual appliance is less than $5K. What is the LeftHand iSCSI VSA? It’s a VM which runs on top of ESX and turns local non-shared disk storage into shared iSCSI storage. You can cluster the VSAs across multiple physical hosts for increased performance and redundancy. LeftHand is also compatible with ESX SRM, and supports WAN data replication, snapshots, and thin provisioning.

With this architecture, a small ROBO could have two or three ESX hosts, clustered VSAs, and fully support vMotion, HA, FT, DRS, and DPM for all VDI users. For disaster recovery, you could also implement SRM.

For small remote offices, this approach could be very appealing and really cut down on the cost per user of VDI. Since the VDI servers are local to the user, you would get the best performance and richest desktop experience. Rack space and power requirements are also reduced, since no external storage array is needed.

Shopping for an enterprise array? Let’s play 20 questions, Part 2

Here is the second batch of enterprise storage array questions you can ask prospective vendors. Of course depending on your specific requirements, you will likely have additional questions, and some in this list may not be relevant.

Array Partitioning (Virtual Arrays)

Note: Array partitioning is a fairly rare feature that lets you chop up an array into smaller virtual arrays for the purpose of delegated administration. This could be useful for multi-tenant arrays where various business units have data stored on the same physical array but business policies require that separate administrators manage their ‘slice’ of the array. Administrator “A” could be prohibited from even seeing LUNs or storage ports assigned to business unit “B”. Healthcare, financial, and government industries come to mind.

1. Does the array support any type of array administrative ‘partitioning’?
2. How many partitions are supported, and what can you scope (physical disks, host ports, LUNs, general array config changes, replication sets, etc.)?
3. Does it support role based administration for each partition?

WAN Data Replication

1. What type of data replication options does the array support? (Synchronous, asynchronous, periodic, etc.)
2. If 1K of data is changed, what is the smallest increment of data that is replicated?
3. Does the array have any replication de-duplication or WAN optimization built-in to minimize bandwidth usage?
4. Does the array support 1 to N, N to 1, or multi-hop data replication?
5. What protocols can be used for data replication (iSCSI, FC, FCP, etc.)?
6. If you reverse the replication roles, such as recovering from a disaster and failing back to the primary data center, is a full re-sync needed or can the array do delta syncs?
7. If the WAN link goes down between replication partners, how are incoming writes handled? Are they written to a journal, is write order preserved?
8. If a journal is used, is it stored in memory or on disk?
9. What happens if the journal wraps around?
10. Certified for use with any WAN optimizers (Riverbed, Cisco, etc.)?
11. How many replication consistency groups are supported per array and how many LUNs in each group?
12. Is concurrent bi-directional replication supported?
13. Can a volume be replicated and snapshotted concurrently?
14. Can a read-write snapshot be replicated?
15. Can thin volumes be replicated?
16. What is the maximum number of mirror pairs?
17. Can mirror pairs use different RAID types? (Replicate RAID-1 volume to a RAID-5 volume)

RAID and Drive Intermix Support

1. Are there restrictions on mixing RAID levels within the array?
2. Are there restrictions on mixing drive types within the array? (FC, SATA, FATA, SAS, SSD)
3. What are the RAID types and disk group options? (RAID 1, 2+2, etc.)
4. Can you limit RAID groups to a specific set of physical disks?
5. Are any array functions limited during a failed drive rebuild?
6. Does the array use traditional RAID disk groups for storing data, or does it use virtual RAID where blocks of data are scattered throughout the array?
7. Can the array wide-stripe a single LUN over dozens or hundreds of drives?
8. Does wide striping require the use of meta-LUNs or manual configuration?
9. If SATA drives are used, is any type of read after write required for data verification?

Data Integrity and Security

1. Does the array calculate end-to-end checksums for all data to verify integrity?
2. Does the array store any checksum data with each block that is verified on read/write?
3. Does the array have any type of data at rest encryption?
4. Background disk scrubbing/verification of data?
5. Data shredding/secure delete conforming to DoD standards?
6. Support SMART enabled disks?
7. Is the array compatible with NetApp Decru or Brocade SAN encryption devices?
8. Does the array support SSH for remote CLI control?
9. Has the array or software been common criteria certified?
10. Does the array have any FIPS certifications?

Installation and Administration

1. Can the customer install, configure, and perform routine maintenance on the array?
2. Under what circumstances must a factory technician perform maintenance?
3. How much professional services time is recommended for a typical installation?
4. Can all configuration be done via a GUI, or is command line/scripting needed?
5. Can the customer place the array into their own rack?
6. Can other equipment co-exist in your racks? (SAN switches, tape library, etc.)
7. Do you ever need to manually balance LUNs, host port assignment, controller ports, disk groups, etc. to ensure optimal and even performance?
8. Is there any support for ‘dark sites’ where a customer must do the install and all maintenance?
9. If the array is not connected to the internet, is there a stand-alone facility to alert operators on array failures or predicted failures?
10. Can the customer retain failed hard drives, and how does this impact the warranty?

Availability

1. Does the array support hot firmware updates?
2. What type of interruption is there during hot firmware updates? (ports taken temporarily offline, controller reboot, etc.)
3. Is data distributed such that an entire shelf/magazine failure would not cause data loss?
4. Target hardware availability (how many 9s)?
5. What type of internal redundancy is there?
6. What components can be replaced hot?

VMware ESX Integration

1. Does the array have any VMware ESX-specific integration software or features, such as a vCenter snap-in?
2. Does the array support application aware (VSS) snapshot integration with VMware VMs?
3. Is the array certified for VMware SRM?
4. Any roadmap for value added features to vStorage?
5. If the array is ALUA, does it support ALUA round robin access with vSphere?
6. Are there any array specific vSphere multi-pathing add-ons or enhancements?
7. Does the array support NPIV?

Windows Server Integration

1. Hardware VSS provider (Server 2003/2008 x64)?
2. MPIO driver (Server 2003/2008 x64)?
3. When will Server 2008 R2 be supported?
4. Does all of the management software support being installed on Server 2008 x64?
5. Any type of advanced application recovery for SQL 2005/2008 and Exchange 2007?
6. Management packs for MS Operations Manager 2007 to support alerting on array hardware faults or performance warnings?
7. Does your management software support Windows AD authentication and support Windows groups for role based access?
8. Powershell interface?

Miscellaneous

1. Support SMI-S? If so, what version?
2. Upgrade path to next model? (forklift, in place, non-disruptive, etc.)
3. Any deep integration with a particular backup vendor? (Netbackup, CommVault, etc.)
4. IPv6 support for iSCSI or management?
5. What version of SNMP is supported?
6. If there’s an integrated service console, what OS does it run?
7. What type of power and outlets does the array require?
8. Built-in data deduplication?
9. Does the array support virtualizing external or third-party arrays?
10. Are there yearly software or hardware maintenance fees?
11. What are software licenses based on? (Capacity, per array, etc.)
12. Are there performance monitoring and alerting tools?

That wraps up my list of questions you can grill your storage vendor with. On the surface many arrays will appear to have the same or similar features, but when you start to peel back the onion and dig deep into the array, you will often discover limitations. It’s important to discover these limits early on so you can decide which ones are show stoppers and which ones you can live with.

Shopping for an enterprise array? Let’s play 20 questions, Part 1

Enterprise-class storage arrays are technological marvels, and thus are the most expensive storage on the planet. Think you can slap in a 1TB drive for $200? NOT! Think $20K or more per TB. But what do you get for that $20K? Well, that’s a bit trickier, and it’s where you can end up with an array whose hidden limitations cost you down the road in both CAPEX and OPEX.

Over the years of working on enterprise storage projects, I’ve developed a long list of questions that I pose to candidate vendors, both to screen them for basic features and to build a lengthy set of evaluation criteria for judging each array’s feature set. Earlier this year I blogged some of my questions, but got too busy to post the whole list. I’ve since revised the questions, and I’ll try to get all of them posted in a few installments.

If you are doing virtualization (and who isn’t?), then performance, capacity, ease of use, and support from your hypervisor are key. For VMware ESX users, a true active/active concurrent controller design can be a major performance boost.

Controller Design

1. What is the controller architecture? (Active/active concurrent, Active/active non-concurrent, Active/passive)
2. What is the controller to controller connectivity? (Full cross-bar switch, full mesh, point to point, etc.)
3. Minimum and maximum number of controllers?
4. Do the controllers feature any proprietary hardware or ASICs?
5. Maximum number of Windows/Linux/ESX hosts?
6. Cache size and is it mirrored and ECC protected? Separate control and data caches?
7. Minimum/maximum number of disks?
8. Maximum storage capacity?
9. Disk capacity, interface speed, spindle speed, and types (FC, SATA, SAS, FATA, SSD)?
10. How is cache memory protected in case of power failure and for how long?

LUN Management

1. Minimum/maximum LUN size?
2. Maximum number of base LUNs and presented LUNs?
3. Non-disruptive dynamic LUN expansion/shrink?
4. Is LUN concatenation required for LUN expansion or large LUNs?
5. Maximum LUNs per storage controller host facing port?

Connectivity

1. What are current host connectivity options? (4Gb/8Gb FC, 1Gb/10Gb iSCSI, FCoE, etc.)
2. Base number and maximum number of host facing ports and type?
3. Base number and maximum number of back-end ports and type?
4. Can connectivity be easily upgraded by the end user to future technology?
5. Can you hot-add controller ports? Must a controller be taken offline?
6. Any near-term plans for more connectivity options or speeds?
7. Do host ports need to be dedicated to remote replication?
8. Does the array support NFS? What version?

Snapshots

1. What is the maximum number of read-only and read-write snapshots per LUN?
2. Total snapshots per array?
3. Is a separate pool (reserve) of disk space required for snapshot use?
4. Are snapshots space efficient (copy on write)?
5. Does the array support snapshot consistency groups?
6. Max LUNs per consistency group and groups per array?
7. Can a snapshot use a different RAID type than its source volume (for example, snapshot a RAID-1 volume to RAID-5)?
8. Can you schedule snapshots? GUI or CLI?
9. Can snapshots be set to automatically expire? GUI or CLI?
10. Can a snapshot have a read-write snapshot?

Thin Provisioning

1. Does the array support thin provisioning?
2. Will the array reclaim previously allocated but now unused space? If so, is it enabled with vSphere?
3. Can a ‘fully provisioned’ LUN be converted to a thin LUN?

Performance/QoS

1. Does the array differentiate between various performance ‘zones’ of a disk, such as outer tracks and inner tracks?
2. What performance tiers can you create? (RAID1, 15K FC disks, outer tracks vs. RAID5, SATA, inner tracks, etc.)
3. Can LUNs be migrated non-disruptively between tiers?
4. Is there any automated data progression between performance and protection tiers?
5. If so, is this at the LUN level or more granular such as block level? For example, could a single LUN have some blocks on high performance disks while other less used blocks are on nearline disks?
6. Can you assign LUN performance parameters based on workload (Exchange, SQL, VMware, etc.)?
7. Can you partition the cache or set any type of QoS?

Whew! Look for another installment or two to wrap up the list of potential questions you can ask your array vendor.

Geekdom with VMware Storage – Know thy array!

Today I attended a killer session at VMworld 2009 on getting the most performance out of your disk array with VMware. After the session, my head was spinning with all the new information and the realization that optimizing your vSphere storage can be non-trivial. If you attended VMworld, please listen to session TA2467. Understanding VMware storage concepts is critical.

The speaker covered all supported network storage protocols, including Fibre Channel, iSCSI, and NFS. The primary root cause of poor performance in a virtualized environment is poor storage performance. VMware vSphere 4.0 has massive improvements in the storage stack for all protocols. However, it’s NOT a simple plug-and-play matter if you really want to optimize your environment. Both iSCSI and Fibre Channel require low-level knowledge of ESX and of your storage array to get the most out of it.

Each protocol and each version of ESX have very specific requirements and storage multi-pathing implementations that you MUST understand. Many concerns are array specific, so get with your storage vendor and read their whitepapers on best practices for VMware. VMware has its own set of whitepapers, which should be required reading prior to any deployment. You can read their Fibre Channel configuration guide here. Also, triple-check that your exact storage configuration is supported in the VMware storage compatibility guide. Many customer problems can be traced back to running in an unsupported configuration.

In vSphere 4.0 VMware added native multi-pathing, but even with this major enhancement you still need to take a deep dive into your array to understand how it handles LUNs and load balancing, and what you need to do to tweak settings. For example, one critical feature of your array that you must understand is its controller architecture. Is it active-active concurrent, active-active non-concurrent, active-passive, etc.? If you only ‘know’ your array is active-active, it’s vital to know whether it’s concurrent or non-concurrent. Only A/A concurrent is active-active in the eyes of vSphere.

How do you tell, and what’s the difference? First, ask your disk vendor if you aren’t sure. Second, look at how much you spent on the array. 🙂 If you mortgaged your business to buy the array and got an EMC Symmetrix or HDS USP, you have A/A concurrent controllers. Or, if you did your market research and found other storage vendors like 3PAR or Compellent, you also have A/A concurrent controllers but didn’t die of sticker shock from the price tag. If you went the ‘safe’ mid-range route and bought something like the HP EVA, EMC Clariion, or any NetApp, then you don’t have concurrent controllers and are active/passive in vSphere speak.

If you are lucky enough to have an A/A concurrent controller, all is still not well out of the box. The default pathing mode is fixed path, which doesn’t make the most of your array. You can change the mode to round robin, but even that simple change isn’t optimal. By default round robin doesn’t alternate single I/Os down each path; instead it sends 1,000 I/Os down one path before switching to the next, which can leave the paths unevenly loaded. If your storage vendor concurs, changing this value to ‘1’ will evenly distribute the load among all paths. But check with your array vendor before changing anything!
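
For reference, on ESX/ESXi 4.0 this tuning was done per device from the host CLI. The commands below are only a hedged sketch of that procedure: the device ID is a placeholder, the exact syntax can vary between builds, and you should only run them if your array vendor has blessed the change.

```
# Show each device with its current SATP and path selection policy
esxcli nmp device list

# Switch a device to round robin if it is not already using it
# (naa.600601601234 is a placeholder device ID -- use your own)
esxcli nmp device setpolicy --device naa.600601601234 --psp VMW_PSP_RR

# Rotate paths after every I/O instead of the default 1000
esxcli nmp roundrobin setconfig --device naa.600601601234 --type iops --iops 1
```

Repeat for each LUN, and capture before/after numbers (esxtop works fine) so you can prove the change actually helped in your environment.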

According to EMC, even the native round robin feature in vSphere leaves something to be desired. So they wrote PowerPath VE, which replaces the native vSphere multi-pathing for the LUNs it manages. News to me was that PowerPath VE works with many non-EMC arrays as well as EMC arrays. EMC claims that PowerPath VE can increase storage performance 30% to 300% over native MPIO. They have a 45-day trial version you can download to measure how much it may help you. Can’t hurt to try, as long as your array is on their compatibility list.

iSCSI has an entirely different set of issues, which I may tackle in another post. If you want to check out the blog of the session’s presenter, you can see it here. The session was jam-packed with information, and this post only covers a tiny piece of his fire hose of information.

NxTop client hypervisor is here!

In a previous blog post I mentioned that VMware is working on a client hypervisor called the Client Virtualization Platform (CVP). That product is not yet released, but I found a vendor on the showroom floor whose product (NxTop) IS shipping as a client hypervisor TODAY. In fact, they have numerous defense customers like Raytheon, Lockheed Martin, the Air Force, and others. Apparently the Air Force is really liking the solution.

The company is called Virtual Computer, and it’s a new company, not even two years old. Their product, NxTop, addresses the full endpoint computing problem: OS deployment, policy control of devices, storing and syncing user data, OS patch management, and data encryption.

Today they support wireless, USB devices, smart cards, and many other peripherals. Unlike VMware’s CVP, NxTop does NOT require Intel vPro, allowing it to run on a wider range of hardware. They don’t support 3D graphics today, but in Q1 of 2010 they plan to add full 3D support so you can use Aero Glass or other graphics-intensive applications. They claim organizations can install NxTop and be up and running in less than an hour.

Check them out if you are in the market for a client hypervisor to make managing your desktop environment easier and more secure.

Tiered storage thoughts for vSphere

During one of today’s sessions on storage sprawl, the speaker recommended a three-tiered storage architecture. He suggested:

Tier 3: OS images; SATA storage
Tier 2: General application data and binaries; FC storage
Tier 1: Page files and databases; FC storage

His rationale was that once an OS is booted, very few I/Os occur on the OS volume, so it can easily live on SATA storage. Tiers one and two require more IOPS, and thus probably need higher-speed and more expensive FC storage. If you use SRM, that will likely impact your tier design as it has some restrictions on datastore layout. At the very least, his three-tier architecture is something to consider.
