VMworld 2012: vSphere 5 Storage Best Practices INF-STO2980

Speaker: Chad Sakac, Vaughn Stewart

This was a really great session! It was one of the two best sessions I caught on the last day of VMworld.

  • Each protocol has different configuration considerations
  • Majority of customers use block protocols (iSCSI, FC, FCoE)
  • NetApp: AutoSupport data shows usage split roughly 50% NFS, 50% block
  • Best flexibility comes from a combination of VMFS and NFS
  • Key Point 1: Leverage Key documents
    • VMware Technical Resource Center
    • FC SAN Config Guide
    • iSCSI SAN Config Guide
    • Best practices for NFS storage
    • Key partner documents – Best practices
  • Key Point 2: Setup Multipathing Right
    • vSphere Pluggable Storage Architecture (PSA) has several components (NMP, SATP, PSP, MPP)
    • Don’t be inclined to change the defaults; doing so makes the design more complicated and adds risk. Don’t change the claim rules or I/O defaults.
    • PSP – Path Selection Policy
      • Fixed – Used commonly on active-active arrays
      • MRU – Default for many active-passive arrays
      • Round Robin – Default in vSphere 5.1 for EMC VNX/VMAX.
    • MPP – Multipathing Plug-in (third-party; replaces the native NMP for the devices it claims)
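The session's advice is to leave the PSA defaults alone, but it helps to know how to inspect them. A minimal sketch from the ESXi 5.x shell — the `naa.*` device ID below is a placeholder for one of your own LUNs:

```shell
# List the SATPs ESXi knows about, with their default PSPs
esxcli storage nmp satp list

# List the available Path Selection Policies (Fixed, MRU, Round Robin)
esxcli storage nmp psp list

# Show which SATP/PSP each device was claimed with
esxcli storage nmp device list

# Only if your array vendor explicitly documents it: change the PSP
# for a single device (the naa.* ID is a placeholder)
esxcli storage nmp device set --device naa.6006016012345678 --psp VMW_PSP_RR
```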
  • ALUA – Asymmetric Logical Unit Access. Common on mid-range arrays like NetApp and EMC VNX, and many other brands. Not true active/active for all paths and all LUNs.
    • Active – Optimized
    • Active – Non-optimized
    • Standby
    • Dead – APD (All Paths Down) – Target/array totally dead
    • “Gone away” – PDL (Permanent Device Loss) – Can reach the array, but a device such as a LUN went away
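On an ALUA array you can see these per-path states from the ESXi shell; a sketch (the device ID is a placeholder):

```shell
# Per-path state for one device; on an ALUA array the Group State
# field reads "active" (optimized) or "active unoptimized"
esxcli storage nmp path list --device naa.6006016012345678

# Broader view of every path and its state (active, standby, dead)
esxcli storage core path list
```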
  • Multi-Pathing with NFS
    • Significantly different multi-pathing architecture than block protocols
    • NFSv3 has a very basic understanding of multi-pathing
    • Must rely on switching technology for link aggregation
    • Single TCP connection from the ESXi host for data and control information
    • Active/passive pathing today, until a future release of vSphere adds NFSv4
    • Use vendor specific vCenter plug-ins to enhance NFS support/configuration
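The vendor plug-ins automate this, but the underlying mount is simple; a sketch from the ESXi shell (the server name and export path are placeholders):

```shell
# Mount an NFSv3 export as a datastore; ESXi opens a single TCP
# connection to this one server address for the datastore
esxcli storage nfs add --host nfs-filer01 --share /vol/vmware_ds1 --volume-name vmware_ds1

# Confirm the mount
esxcli storage nfs list
```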
  • Microsoft Cluster Service
    • Unsupported Storage Configuration – FCoE, iSCSI, NFS, Round Robin PSP, NPIV
    • Vendor support: 3rd party MPPs or Guest connected storage
    • Use iSCSI in guest – works very, very well (storage partners support this)
    • vSphere 5.1 has expanded support – up to 5-node cluster support
  • NFS Best practices
    • Use vCenter plug-ins, always! Automates configuration and tweaks
    • You can now use FQDNs for the NFS server and it will work
    • NetApp Cluster-Mode requires one IP per datastore
  • Jumbo Frames?
    • Recommendation is to NOT use jumbo frames. Adds more complexity, and the performance increase is very marginal.
    • Stick with standard frame sizes (MTU 1500).
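To confirm you are running standard frames end-to-end, check the MTU on the vSwitch and the VMkernel interfaces; a sketch:

```shell
# vSwitch MTU (standard Ethernet is 1500; jumbo would show 9000)
esxcli network vswitch standard list

# Per-VMkernel-interface MTU
esxcli network ip interface list
```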
  • Optimize I/O
    • Misalignment of filesystems results in additional work on the storage controller to satisfy I/O requests
    • Affects VMFS and NFS datastores
    • Align the guest partitions! Automated in fresh installs of Windows Server 2008/Win7 and later
    • Linux will likely not align partitions. Must manually align partitions.
    • EMC UBERAlign – Free tool
    • NetApp – Data ONTAP 8.1.1 – VSC plug-in Migrate and Optimize
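The alignment check itself is simple arithmetic: a partition is aligned when its starting byte offset divides evenly by the array's block size. A small sketch — the 63-sector start is the classic misaligned pre-Windows-2008 MBR default, and the 4 KiB block size is an assumption (check your array):

```shell
#!/bin/sh
# Classic misaligned layout: first partition starts at sector 63
start_sector=63
sector_size=512     # bytes per sector
align_bytes=4096    # assumed array block size (4 KiB) -- verify for your array

offset=$(( start_sector * sector_size ))   # 63 * 512 = 32256 bytes
if [ $(( offset % align_bytes )) -eq 0 ]; then
  result="aligned"
else
  result="misaligned"
fi
echo "partition starts at byte $offset: $result"
```

Windows Server 2008 and newer start the first partition at 1 MiB (sector 2048), which divides evenly by any common array block size.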
  • Leverage Plug-Ins (VAAI and VASA)
    • 5.1 changes: vCenter Client plug-ins, NFS assists, block assists. See Chad’s Blog post
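To verify the VAAI offloads are actually in effect on a device, a sketch (the device ID is a placeholder):

```shell
# Shows ATS, Clone, Zero, and Delete primitive status per device
esxcli storage core device vaai status get --device naa.6006016012345678

# Overall device list, including the hardware-acceleration status field
esxcli storage core device list
```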
  • Keep It Simple
    • Use large capacity datastores
    • Avoid extents
    • Avoid RDMs
    • Array end – Use thin volumes and LUNs
  • Storage DRS
    • Use it! Even if in manual mode, it will make recommendations
