DBI328: Building the Fastest SQL Servers

Brent Ozar, Microsoft Certified Solutions Master (MCSM), www.brentozar.com

This was a REALLY great session that was both practical and filled with great technical details and good takeaway information. #1 takeaway is “TempDB is like a public toilet: You never know what’s in there.” LOL Clearly Brent Ozar knows his stuff, and has a ton of resources on his web site. This session was focused on building the fastest SQL Server possible, and making it easy. If you are a DBA or run SQL in your environment (and who doesn’t if you are a MS shop), even if you don’t need blazing speed, this session had a lot of good sizing and performance tips. Below are several links to additional resources on his site:

SQL Server Setup Checklist
SQL Perfmon Counters
SQL Virtualization Best Practices
SQL IO Performance Testing
SQL 2012 Data warehouse reference design

He was flying through the slides and they were packed with content, so I didn’t get everything down. If his session is posted on Channel 9, check it out. His slides weren’t posted when I wrote this, so I can’t fill in the missing details. Even if you aren’t using the reference hardware in some of the links below (like HP and Dell) it still gives you great sizing and performance data you can translate to your own hardware.

  • How Microsoft Designs SQL Server Appliances
    • Systematically review thousands of SQL servers
    • Distill use cases down to a few common patterns
    • Choose HW components that are very likely to work great for those patterns
    • Publish an incredibly detailed test checklist to make sure the hardware is working as designed
  • Session Agenda
    • Define common SQL server use patterns
    • Understand the right hardware for a pattern
    • Recognize the server designs we can reuse
    • Learn how to test our own hardware
  • Define common SQL server use patterns
    • OLTP: Transactional Processing
      • How it’s accessed: constant small inserts and updates, with users expecting instant responses
    • Data warehousing
      • Loaded in short windows overnight, then read-only with big reads through the day for reports
      • Just a few tables, but many historical records in each table, and often over 1TB of data
      • 10 years of sales history, stock prices, patient history, etc.
    • The real difference: OLTP (batches of user requests per second) wants queries to finish instantly; a data warehouse can wait a bit longer (say 30 seconds)
  • The Right Hardware for Pattern OLTP
    • Hardware at its simplest: 64GB memory, 100GB drives, CPU
    • OLTP scenario: 50GB table
    • Right hardware for the fastest OLTP
      • Instant queries = cache all data in memory
      • Minimize data size (drop extra indexes)
      • Wide stripe data across all drives in the array (even log files)
    • Instant transactions = blazing fast log file
      • SSD RAID10 for multiple databases
      • Can get away with dedicated RAID10 magnetic for single DB
    • Avoid locking issues = blazing fast TempDB with RCSI
      • Read committed snapshot isolation
      • Bing: “TempDB SGAM contention” – use multiple data files for TempDB, 1/8 to 1/4 the number of cores; e.g., 16 cores = 4-8 data files
      • DO count hyperthreads as cores for this calculation
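The locking-avoidance steps above can be sketched in T-SQL. The database name, file paths, and sizes below are placeholders for illustration, not from the session:

```sql
-- Enable read committed snapshot isolation (RCSI) on a user database.
-- "SalesDB" is a placeholder name; ROLLBACK IMMEDIATE kicks out open
-- transactions so the setting can take effect.
ALTER DATABASE SalesDB
SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- Add extra TempDB data files (example: a box getting 4 files total).
-- Size them identically so proportional fill spreads the load evenly.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 8GB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 8GB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev4, FILENAME = 'T:\tempdb4.ndf', SIZE = 8GB);
```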
    • I didn’t say blazing fast data drives
      • Once the data is in cache, data drive speed rarely matters
        • After a SQL Server restart the cache is cold, so expect slow performance until it warms back up
        • Aggressively monitor data size and memory size
        • When the data outgrows memory, things get ugly fast
      • Bottom line: Cache the whole DB in memory and not much else matters
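One way to check whether the “cache the whole DB” goal is actually being met is to count buffer pool pages per database (assuming SQL Server 2005 or later; this query is my sketch, not from the session):

```sql
-- How much of each database is currently cached in memory?
-- Each buffer page is 8KB, so pages * 8 / 1024 gives cached MB.
-- Compare cached_mb against the database's data size.
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024  AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;
```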
  • The Right Hardware for Pattern Data warehouse
    • Hardware at its simplest: 256GB memory, 1TB drives, CPU
    • Instant queries = Maximize memory size
    • Minimize data size (drop extra indexes, right-size fields)
    • See data warehouse links at the beginning of this article
    • Maximum CPU Core Consumption Rate (MCR): 200MB/sec per core is a good rule of thumb
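The MCR rule of thumb translates into a storage throughput target of roughly cores × 200MB/sec. A quick sketch using a real DMV (the 200 figure is the rule of thumb above, not a measured value):

```sql
-- Rough storage-throughput target for a data warehouse from the
-- MCR rule of thumb: logical CPU count * 200 MB/sec per core.
SELECT cpu_count,
       cpu_count * 200 AS target_mb_per_sec
FROM sys.dm_os_sys_info;
```

For example, a 16-core server would need storage that can feed roughly 3,200MB/sec of sequential reads to keep the CPUs busy.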
  • Reference Material
  • Test Storage Quickly with CrystalDiskMark
    • Pick 5 tests, 4000MB test file, drive letter
    • Only look at the sequential and 4K QD32 (queue depth) results
    • Sequential: Roughly akin to backups, large table scans
    • 4K QD32: Vaguely similar to active OLTP server or TempDB
    • MCR is most similar to Sequential read metric
  • Test Storage Slowly with SQLIO
    • See link at start of article for SQLIO tips from Brent
    • Lots of possible options – collect the whole set
    • Use a test file larger than your SAN’s cache (say 20GB)
    • Don’t run on a live server
    • Only look at these numbers from the output: IOs/Sec and MBs/sec (MCR)
    • Test drives of different sizes, but you don’t need to test every drive
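As an illustration only (flags per the SQLIO documentation; the path, duration, and thread counts here are placeholders I picked, not values from the session), a random-read test against a 20GB test file might look like:

```
sqlio -kR -t8 -s120 -o8 -b8 -frandom -BH -LS T:\sqliotest.dat
```

Here -kR means read test, -t8 eight threads, -s120 a 120-second run, -o8 eight outstanding I/Os per thread, -b8 8KB blocks, -frandom random access, -BH hardware buffering, and -LS latency timing. From the output, record only IOs/sec and MBs/sec as noted above.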
  • Your Goals
    • Test with CrystalDiskMark to get a quick idea
    • Try two simultaneous CrystalDiskMark tests against two different drive letters to see if your multipathing works
    • When that works, amp up to SQLIO and really push it
  • How to Reduce Storage Throughput Needs
    • Keep memory free for SQL server data caching
    • Merry-go-round scans with SQL server enterprise edition make a huge difference in storage performance and throughput
    • Give OS 10% of the total server memory, or 4GB, whichever is GREATER
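The 10%-or-4GB rule above maps to SQL Server’s max server memory setting. A sketch for a hypothetical 64GB server (all numbers here are my worked example, not from the session): 10% of 64GB is 6.4GB, which is greater than 4GB, so reserve 6.4GB for the OS and cap SQL Server at roughly 57.6GB (≈58982MB):

```sql
-- Reserve max(10% of RAM, 4GB) for the OS; give SQL Server the rest.
-- 64GB server: 65536MB - 6554MB (10%) = 58982MB for SQL Server.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 58982;
RECONFIGURE;
```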
  • Defined common SQL server use patterns
    • OLTP: I want the query to finish instantly
    • DW: I want the query to finish in 30 seconds
  • Very important to capture an SQLIO performance baseline…EXTREMELY IMPORTANT
  • If you virtualize, only use one instance per VM.
