Adding a GUI Back to Windows Server Core

The other day I had occasion to add the Windows Server 2012 R2 GUI back to a Server Core installation. This was a test environment, and for what I was testing I felt the GUI provided a more streamlined experience. Server Core certainly has its place, and is great as a hypervisor, appliance, or in high-security environments. Installing the GUI, while not difficult, took quite a bit of Googling and trial and error to find a command that actually worked.

1. RDP into your Core install or use your server’s IPMI/VM console feature, and a command prompt should open. Type powershell.

2. From your original install ISO, copy the \sources\install.wim to your core server.

3. Type the following command and wait several minutes for the install to complete. Include the full path to where you copied your install.wim file.

Install-WindowsFeature -Name Server-Gui-Shell -IncludeManagementTools -Source wim:c:\install.wim:2

2014-10-15_14-05-02a

 

4. After the installation is complete, reboot the server. The reboot process will be quite slow, as it will be configuring the new features for several minutes. Be patient.

2014-10-15_13-55-02
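For reference, a few related commands from the same PowerShell session can make this less of a guessing game. This is a sketch: the wim path is wherever you copied the file, and the image index varies by edition (verify yours before installing):

```powershell
# See whether the GUI features are Installed, Available, or Removed
Get-WindowsFeature -Name Server-Gui-Shell, Server-Gui-Mgmt-Infra

# Verify which image index inside install.wim matches your edition,
# so you pass the right number to -Source (wim:c:\install.wim:<index>)
Get-WindowsImage -ImagePath C:\install.wim

# To return the server to Core later, remove the shell and reboot
Uninstall-WindowsFeature -Name Server-Gui-Shell -Restart
```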

SQL 2014 Always-on AG Pt. 11: File Share Witness

Now that we have our SQL AAG up and running, there’s still some configuration left to do. In this installment I cover SQL 2014 file share witness configuration. In my example I’m doing a 2-node AAG, which means that we need a file share witness to help establish quorum. If you have a NAS appliance, you can easily create a share on there and use it. In my case I’m assuming 100% Windows, so we will be using a third member server as our FSW. There’s nothing too special about this FSW, except for some permissions. Storage space requirements are minimal.

Blog Series

SQL 2014 Always-on AG Pt. 1: Introduction
SQL 2014 Always-on AG Pt. 2: VM Deployment
SQL 2014 Always-on AG Pt. 3: Service Accounts
SQL 2014 Always-on AG Pt. 4: Node A Install
SQL 2014 Always-On AG Pt. 5: Unattended Node B
SQL 2014 Always-on AG Pt. 6: Cluster Configuration
SQL 2014 Always-on AG Pt. 7: TempDB
SQL 2014 Always-on AG Pt. 8: Max Mem & Email
SQL 2014 Always-on AG Pt. 9: SQL Maintenance
SQL 2014 Always-On AG Pt. 10: AAG Setup
SQL 2014 Always-On AG Pt. 11: File Share Witness

Create File Share

1. On a WS2012/R2 member server (not either SQL server) open Server Manager, go to File and Storage Services, click on Shares, then from the Tasks menu select New Share. If you don’t have that option, add the File Server Role and wait for the installation to complete. No reboot is needed.

2014-10-13_19-14-01

2. Select the SMB Share – Quick file share profile.

3. Select the appropriate volume the share will be created on.

4. Use a share name in the format of: <Cluster name>-FSW (e.g. SDSQL03-FSW).

5. Enter a description in the format of: <Cluster Name> Cluster File Share Witness.

6. Uncheck allow caching of share and enable encrypt data access.

7. Customize the permission, disable inheritance and remove all inherited permissions.

8. Give the cluster computer object (e.g. SDSQL03) full control. If you want, you could also give administrators access so they can peek inside.

2014-10-13_19-08-51

9. Finish the wizard and wait for the share to be created.
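The share-creation steps above can also be scripted with the SMB cmdlets on WS2012/R2. This is a sketch: the drive letter, domain, and cluster name (SDSQL03) are examples from this series, and you still need to grant the cluster computer object NTFS Full Control on the folder (the wizard does that for you; here icacls does it):

```powershell
# Create the witness folder and share on the FSW member server
New-Item -Path 'S:\SDSQL03-FSW' -ItemType Directory

# Share-level permissions for the cluster computer object (note the $),
# no caching, SMB encryption on - matching steps 6-8 above
New-SmbShare -Name 'SDSQL03-FSW' -Path 'S:\SDSQL03-FSW' `
  -Description 'SDSQL03 Cluster File Share Witness' `
  -CachingMode None -EncryptData $true `
  -FullAccess 'CONTOSO\SDSQL03$', 'CONTOSO\Domain Admins'

# NTFS permissions for the cluster computer object
icacls 'S:\SDSQL03-FSW' /grant 'CONTOSO\SDSQL03$:(OI)(CI)F'
```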

SQL 2014 File Share Witness Configuration

1. On a SQL server launch the Failover Cluster Manager.

2. Right click on the root cluster object (e.g. SDSQL03.contoso.local), select More Actions and then click Configure Cluster Quorum Settings.

3. Choose the Select the quorum witness option.

4. Select Configure a file share witness.

5. Enter the file share path. Click through the remainder of the wizard and verify the FSW was successfully configured.

6. Verify Quorum configuration is now using a file share witness.

2014-10-13_19-26-10
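If you prefer PowerShell, the same quorum change is one cmdlet from either SQL node. The share path below is a placeholder for wherever you created the witness share:

```powershell
# Switch the cluster to node and file share majority quorum,
# then confirm the new configuration
Import-Module FailoverClusters
Set-ClusterQuorum -NodeAndFileShareMajority '\\FS01\SDSQL03-FSW'
Get-ClusterQuorum
```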

Summary

In this installment we configured the SQL cluster to use a file share witness. This is needed when you have an even number of servers in the SQL AAG. You can use either a NAS appliance, or another Windows member server. In the final two installments we will configure Kerberos and SSL.

SQL 2014 Always-on AG Pt. 10: AAG Setup

2014-10-12_18-34-23

After a brief pause in this SQL 2014 AlwaysOn Availability Group series, we are resuming with the server configuration. In this installment we will configure the Windows firewall to allow the two SQL servers to communicate with each other, configure a shared folder for the SQL replication, and finally configure the AlwaysOn feature. Yippee!

Throughout this series I’ve maintained a standard naming convention for all of the computer objects needed for this installation. In the end we will end up with five, yes five, computer objects for this cluster. Two are the SQL server nodes themselves, one for cluster aware updating (-CAU), one for the AlwaysOn Availability group (-AG1), and one for the AlwaysOn Availability group listener (-AG1L). The listener is the name which you would use in ODBC connections to talk with the SQL AlwaysOn cluster and the clustered databases. Don’t use any of the other computer names, as they may appear to work but would result in problems when failing over between nodes.

You can host non-AAG enabled databases on these SQL servers, but they would of course not be protected by the AAG. In that case you would use the SQL server node name in the ODBC connection, not the AAG listener. Mix and match as you wish, depending on your business requirements.

Blog Series

SQL 2014 Always-on AG Pt. 1: Introduction
SQL 2014 Always-on AG Pt. 2: VM Deployment
SQL 2014 Always-on AG Pt. 3: Service Accounts
SQL 2014 Always-on AG Pt. 4: Node A Install
SQL 2014 Always-On AG Pt. 5: Unattended Node B
SQL 2014 Always-on AG Pt. 6: Cluster Configuration
SQL 2014 Always-on AG Pt. 7: TempDB
SQL 2014 Always-on AG Pt. 8: Max Mem & Email
SQL 2014 Always-on AG Pt. 9: SQL Maintenance
SQL 2014 Always-On AG Pt. 10: AAG Setup
SQL 2014 Always-On AG Pt. 11: File Share Witness

Windows Firewall

1. Download my basic SQL AlwaysOn firewall script from here. Open a Windows command prompt and type the following command:

SQL-Firewall-AG.cmd <AlwaysOn Partner IP>

2. Repeat the same process on the other SQL node, using the other node’s IP address.
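The script itself isn’t reproduced here, but the rules it needs to create look roughly like the sketch below, assuming the SQL defaults of TCP 1433 for the engine and TCP 5022 for the mirroring/AAG endpoint (adjust if you changed either):

```powershell
param([string]$PartnerIP)  # the other node's IP, as with SQL-Firewall-AG.cmd

# Allow the partner node in to the SQL engine and the AAG endpoint
New-NetFirewallRule -DisplayName 'SQL Server (TCP 1433)' -Direction Inbound `
  -Protocol TCP -LocalPort 1433 -RemoteAddress $PartnerIP -Action Allow
New-NetFirewallRule -DisplayName 'SQL AAG Endpoint (TCP 5022)' -Direction Inbound `
  -Protocol TCP -LocalPort 5022 -RemoteAddress $PartnerIP -Action Allow
```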

Always-On Availability Group Preparation

1. On S drive (or another drive of your choosing) of your first SQL node create a folder called Replication. This is where SQL will stage the replication data.

2. Change the NTFS permissions on the folder and add the SQL service account with full permissions.

2014-10-12_16-58-04

3. Right click on the Replication folder and select the Sharing tab. Click Share. Verify the Database engine account is listed as a read/write member. Click on Share.

4. Launch SQL Server Management Studio, right click on Databases and select New Database. Create a new database called SeedDB and leave all other parameters at their defaults.

5. Expand the SQL Server Agent node in the left pane and locate the DatabaseBackup — User_Databases – Full job. Open the properties of the job, click on Steps in the left pane, and edit Step 1. Modify the path of the backups to S:\Backup.

2014-10-12_17-10-27

6. If you want to use the other backup jobs, open them and modify the backup path as well.

7. Run the DatabaseBackup — User_Databases – Full job by right clicking on it and selecting Start job at step.

8. Open SQL Server Configuration Manager and locate the SQL Server service. Open the properties and click on the AlwaysOn High Availability tab. Check the box to enable AlwaysOn.

2014-10-12_17-14-43

9. Restart the SQL service.
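Steps 8 and 9 can also be done from PowerShell using the SQLPS module that ships with SQL 2014. The instance name below is a hypothetical node name for this series; -Force restarts the SQL service for you:

```powershell
# Enable the AlwaysOn feature on this instance and restart the service
Import-Module SQLPS -DisableNameChecking
Enable-SqlAlwaysOn -ServerInstance 'SDSQL03A' -Force
```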

Always-On Availability Group Configuration

1. Open Active Directory Users and Computers (ADUC) and create a new computer object using the cluster name with a suffix of –AG1 (e.g. SDSQL03-AG1). Change the description of the object to SQL Cluster <Cluster Name> Availability Group.

2. On the first SQL server cluster node, in SQL Server Management Studio navigate to AlwaysOn High Availability, right click and select New Availability Group Wizard.

3. For the availability group name use the format of: <Cluster Name>-AG1 (e.g. SDSQL03-AG1). This should be the same name as step 1 in this subsection.

4. On the Select Databases screen the SeedDB database should be listed, and it should meet the prerequisites. Check the box next to the database and click Next.

5. On the Specify Replicas screen add the second SQL server cluster node. Configure the automatic failover and synchronous commit options as shown below. Make the second node a readable secondary.

2014-10-12_17-50-45

6. On the Listener tab create an availability group listener. For the DNS name, use the same name as the AG (e.g. SDSQL03-AG1) but with an “L” suffix added (e.g. SDSQL03-AG1L), using port 1433. Assign an IP address on the same subnet as the SQL server. If you don’t see the Add button under the network mode box expand the size of the window.

2014-10-12_17-56-24

7. Configure the data replication synchronization using the Replication file share on the first SQL node, as shown below.

2014-10-12_17-57-18

8. Review the validation results and verify all are successes.

2014-10-12_17-58-46

9. Proceed through the rest of the wizard and validate all steps are successful.

10. In SQL Server Management Studio right click on the AG and select Show Dashboard. Verify all objects are in a healthy state, as shown below.

2014-10-12_18-22-45
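As a scripted alternative to eyeballing the dashboard, you can query the AG health DMVs directly. A sketch, using a hypothetical node name from this series:

```powershell
# Query replica roles and synchronization health for the new AG
Import-Module SQLPS -DisableNameChecking
Invoke-Sqlcmd -ServerInstance 'SDSQL03A' -Query @"
SELECT ag.name, ar.replica_server_name, rs.role_desc, rs.synchronization_health_desc
FROM sys.availability_groups ag
JOIN sys.availability_replicas ar ON ag.group_id = ar.group_id
JOIN sys.dm_hadr_availability_replica_states rs ON ar.replica_id = rs.replica_id;
"@
```

Both replicas should report HEALTHY before you move on.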

Summary

Now that we have a SQL AlwaysOn Availability group up and running, there’s still a bit of configuration left to do. So future installments will cover configuring a file share witness, Kerberos, and SSL certificates. Yes, the all important SSL! Don’t quit now and think that you are done. While we are almost there, you will want to follow through the rest of the configuration. Check out Part 11 here.

Nutanix Announces All Flash Array

In very exciting news, Nutanix announced the industry’s first hyperconverged all flash appliance, the NX-9240. What makes this puppy special? Well it’s a 2U appliance which contains two compute nodes, each with 6 SSDs. The SSDs can be either 800GB or 1.6TB models. This means up to 19.2TB of raw flash in a 2U appliance. Fill half a rack with these monsters and you have nearly 200TB of high performance flash at your disposal.

2014-10-11_10-30-43

Nutanix has always had great performance, due to our hybrid approach of combining SSDs with HDDs. This works great for a large majority of workloads including VDI, Exchange, SQL, Oracle and general purpose server VMs. With integrated ILM, hot data is moved up to SSDs while colder data is moved down to the HDDs. But sometimes there are exceptional workloads, such as database OLTP, which need the consistent I/O latency and predictable performance across the entire working set that only flash can bring. These workloads will feel right at home on the NX-9000 series platform.

2014-10-11_10-23-32

Why Nutanix?

One big advantage that Nutanix all flash has over other competing all flash arrays is our scale out controller architecture of one CVM per server node. As shown in my Running IOMeter on Nutanix blog post, we saw that I/O performance scales out as you add more controllers. Driving flash requires a lot of processing power, and dual controller arrays may start to peter out when pushing all flash performance. If you are doing array firmware upgrades or have a hardware malfunction and lose a controller your performance can be cut in half. With Nutanix, if you take a node down for maintenance then you only lose up to 1/n performance from your cluster. For example, in a 16 node cluster you would only suffer a 6% drop (assuming the cluster was being pushed to the maximum). In reality you probably wouldn’t even notice a node is down during normal workloads.

Another Nutanix advantage is the lack of rip and replace when you want to increase your storage performance. Instead of a forklift controller upgrade to get better performance, you merely add new nodes through the PRISM GUI in a few clicks. You’ve now just added many tens of thousands of IOPS of capacity to your cluster with minimal fuss. As new platforms come out, such as those based on Haswell, just add them to existing clusters and take advantage of the latest Intel CPU performance benefits.

Even More Models

In case you missed it, Nutanix also very recently announced the NX-8000 series, which has four SSDs and 20 HDDs. This is ideal for large scale Microsoft Exchange workloads (think thousands of mailboxes), or medium sized databases. From the NX-1000 series for small deployments through the NX-9240 for all flash, we have all of your hyperconverged bases covered.

Running IOmeter on Nutanix

Sometimes Nutanix customers or prospective customers will want to perform IOmeter testing to ensure the block is performing up to par. Or maybe they are upgrading to a new NOS release or doing a POC and want to ballpark performance. Performing benchmarks on a hyperconverged platform like Nutanix is a bit different than on the traditional 3-tier architecture. Nutanix has a concept of CVMs (controller VMs) that run on each node (server). These CVMs service I/Os, and scale out 1:1 as you add more nodes. This gives rise to the great linear performance of the Nutanix platform. No more legacy two-controller architectures that don’t scale up well.

In order to accurately assess the ballpark performance of a Nutanix cluster you will need to deploy at least one IOmeter VM per node, and configure that VM in a particular manner. Specifically, you will get the best I/O performance when spreading the workload across 8 virtual disks (VMDKs in the VMware world) and across multiple SCSI controllers. You also need to increase the number of vCPUs to 8, so Windows can better service all of the I/Os. And this might be news to some, but you don’t want to perform tests on NTFS formatted disks. What? Yes, you want IOmeter to use raw disks. NTFS formatted disks introduce an additional layer of caching, and even lazy writes can enter the picture.

Now that you understand a bit of the IOmeter configuration background, let’s deploy IOmeter and walk through all these steps.

Deploying IOMeter on Nutanix

1. Use the vSphere client and connect to the vCenter that is managing your Nutanix cluster.

2. Deploy your favorite Windows server VM template. This will be our IOMeter master, which we will clone later on.

3. While the VM is powered off, increase the number of vCPUs to 8.

2014-10-09_13-04-36

4. Add 8 VMDKs to the VM, spreading them evenly across the four SCSI controllers (e.g. 0:x, 1:x, 2:x, 3:x). I would suggest 8x 20GB thinly provisioned disks.

5. Change the new SCSI controllers (controllers 1 to 3) to Paravirtual (PVSCSI). You can use LSI Logic SAS as well, but the PVSCSI controller is more optimized for high IOPS and lower CPU utilization. Leave controller 0 as LSI Logic SAS so Windows can boot.

6. Install IOMeter. You can download it from here. Download the IOmeter configuration from here. Unzip the configuration files. Power down the master VM.

7. Clone the IOMeter master VM so that you have one VM per Nutanix node. Disable DRS on the cluster (if enabled) and vMotion the new VMs so that you have one IOmeter VM per node. Power on the IOmeter VMs.

8. Launch IOmeter and load the 32k-sequential-write ICF file via the yellow folder icon in the upper left. Ignore any errors.

9. Click on the Computer name in the left pane then shift click and select all disks. Change *each* disk’s maximum disk size to 8000000.

2014-10-09_13-39-10

10. Click on the Green flag to start the test. Wait a few minutes for the green flag to become unghosted. This will write data to all of the drives. Exit IOmeter.

11. Locate the benchmark that you want to run from the IO configuration files that you downloaded. In my example I’ll use 4k-random-read-2-outstanding-IO. Double click the configuration file to launch IOmeter.

12. You should see 1 worker, and 8 VMware disks in right pane. Click on the Disk icon to add 7 more workers, for a total of 8. For each worker select the corresponding VMware virtual disk, as shown below. Worker 2 uses VMware disk 2, etc. Change the maximum disk size to 8000000.

2014-10-09_13-10-57

13. Click on the Access Specifications tab, select the 4K random read spec, and add it. Click on the green flag to start the test. Change the update frequency from the drop down menu to a reasonable value like 5. Wait a few minutes for the performance to stabilize. Perform the same benchmark in the other IOmeter VMs and wait for the IOPS to peak and level out. You are now testing the total cluster IOPS capacity. For example, if you have three nodes and each IOmeter instance is pushing 20K IOPS, then the cluster can sustain 60K IOPS.
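Steps 3–5 of the VM build can be scripted with PowerCLI if you are cloning many of these. A sketch, assuming PowerCLI is installed and connected to vCenter, and that the master VM name is hypothetical:

```powershell
# 8 vCPUs, then 8 thin 20GB VMDKs: two on the boot controller (LSI Logic SAS,
# so Windows still boots) and two on each of three new PVSCSI controllers
$vm = Get-VM 'IOmeter-Master'
Set-VM -VM $vm -NumCpu 8 -Confirm:$false

# Two disks stay on SCSI controller 0
1..2 | ForEach-Object { New-HardDisk -VM $vm -CapacityGB 20 -StorageFormat Thin }

# Three pairs of disks, each pair moved onto a new ParaVirtual controller
for ($pair = 1; $pair -le 3; $pair++) {
    $disks = 1..2 | ForEach-Object {
        New-HardDisk -VM $vm -CapacityGB 20 -StorageFormat Thin
    }
    New-ScsiController -HardDisk $disks -Type ParaVirtual | Out-Null
}
```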

Outstanding I/Os

One parameter within IOmeter that can have a big effect on the total IOPS is the number of outstanding I/Os per target. This parameter adjusts how many I/Os are “in flight” to the storage controller. As you increase the number of outstanding I/Os you increase storage latency, but it can also increase performance. There’s a balance between the number of outstanding I/Os and latency. For example, during a test run when I set the outstanding I/Os to 4, latency during 100% read tests was right at 1ms with 33K IOPS for a single node. Jacking it up to 16 outstanding I/Os increased IOPS to 58K, but latency went up to 2.1ms. With 64 outstanding I/Os latency jumped up to 7.6ms and IOPS peaked at 67K. For this particular test 16 outstanding I/Os gave a good balance between latency and IOPS.

2014-10-10_10-56-10

Summary

As you can see from this setup, Nutanix performance scales linearly as you add more nodes. Each CVM processes I/Os, and as you add more nodes you scale the CVMs and the total cluster performance. You can also see the scale-out IOPS performance within the VM by starting with one worker, benchmarking, then incrementally adding workers. As you will see, once you get to about 8 VMDKs and workers the performance tops out. You can also affect IOPS performance by changing the number of outstanding I/Os. Latency increases as you increase the number of in-flight I/Os.

This design principle of distributing workloads across multiple VMDKs within a VM carries through our Exchange, SQL, Oracle and other best practices that Nutanix has published. This enables those applications to take full advantage of the web-scale architecture and linear performance that Nutanix brings to the table. Go hyperconvergence!

Sample VCDX-DCV Architecture Outline

This post covers my approach to writing my VCDX-DCV Architecture Guide. I’ve been debating in my mind for a while whether I should write this post or not. I hesitated for a few reasons. First, I’m just a regular guy who happened to jump through the VCDX hoops and have no “insider” information on how they score. Those that do know the scoring rubric can’t disclose it anyway. Second, there are 1000 different ways to write your VCDX-DCV architecture document. Third, there’s no “magic template” or “sure fire” outline that ensures your design gets accepted. Do not view this post as a shortcut or cheat sheet.

What matters is your content, how it aligns to the VCDX blueprint, and that you convey expert level knowledge to the reader. It’s NOT about speeds and feeds, but rather the full traceability of customer requirements, constraints, assumptions and risks throughout your design. Who cares if you’ve thrown every VMware product and feature at a solution if you haven’t met the business requirements? #Fail

So why did I publish this article? I know when I started the VCDX process it was a bit daunting to read the DCV blueprint and try to come up with an architecture guide that hit all the areas in a logical manner. I’ve heard from other candidates they experienced the same “VCDX writer’s block.” In fact several of us have scrapped our first attempts, and started over. Bottom line is you need to do what feels right to YOU, and what works for YOUR design while covering all the blueprint areas. You may not like my methodology or outline, which is perfectly fine and a valid way to feel.

I’ve also heard comments from VMware customers (like myself when I went through the process) that think since they aren’t a partner and don’t have access to the VMware SET templates that they are at a disadvantage. That’s not true, IMHO. Yes the VMware SET docs are structured and may help you, but they aren’t directly aligned to the VCDX blueprint and need augmentation.

With all those caveats, I wanted to share my DCV architecture guide outline. Maybe it will help someone with writer’s block, or enable you to see some of the areas that a VCDX design could cover. Your design may need additional areas, or less coverage. This is certainly not all inclusive, and it’s guaranteed your outline will be different. It is your responsibility to ensure your documents cover all blueprint areas, make sense for your design, and are something you feel comfortable with. Own your documentation.

Before I go any further, let me state that how I chose to incorporate the specific VCDX bootcamp book recommendations is somewhat unique to my style. Of the submissions I’ve seen none did it exactly this way, which proves that there is no “magic” template or style for VCDX submissions. I just felt it gave a better overall flow to the document.

You will see some common sub-sections in all design areas (e.g. cluster, storage, compute, etc.). For example, in most areas I had specific conceptual, logical and physical sections. This helped me show the traceability of customer input through the entire design process. Each major section also concludes with a Design Justification, which is a summary of how I met the customer requirements and cites all of the applicable requirements, assumptions, constraints, and risks.

At the end of the Design Justification section I had two tables to help distill down the critical information. First, I had a summary table, shown below. All of the design quality items (e.g. C02) were referenced elsewhere in that section as applicable. Possibly overkill, but I liked the compact summary.

2014-10-06_17-38-00

The second table was that of the applicable design decisions, each with the decision, impact, decision risks (after all, nearly every decision has a risk), and risk mitigation. A sample design decision is below.

2014-10-06_17-49-53

WordPress was not cooperating with me for a clean outline format, so I’ve inserted a series of screen captures to maintain formatting.

Sample VCDX-DCV Architecture Outline

2014-10-06_16-48-18

2014-10-06_16-50-08  2014-10-06_16-52-34

2014-10-06_16-53-59

2014-10-08_8-39-36
2014-10-06_16-59-52

SQL 2014 Always-on AG Pt. 9: SQL Maintenance

Blog Series

SQL 2014 Always-on AG Pt. 1: Introduction
SQL 2014 Always-on AG Pt. 2: VM Deployment
SQL 2014 Always-on AG Pt. 3: Service Accounts
SQL 2014 Always-on AG Pt. 4: Node A Install
SQL 2014 Always-On AG Pt. 5: Unattended Node B
SQL 2014 Always-on AG Pt. 6: Cluster Configuration
SQL 2014 Always-on AG Pt. 7: TempDB
SQL 2014 Always-on AG Pt. 8: Max Mem & Email
SQL 2014 Always-on AG Pt. 9: SQL Maintenance
SQL 2014 Always-on AG Pt. 10: AAG Setup
SQL 2014 Always-On AG Pt. 11: File Share Witness

SQL Maintenance Script

Many people, it seems, just install SQL server and forget about it. They don’t realize that SQL server is just like a race car. If you leave it alone and don’t do routine maintenance it will fall apart. To that end, Ola Hallengren has the de facto database maintenance scripts, which I’ve been told Microsoft uses extensively internally.

Download the MaintenanceSolutions.SQL script from here. Just run the script in SQL studio and it will create a number of maintenance jobs for you. It gets regular updates, so check back every few months for improvements. After you run the script you should see the following list of jobs under the SQL Agent.

9-26-2013 7-53-54 PM

SQL Blitz Script

I first heard Brent Ozar speak at TechEd a couple of years ago in a SQL performance session. His team has developed the sp_Blitz script. It performs dozens of SQL configuration checks and warns you about potential issues. This script is also regularly updated to add new checks and support new SQL versions. Plus his site has links to many common issues that the script highlights and provides remediation guidance. You can download the script here.

After you download the script, execute it in SQL studio. You won’t see much happen, since this just installs the stored procedure; it doesn’t actually run the checks. We will execute the checks a bit later. After we configure the Always-On Availability group we can run it again, for thorough coverage.

As always, defer to your professional SQL DBAs when configuring the following settings. Workloads vary wildly, so the settings below may not be the best for your environment. These are just a baseline configuration for simple low intensity databases.

Remember to run all the steps below on each of your two SQL nodes. They aren’t in a traditional failover cluster, so each node is independently configured. Just doing it once won’t cut it.

Database Integrity Checks

1. In SQL Server Management Studio locate the SQL Server Agent node. Right click on DatabaseIntegrityCheck – SYSTEM_DATABASES and select Properties.

2. On the Schedules page configure the daily job as shown below, and then click OK.

9-26-2013 8-13-26 PM

And the end result is….

9-26-2013 8-14-22 PM

Click on the notifications page and enable email and application log events. I suggest ‘when the job completes’ email alerting.

9-26-2013 8-16-19 PM

3. Repeat the same configuration process for DatabaseIntegrityCheck – USER_DATABASES but stagger the job run, say an hour later.

4. Run both jobs.
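Both jobs can also be kicked off from PowerShell rather than the GUI. A sketch, using a hypothetical instance name; the job names are the ones Ola’s script creates:

```powershell
# Start both integrity check jobs via the SQL Agent stored procedure
Import-Module SQLPS -DisableNameChecking
Invoke-Sqlcmd -ServerInstance 'SDSQL03A' -Query @"
EXEC msdb.dbo.sp_start_job @job_name = N'DatabaseIntegrityCheck - SYSTEM_DATABASES';
EXEC msdb.dbo.sp_start_job @job_name = N'DatabaseIntegrityCheck - USER_DATABASES';
"@
```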

Index Optimizations

1. In SQL Server Management Studio locate the SQL Server Agent node. Right click on IndexOptimize – USER_DATABASES and select Properties.

2. On the Schedules page configure the weekly job and then click OK.

9-26-2013 8-21-42 PM

Configure the same notification settings…

9-26-2013 8-22-54 PM

Database Backups

1. Turning on backup compression is recommended. Do this by running the following SQL query.

USE master;
GO
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE WITH OVERRIDE;

2. In SQL Server Management Studio locate the SQL Server Agent node. Right click on DatabaseBackup – SYSTEM_DATABASES – Full and edit the job steps. Open step 1 and modify the backup path as needed. In my case I changed it to S:\Backup. This should be on the S drive, so that backups are available to each node.

9-26-2013 8-34-10 PM

3. Modify the other backup jobs to change the path as well.

4. Execute the DatabaseBackup – SYSTEM_DATABASES – Full job and validate that it runs successfully.

SP_Blitz

Now that we’ve performed backups and integrity checks, it’s time to run Brent’s stored procedure. Type exec sp_blitz and review the output at the bottom of the screen. Not all issues need to be corrected. Some are informational. Brent has a lot of detail for some issues the script finds here. I would run this on a periodic basis, and make sure you check his site for the latest version.

9-26-2013 8-49-26 PM

Summary

In this section we configured and scheduled maintenance jobs to preserve database performance and validate database integrity. As I said before, defer to your in-house professional DBA for the recommended settings in your environment. These are just a starting point for a small environment. Next up in Part 10 we configure the Windows firewall, prep for AAG setup, then run the AAG setup wizard.

SQL 2014 Always-on AG Pt. 8: Max Mem & Email

In this installment of the SQL 2014 Always-on Availability how-to we are configuring two very important parameters: SQL server max memory and email alert notifications. Both settings are often overlooked, yet are critical to a properly performing and healthy SQL server cluster. Even if you are using a tool like SCOM or vCOPS to monitor your SQL server, I’d recommend still configuring the SQL alerting as a backup source. This way if your primary monitoring solution goes offline you have a second line of defense for getting SQL alerts.

Blog Series

SQL 2014 Always-on AG Pt. 1: Introduction
SQL 2014 Always-on AG Pt. 2: VM Deployment
SQL 2014 Always-on AG Pt. 3: Service Accounts
SQL 2014 Always-on AG Pt. 4: Node A Install
SQL 2014 Always-On AG Pt. 5: Unattended Node B
SQL 2014 Always-on AG Pt. 6: Cluster Configuration
SQL 2014 Always-on AG Pt. 7: TempDB
SQL 2014 Always-on AG Pt. 8: Max Mem & Email
SQL 2014 Always-on AG Pt. 9: SQL Maintenance
SQL 2014 Always-On AG Pt. 10: AAG Setup
SQL 2014 Always-On AG Pt. 11: File Share Witness

SQL Server Memory

SQL server is a database server and as such can use considerable memory. The more memory the SQL server process has, the more it can cache and thus improve performance and reduce IOPS. By default when SQL is installed it places no memory cap on itself and could use up nearly all the system memory, leaving precious little for the OS or other apps. This is even more of a problem if you enable large page tables since it could cause more memory starvation problems.

What’s the solution? You should always configure the SQL server’s max memory setting. This sets the upper limit on how much buffer memory the SQL server will use. By configuring this properly you ensure the OS or other local apps are not starved for memory. Set this too small and you could be “wasting” the extra memory. If you are running other services like full text search you will want to assign less memory to SQL.

When provisioning a new SQL server VM you need to be mindful of three memory settings: the guest VM’s memory size you configure in vCenter, SQL server’s maximum memory setting, and any VMware reservations that you configure. For all the gory details on SQL server memory, Brent Ozar has an outstanding post here. Suffice it to say, if you reserve 100% of the VM’s memory at the VMware layer, then you really just need to worry about the SQL maximum memory size. For a barebones SQL server (no other services like full text index, reporting services, etc.) the following table is a decent starting point. I like to leave 2-4GB for the OS, and provision the rest to SQL server.

9-26-2013 5-24-58 PM

Configuring the maximum is easy through the GUI. Open Microsoft SQL Server Management Studio and open the server properties. Go to the Memory configuration page and then enter the appropriate value. Remember if you increase the VM’s memory at the VMware layer, go back and adjust the SQL Max memory size or SQL won’t use it.

9-26-2013 5-29-28 PM
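The same setting can be scripted if you’re building multiple nodes. A sketch, with a hypothetical instance name; the 6144 MB value is an example for an 8GB VM leaving roughly 2GB for the OS, so adjust for your table row:

```powershell
# Set max server memory via sp_configure ('max server memory (MB)' is an
# advanced option, so it must be exposed first)
Import-Module SQLPS -DisableNameChecking
Invoke-Sqlcmd -ServerInstance 'SDSQL03A' -Query @"
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 6144; RECONFIGURE;
"@
```

Remember to run it on both nodes, since each instance is configured independently.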

E-Mail Alerting

Monitoring a SQL server is extremely important. While there are many tools to do so, you can easily configure free email alerting. In conjunction with a database maintenance script that I’ll cover later, you can easily configure daily job alerts and make sure your SQL server is not misbehaving. In this section we will merely configure SQL server to send alerts, and in the next installment we will discuss DB maintenance plans.

1. In SQL Server Management Studio Expand SQL Server Agent. Right click on Operators and select New Operator.

9-26-2013 6-06-04 PM

2. Use a Name like DBA Team, and configure the E-mail name to the distribution list for your SQL server team. An example is shown below. Close the New Operator page. Note: The script at the end of this post assumes an operator name of “DBA Team”, but you can search and replace inside the script if you wish to use a different name.

9-26-2013 6-26-58 PM

3. Expand the root Management node and right click on Database Mail. Select Configure Database mail.

9-26-2013 6-09-13 PM

4. Click through the wizard until you get to the New Profile screen (you may get a warning that Database Mail is not enabled; just click Yes to enable it). Enter a profile name, for example Exchange. Click Add to add the SMTP configuration.

9-26-2013 6-12-31 PM

5. Configure the SMTP properties as appropriate for your network. The preferred configuration is SMTP over SSL with Windows integrated credentials for SMTP authentication, although that may not be possible depending on your mail server configuration. If a SQL alerts distribution list does not exist, create one and add the appropriate people to it. An example configuration is below. I suggest using the SQL server name as the account and display name, so you can readily tell which server is sending the alert. Click Next.

9-26-2013 6-14-27 PM

6. Configure the Profile Security as shown below, making your profile the default profile.

9-26-2013 6-16-56 PM

7. Click through the rest of the wizard accepting all defaults.
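Steps 3-7 can also be scripted. The sketch below is one way to do it; the account name, mail server, port, and addresses shown are placeholders for your environment, and the profile name matches the “Exchange” example above.

```sql
USE msdb;
GO
-- Enable the Database Mail extended stored procedures
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'Database Mail XPs', 1; RECONFIGURE;
GO
-- Create a mail account (server name and addresses are placeholders)
EXEC msdb.dbo.sysmail_add_account_sp
    @account_name    = N'SQL01',
    @email_address   = N'sql01@yourdomain.com',
    @display_name    = N'SQL01',
    @mailserver_name = N'smtp.yourdomain.com',
    @port            = 25;
-- Create the profile and bind the account to it
EXEC msdb.dbo.sysmail_add_profile_sp @profile_name = N'Exchange';
EXEC msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name    = N'Exchange',
    @account_name    = N'SQL01',
    @sequence_number = 1;
-- Make it the default public profile
EXEC msdb.dbo.sysmail_add_principalprofile_sp
    @profile_name   = N'Exchange',
    @principal_name = N'public',
    @is_default     = 1;
GO
```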

8. Right click on Database Mail and select Send Test E-Mail. For the recipient enter the DBA distribution list, and verify the mail was received.
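The same test can be done from a query window with sp_send_dbmail. The recipient address is a placeholder, and the profile name assumes the “Exchange” example used earlier.

```sql
-- Send a test message through the 'Exchange' Database Mail profile
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = N'Exchange',
    @recipients   = N'dba-team@yourdomain.com',  -- placeholder address
    @subject      = N'Database Mail test',
    @body         = N'If you can read this, Database Mail is working.';
```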

9. Right click on SQL Server Agent and open the properties. Click on Alert System and configure the mail profile as shown below. Make sure you enable the profile, select the right mail profile, then enable the fail-safe operator for e-mail alerts.

9-26-2013 6-19-23 PM

10. Cut and paste the following code into SQL Server Management Studio and execute the query. You should not get any errors. If you didn’t use “DBA Team” for the operator name, search and replace before you run the script to avoid errors. If you do goof it up, you will need to delete all of the alerts it creates (under the Alerts node in SQL Server Agent) and re-run it.

USE [msdb]
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 016',
@message_id=0,
@severity=16,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 016', @operator_name=N'DBA Team', @notification_method = 7;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 017',
@message_id=0,
@severity=17,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 017', @operator_name=N'DBA Team', @notification_method = 7;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 018',
@message_id=0,
@severity=18,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 018', @operator_name=N'DBA Team', @notification_method = 7;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 019',
@message_id=0,
@severity=19,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 019', @operator_name=N'DBA Team', @notification_method = 7;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 020',
@message_id=0,
@severity=20,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 020', @operator_name=N'DBA Team', @notification_method = 7;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 021',
@message_id=0,
@severity=21,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 021', @operator_name=N'DBA Team', @notification_method = 7;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 022',
@message_id=0,
@severity=22,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 022', @operator_name=N'DBA Team', @notification_method = 7;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 023',
@message_id=0,
@severity=23,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 023', @operator_name=N'DBA Team', @notification_method = 7;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 024',
@message_id=0,
@severity=24,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 024', @operator_name=N'DBA Team', @notification_method = 7;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 025',
@message_id=0,
@severity=25,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 025', @operator_name=N'DBA Team', @notification_method = 7;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Error Number 823',
@message_id=823,
@severity=0,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000'
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Error Number 823', @operator_name=N'DBA Team', @notification_method = 7;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Error Number 824',
@message_id=824,
@severity=0,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000'
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Error Number 824', @operator_name=N'DBA Team', @notification_method = 7;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Error Number 825',
@message_id=825,
@severity=0,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000'
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Error Number 825', @operator_name=N'DBA Team', @notification_method = 7;
GO

Summary

This is a pretty long post, but the actual steps are not that difficult. Properly configuring the SQL Server max memory is very important. Likewise, you should always configure email alerts for your SQL server. The alerts in the script follow best practices, but your DBA may want to tweak the list by adding more. Even if you are using SCOM or some other system monitoring tool, I would still urge configuring server email alerts. That way, if your central monitoring tool dies or gets misconfigured, you will still know if your SQL server is having problems. Next up in Part 9 is SQL maintenance tasks.

SQL 2014 Always-on AG Pt. 7: TempDB

2014-09-22_17-13-13

One of the most important aspects of SQL performance is TempDB. Applications can use TempDB as scratch space, and in most cases you should not rely on just a single TempDB file. If the number of vCPUs is eight or fewer, configure one TempDB data file per vCPU. If the number of vCPUs exceeds eight, start with eight TempDB files and look for in-memory allocation contention (PAGELATCH_XX waits). Scale up the number of TempDB files in increments of four until the contention is eliminated.
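To sanity-check the vCPU count against the number of TempDB data files, and to spot the allocation contention mentioned above, queries like these can help. The `2:` filter in the last query targets database id 2, which is TempDB.

```sql
-- How many logical CPUs does SQL Server see?
SELECT cpu_count FROM sys.dm_os_sys_info;

-- How many TempDB data files currently exist?
SELECT name, physical_name
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';

-- Live check for TempDB allocation contention (PAGELATCH waits on database id 2)
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%';
```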

Another important aspect of TempDB is file size and growth. SQL will proportionally use the TempDB files based on their size, so it’s important to create all TempDB files the same size and not allow autogrow. Size them properly up front to accommodate your application requirements. If you do need to grow them in the future, grow all of them equally. TempDB uses a single “centralized” log file, so you won’t see multiple log files.

Just to conserve disk space, the script below creates four TempDB files at 5MB each. Increase all TempDB sizes equally for your environment. I’d probably go with 1-5GB per file if the SQL server will be under light usage and only supporting an application like vCenter. A general TempDB sizing rule of thumb is 1% to 10% of your total database size. Your transaction log file needs to be of sufficient size to hold all the transactions between backups. That probably means a day’s worth, unless you are running a large database with more frequent backups.

The script also renames the SA account as a security protection measure. This is optional, but recommended. It also enables the Dedicated Administrator Connection (DAC), which allows remote administration during dire SQL server problems so you can possibly repair the situation and get SQL server back into production.

In addition to TempDB, the script makes other minor changes to system database sizes and auto-growth. I won’t bore you with all the details, but please consult your local DBA for system DB settings specific to your environment. In the absence of a DBA, the settings below are a reasonable start for a small environment.
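To see where your system database files stand before or after running the script, the sizes and growth settings can all be reviewed in one query against sys.master_files:

```sql
-- List file sizes (MB) and growth settings for the system databases
SELECT DB_NAME(database_id) AS database_name,
       name AS logical_name,
       size * 8 / 1024 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + '%'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.master_files
WHERE database_id IN (DB_ID('master'), DB_ID('model'), DB_ID('msdb'), DB_ID('tempdb'));
```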

Blog Series

SQL 2014 Always-on AG Pt. 1: Introduction
SQL 2014 Always-on AG Pt. 2: VM Deployment
SQL 2014 Always-on AG Pt. 3: Service Accounts
SQL 2014 Always-on AG Pt. 4: Node A Install
SQL 2014 Always-On AG Pt. 5: Unattended Node B
SQL 2014 Always-on AG Pt. 6: Cluster Configuration
SQL 2014 Always-on AG Pt. 7: TempDB
SQL 2014 Always-on AG Pt. 8: Max Mem & Email
SQL 2014 Always-on AG Pt. 9: SQL Maintenance
SQL 2014 Always-On AG Pt. 10: AAG Setup
SQL 2014 Always-On AG Pt. 11: File Share Witness

Configure TempDB and More

1. Copy the script below to your first SQL server. To run the script, open SQL Server Management Studio, create a new query (click “New Query”), then paste in the code below. Or save the script to a .sql file and double-click on it. Be sure to change the file paths to match your system; the script will fail if they are incorrect. Execute the script and it should run successfully.

2014-09-22_17-19-05

 

/* Configures TempDB and other settings */
/* Derek Seaman, derekseaman.com */

/* Configure TempDB */

USE master;
GO
ALTER DATABASE tempdb
ADD FILE
(
NAME = tempdev2,
FILENAME = 'J:\TempDB\tempdb2.mdf',
SIZE = 5MB,
FILEGROWTH = 0
);

ALTER DATABASE tempdb
ADD FILE
(
NAME = tempdev3,
FILENAME = 'J:\TempDB\tempdb3.mdf',
SIZE = 5MB,
FILEGROWTH = 0
);

ALTER DATABASE tempdb
ADD FILE
(
NAME = tempdev4,
FILENAME = 'J:\TempDB\tempdb4.mdf',
SIZE = 5MB,
FILEGROWTH = 0
);

ALTER DATABASE tempdb
MODIFY FILE
(
NAME = tempdev,
SIZE = 5MB,
FILEGROWTH = 0
);

ALTER DATABASE tempdb
MODIFY FILE
(
NAME = templog,
SIZE = 2048MB,
FILEGROWTH = 256MB
);

/* Configure other system databases */

GO
ALTER DATABASE model
MODIFY FILE
(
NAME = modeldev,
SIZE = 10MB,
FILEGROWTH = 10MB
);

ALTER DATABASE model
MODIFY FILE
(
NAME = modellog,
SIZE = 10MB,
FILEGROWTH = 10MB
);
GO

ALTER DATABASE master
MODIFY FILE
(
NAME = master,
SIZE = 10MB,
FILEGROWTH = 10MB
);

ALTER DATABASE master
MODIFY FILE
(
NAME = mastlog,
SIZE = 10MB,
FILEGROWTH = 10MB
);
GO

ALTER DATABASE msdb
MODIFY FILE
(
NAME = msdbdata,
SIZE = 25MB,
FILEGROWTH = 10MB
);

ALTER DATABASE msdb
MODIFY FILE
(
NAME = msdblog,
SIZE = 25MB,
FILEGROWTH = 10MB
);
GO
/* Allow advanced configuration options */

USE master;
GO
exec sp_configure 'show advanced options', 1;
RECONFIGURE;
GO

/* Rename SA Account */

ALTER LOGIN sa WITH NAME = [Matrix];

/* Enable DAC */

EXEC sp_configure 'remote admin connections', 1
GO
RECONFIGURE
GO

2. Repeat the configuration process on the other SQL node.

Summary

TempDB is often overlooked by non-DBAs installing SQL Server. Even with VMs using just 1-2 cores, I would still configure a few TempDB files. Why? If the SQL server grows and you add more DBs, vCPUs, etc., you probably won’t remember to add more TempDB files. Coming up in Part 8 we will be configuring memory and email alerting. Then, yes, we finally configure AlwaysOn Availability Groups.

SQL 2014 Always-On Pt. 6: Cluster Config

2014-09-20_21-10-48

We’ve now come to the point where we need to install and configure the Windows cluster role. Even though SQL AAGs don’t use shared disks like a traditional failover cluster, AAGs do rely on the Windows cluster service. We will also be implementing the Cluster-Aware Updating feature, new to Windows Server 2012, which does rolling Windows patch updates on each node while keeping the clustered service up. This is a very handy feature. If you have an enterprise patch management tool that is cluster aware, you could skip that section and rely on that tool instead.

Blog Series

SQL 2014 Always-on AG Pt. 1: Introduction
SQL 2014 Always-on AG Pt. 2: VM Deployment
SQL 2014 Always-on AG Pt. 3: Service Accounts
SQL 2014 Always-on AG Pt. 4: Node A Install
SQL 2014 Always-On AG Pt. 5: Unattended Node B
SQL 2014 Always-on AG Pt. 6: Cluster Configuration
SQL 2014 Always-on AG Pt. 7: TempDB
SQL 2014 Always-on AG Pt. 8: Max Mem & Email
SQL 2014 Always-on AG Pt. 9: SQL Maintenance
SQL 2014 Always-On AG Pt. 10: AAG Setup
SQL 2014 Always-On AG Pt. 11: File Share Witness

Cluster Role Installation

1. From the Server Manager dashboard launch Add Roles and Features.

2. Select Role-Based or Feature-Based Installation.

3. Select the local server when prompted.

4. Skip the Server Roles page and on the Features page check Failover Clustering. When prompted, add the required features.

5. Continue through the rest of the wizard and wait for the installation process to complete.

6. Install Windows Failover Clustering on the second SQL node.

Cluster Validation

1. On Node A launch the Failover Cluster Manager, and in the left pane right click on Failover Cluster Manager and select Validate Configuration.

2. Enter the two hostnames of the SQL servers you are configuring. Run all of the tests. Review the cluster report for any errors. Some warnings are normal, such as:

  • Only one pair of network interfaces
  • No disks to perform cluster validation on (and all associated disk tests)

3. Download and install Windows8-RT-KB2803748.x64.msu on both nodes if you are running Windows Server 2012 (not 2012 R2).

Cluster Configuration

1. Launch the Failover Cluster Manager. Right click on Failover Cluster Manager and select Create Cluster.

2. Add the two nodes of the cluster and click Next.

3. Enter a cluster name (e.g. SDSQL03) and the IP address of the cluster name. Click Next.

2014-09-20_20-36-42

4. Validate the cluster information is correct and click Next. Un-check the box to add all storage devices. Wait for the cluster to be created.

5. Verify the cluster was successfully formed. You can safely ignore warnings about no disks being found and regarding an even number of nodes.

6. Verify all of your drives are still visible. If they have disappeared, go into the cluster manager and remove all drives as cluster resources. Then open Computer Management and bring each of the drives online. Reboot the SQL server if you had to do this procedure, so SQL can start up normally.

7. Open the DNS Manager console, locate the cluster hostname (e.g. SDSQL03) in the forward lookup zone, open the record, and uncheck the box to update the pointer record. Re-check the box to create a pointer record. Verify a pointer record was created.

Cluster Aware Updating Configuration

1. In ADUC create a computer object named after the cluster with a -CAU suffix (e.g. SDSQL03-CAU), in the appropriate server OU. Change the description on the object to something like SQL Cluster SDSQL03 Cluster Aware Updating Object.

2. Modify the security permissions on the CAU computer object to give the cluster object (e.g. SDSQL03) full control.

2014-09-20_20-55-48

3. On each node open the Windows Firewall manager and enable the two remote shutdown rules shown below.

2014-09-20_21-00-32

4. In Server Manager from the Tools menu select Cluster-Aware Updating.

5. Enter the cluster name (e.g. SDSQL03) and connect.

6. In the right pane select Analyze cluster update readiness.

7. Verify that all results show Passed, except for the machine proxy and the CAU role, which can be ignored. Close the results window.

8. In the right pane select Configure cluster self-updating options.

9. Check the box to add the CAU role.

10. Check the box next to I have a prestaged computer object for the CAU clustered role. Enter the CAU computer object name you created (e.g. SDSQL03-CAU). Click Next.

2014-09-20_21-05-25

11. Configure the schedule as determined by your environment.

12. On the Advanced Options screen check the box to require all nodes online.

13. If desired, on the Additional Options screen check the box to install recommended updates.

14. Click through the remainder of the wizard and wait for the installation to complete.

15. Re-run Analyze cluster update readiness. Verify everything passes except for the proxy warning.

Summary

In this installment we’ve installed and configured the Windows failover clustering role. We’ve also optionally configured Windows Cluster-Aware Updating, which makes patching life easier. In Part 7 we will configure TempDB. Click here for that installment.
