Enable SQL SSL with low-privileged service account

One of the neat security features in SQL 2005 and later is the ability to use an SSL certificate to encrypt off-host SQL Server communications over port 1433. Encrypting communications between your SQL server and your remote applications is strongly recommended. Do you really want credit card data, personal information, or other sensitive data traversing the network in clear text? Probably not. Yes, you could use IPsec between your SQL server and applications, but that’s not for the faint of heart. Once you know the trick to making SSL work with SQL, though, it’s a no-brainer.

First, your computer needs a server authentication certificate with an allowed enhanced key usage of 1.3.6.1.5.5.7.3.1. For simplicity the domains I manage have an auto-enroll policy that doles out machine certificates to all computers. If you don’t know how to create a machine certificate and don’t have auto-enrollment enabled, I’ll leave that as an exercise for the reader to explore. Assuming you have a valid server authentication certificate, we can continue.

SQL best practices urge that the SQL database engine and agent services run under non-privileged service accounts. Should your SQL server get compromised, this helps limit the damage to the underlying operating system. But if you pick a machine certificate to use within SQL Server Configuration Manager, SQL will fail to start.

The failure message on Windows Server 2008 R2 with SQL Server 2008 R2 is Event ID 26014, source MSSQLSERVER. The content of the error is:

Unable to load user-specified certificate [Cert Hash(sha1) “A3B….913C”]. The server will not accept a connection. You should verify that the certificate is correctly installed. See “Configuring Certificate for Use by SSL” in Books Online.

Hmm…why is that? Well, the answer is simple, and so is the fix. The non-privileged service account is attempting to read the private key for the selected certificate and is unable to do so. The solution is to grant the SQL database engine service account read access to the machine certificate’s private key.

How do you do that? Glad you asked!

1. Open a blank MMC and add the Certificates snap-in. Choose the Computer account.

2. Open the Personal certificate store and verify there is a certificate with the FQDN of your SQL server. If not, then no SSL for you!

3. Right click on the certificate and select All Tasks, Manage Private Keys.

4. Click Add and locate the SQL database engine service account. Change the permissions to Read only.

5. Click OK.

6. Open the SQL Server Configuration Manager and expand SQL Server Network Configuration. Open the properties of Protocols for MSSQLSERVER.

7. Change Force Encryption to Yes. Click on the Certificate tab and select the listed certificate from the drop-down. It should be the same one you changed the permissions on in the previous step.

8. Close all windows and restart the SQL service. With any luck the SQL services will start.

9. To verify SQL is using the PKI certificate, check the Windows Application log for event ID 26013. It should read something like:

The certificate [Cert Hash(sha1) “A3B2…7913C”] was successfully loaded for encryption.
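
If you want to double-check from inside SQL rather than the event log, the sys.dm_exec_connections DMV reports whether each current session negotiated encryption. A minimal check (run it from a remote client and look for your own session):

—-
-- encrypt_option = TRUE means the connection is protected by SSL/TLS
SELECT session_id, net_transport, auth_scheme, encrypt_option
FROM sys.dm_exec_connections;
GO
—-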

And there you go! To recap, we assigned an existing machine certificate to the SQL service, changed the permissions on the private key, and restarted the SQL service. Pretty easy once you know the permissions trick.

If you are really paranoid, whip out Network Monitor 3.4 (not Wireshark) and capture a trace during a connection attempt to the SQL server. Under the protocol column you should see a TLS session between the SQL server and your application server.

I noticed that the SQL client that comes with SQL Server 2008 R2 does NOT appear to attempt to negotiate a TLS 1.2 session, unlike IIS 7.5 and IE8 on Windows 7, as mentioned in my prior blog.

SQL 2008 R2: Generate trusted TDE certificate

As previously mentioned in my blog post about SQL TDE (Transparent Data Encryption), the example script I gave used a SQL self-signed certificate to encrypt the database. While this is fine for a demo, you should only use trusted certificates in a production environment.

Getting a trusted certificate inserted into SQL 2008 R2 is harder than it sounds and took quite a bit of digging. BUT, there is a way and it’s not too terrible. Plus, this method can be used with commercial certificate authorities or an internal CA of your choice. It does not rely on a Microsoft CA, but works perfectly fine with one.

1. Download OpenSSL 1.x and install it. Do not use v0.9.x releases as they won’t work.

2. Open a command prompt and type: openssl.exe genrsa 2048 > private.key

3. Type: openssl.exe req -new -key private.key > certificate.csr

4. You will be prompted with a series of questions. Input data as you see fit, but pay attention to the “Common Name”. This will be the subject of the certificate.

5. Open the certificate.csr file and submit it to your favorite certificate authority. It could be a commercial or internal CA. Save the resulting certificate as a DER (not BASE64) file, let’s say certificate.cer.

6. Type: openssl rsa -in private.key -outform PVK -pvk-strong -out private.pvk
– You will be prompted to type a password to protect the private key. Remember the password.

7. Open SQL Management Studio and create a new query. Cut and paste the following query, adjusting the paths, filenames, and the password you used in step #6. You can change “My_New_Certificate” to any name you wish; it’s probably best to use the common name you entered during the certificate request.

—-
CREATE CERTIFICATE My_New_Certificate
FROM FILE = 'D:\certificate.cer'
WITH PRIVATE KEY (FILE = 'D:\private.pvk',
DECRYPTION BY PASSWORD = 'MyPassword');
GO
—-

8. Press the Execute button and you should get “command(s) completed successfully.” If not, triple check your paths, filenames, and password. The error messages are not helpful if you get it wrong.

9. To verify the certificate was actually installed, and to view all the other certificates, execute this query in SQL Management Studio:

—-
use master
go
select * from sys.certificates
go
—-

And there you have it! You can now refer back to my TDE blog post and change the sample script to use this new trusted certificate instead of the self-signed “RMSServerCert“. You should back up the certificate and private key files you imported into SQL and delete all copies from the local hard disk. If you ever need to restore your encrypted database you MUST have these files…no certificate, no data!
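
If you would rather let SQL produce those backup copies itself, the BACKUP CERTIFICATE statement exports the certificate and its private key to files you can store offline. A minimal sketch, assuming the certificate name from the example above and hypothetical output paths (pick a strong password and keep it with the files):

—-
USE master;
GO
-- Exports the certificate plus its private key; without these files an
-- encrypted database backup cannot be restored on another server.
BACKUP CERTIFICATE My_New_Certificate
TO FILE = 'D:\My_New_Certificate.cer'
WITH PRIVATE KEY (FILE = 'D:\My_New_Certificate.pvk',
ENCRYPTION BY PASSWORD = 'An0ther$trongPassword');
GO
—-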

Happy encrypting!

SQL 2008 R2: Transparent Data Encryption (TDE) Example

One of the new features in SQL Server 2008/R2 is Transparent Data Encryption, or TDE. TDE lets you encrypt any database without having to change your application. This means you can fully encrypt databases and log files for SharePoint, RMS, or anything else you wish. For the ultimate in security you can use Microsoft Extensible Key Management (EKM) with a hardware security module (HSM) to store the private keys off-host.

The Thales nShield Connect is an excellent example of a FIPS 140-2 Level 3 and Common Criteria EAL4+ certified HSM that is compatible with SQL 2008 EKM and TDE. (Try saying that ten times fast!) Their SQL 2008 EKM brochure is here.

But for mere mortals who want to encrypt a database or two and don’t want to spend $50,000 or more for an HSM, the script below shows you how to encrypt the three Active Directory Rights Management Services (AD RMS) databases. Just change the database names and you are good to go. There’s nothing unique here about RMS; it’s just an example I worked on today. The certificate used in this example is only self-signed, so for better security I’d recommend you use a trusted certificate. I’ll save how to do that for a future post here!

Just run the script in SQL Management Studio and you should be good to go. Of course, database encryption adds overhead, and you can kiss database compression goodbye as well. So be careful what you encrypt and monitor database performance.

—-
/* Configures SQL 2008/R2 transparent database encryption.
Version 1.1, 7 July 2010 */

/* Configures master database encryption key and certificate */

USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ngPA$$w0rd';
GO
CREATE CERTIFICATE RMSServerCert WITH SUBJECT = 'My TDE Certificate';
GO

/* Encrypts RMS Configuration Database */

USE DRMS_Config_RMS_Root_contoso_net_443
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE RMSServerCert;
GO
ALTER DATABASE DRMS_Config_RMS_Root_contoso_net_443
SET ENCRYPTION ON;
GO

/* Encrypts RMS Directory Services Database */

USE DRMS_DirectoryServices_RMS_Root_contoso_net_443
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE RMSServerCert;
GO
ALTER DATABASE DRMS_DirectoryServices_RMS_Root_contoso_net_443
SET ENCRYPTION ON;
GO

/* Encrypts RMS Logging Database */

USE DRMS_Logging_RMS_Root_contoso_net_443
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE RMSServerCert;
GO
ALTER DATABASE DRMS_Logging_RMS_Root_contoso_net_443
SET ENCRYPTION ON;
GO
—-
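
Encryption runs as a background task after you set ENCRYPTION ON, so large databases take a while to finish. If you want to watch the progress, the sys.dm_database_encryption_keys DMV shows an encryption state and percentage for each database; a quick check looks like this:

—-
-- encryption_state: 2 = encryption in progress, 3 = encrypted
SELECT DB_NAME(database_id) AS database_name,
       encryption_state,
       percent_complete
FROM sys.dm_database_encryption_keys;
GO
—-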

DAT401: SQL 2008 HA/DR case studies

This session covered several proven methods of high availability and disaster recovery for SQL 2008. It focused on case studies of real-world companies, their HA/DR approaches, and metrics from their environments. It didn’t cover any new whiz-bang technology or third-party products. With the proper design, processes, procedures, and highly skilled people, it’s really mind-boggling what companies have done.

There are a few common HA/DR architectures:

– Failover clustering for HA and database mirroring for DR
– Synchronous database mirroring for HA/DR and log shipping for additional DR
– Geo-clustering for HA/DR and log shipping for additional DR
– Failover clustering for HA and SAN-based replication for DR
– Peer-to-peer replication for HA and DR

Each architecture has its own pros and cons. Your business requirements will determine which solution you want to employ. The remainder of the session discussed various case studies.

One case study covered bWin, an online gambling company in Europe. They process 1 million bets a day on over 90 sports. Their SLA is zero data loss and 99.99% availability 24×7, and they have an unlimited IT budget (no kidding). Their design had to take into account a full datacenter failure and complete data loss within that datacenter. Total data is in excess of 100TB across 100 SQL instances, and the environment processes over 450K SQL statements per second.

Their solution, which is highly complex with extreme levels of redundancy, has given them zero downtime in three years, zero data loss, and near 100% verified data availability. Their backup and storage architectures are really mind-blowing. The case study is well worth reading, here. If you want to read about their backup architecture, you can find that case study here. They can back up a 2TB database in 36 minutes. People were starting to laugh in the session at the extreme lengths this company went to to ensure verified zero data loss.

The key takeaway from this case study is that you need to document everything, have processes and procedures in place for every scenario, and have extremely skilled people. The technology is just one small piece of the entire design. It’s really the processes and people that enable these extreme levels of uptime and data availability. You can have all the technology in the world, but if your documentation is poor and you don’t have highly skilled people, you will end up in a world of hurt and miss your SLA targets.

Another case study was ServiceU. In summary, they were able to upgrade from SQL 2005 to SQL 2008 and Windows Server 2003 to 2008, and move to a new SAN and new server hardware, with less than 16 minutes of total downtime. This was accomplished without any virtualization product, through careful planning and orchestration of the upgrades.

Other case studies included QR Limited, Progressive Insurance, and an Asian travel company. The bottom line is that SQL can provide highly robust HA/DR if you have the right architecture, documentation, processes, and highly skilled people.

SQL 2008 Security and vCenter 4

This past weekend I worked on automating our SQL installs and scripting security lockdowns. During that effort I found the new policy evaluation tool in SQL 2008. This built-in tool allows you to run specific checks against your SQL 2008 installation to report on best practices. During my evaluation everything passed with a green light except “Public not granted server permissions“. Since I had no clue what that meant, a little googling turned up this blog.

After implementing both fixes described in the blog, vCenter died and was unable to connect to my SQL instance. Security is great, but not when it kills your mission-critical applications. After some experimentation, I was able to find a combination of the lockdown commands that added some security yet allowed vCenter to operate.

If you read the blog I linked to above, you will see that I was able to execute all of the commands except the one denying public connections over TCP. Since vCenter uses TCP to connect to SQL, it makes sense that I might need to allow public connections over TCP. I was surprised that I was able to deny public the view role to any database and vCenter worked fine, but preventing the TCP connection was bad juju.

DENY VIEW ANY DATABASE to PUBLIC
DENY CONNECT ON ENDPOINT::[TSQL Named Pipes] to public
DENY CONNECT ON ENDPOINT::[TSQL Local Machine] to public
DENY CONNECT ON ENDPOINT::[TSQL Default VIA] to public

Unfortunately, even after these lockdowns the SQL policy evaluation tool still failed me on “Public not granted server permissions.” Only when I locked down everything, including TCP, did I get a green check. How much actual security these commands provide, I have no idea. But given that vCenter still works, I figure the less attack surface the better.
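
If you want to see exactly what public is still granted at the server level, or undo the TCP lockdown because an application (like vCenter) stops connecting, the queries below may help. This is just a sketch; [TSQL Default TCP] is the stock name of the TCP endpoint on a default instance:

—-
-- List every server-level permission granted or denied to public
SELECT pe.state_desc, pe.permission_name, pe.class_desc
FROM sys.server_permissions pe
JOIN sys.server_principals pr ON pe.grantee_principal_id = pr.principal_id
WHERE pr.name = 'public';

-- Restore the default TCP grant if an application breaks
GRANT CONNECT ON ENDPOINT::[TSQL Default TCP] TO public;
GO
—-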

I also found this great blog with lots of SQL scripts for really putting the screws on SQL and hardening it even further. From some quick testing I did with vCenter 4.0, you can safely execute the server-wide scripts with one modification: enable the SQL Agent XPs on line 61. The database lockdown script ran fine as-is on my vCenter server database.

Don’t take my word for it that these lockdowns won’t adversely affect your vCenter. So test, test, test!!!

Scripting SQL Server Firewall Rules

On servers that are running Windows Server 2008 or later, you can take advantage of the built-in two-way firewall. SQL Server firewall rules can be created pretty easily through the GUI, but it can be a bit tedious. So I wrote a little batch file that asks you for an IP address and then opens the SQL port so that it only accepts connections from that IP address.

You can of course extend this to any other program just by modifying the switches, protocols, IPs, etc. It will error out if you don’t supply an IP address, so you don’t get a meaningless rule. A new rule is created each time you run the command; it doesn’t update an existing rule, though that is possible with the netsh command.

@echo off
:: Configures Windows Server 2008/R2 firewall for SQL.
:: Requires a single argument, the IP address of the remote application server that requires SQL access.
:: Usage: SQL-Firewall.cmd <remote IP address>

if [%1]==[] GOTO :ERROR
Echo Configuring Windows Advanced Firewall for SQL to listen on IP %1
netsh advfirewall firewall add rule name="SQL Server (TCP-in)" dir=in action=allow protocol=TCP Profile=domain localport=1433 program="D:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Binn\sqlservr.exe" description="Allows inbound Microsoft SQL connections." remoteip=%1
Exit /B
:ERROR
Echo Please specify IP address.

Automate vCenter/VUM Database Creation Process

After installing and re-installing VMware vCenter and VUM many times (mostly for testing) all of the manual database creation steps got old pretty fast. So I put some effort into creating a T-SQL script for SQL 2008 that automates most of the tasks.

The T-SQL script below does the following:

1. Adds an existing domain service account for vCenter to SQL. Change the value in the SET command.
2. Grants this account db_owner rights on msdb. Required for vCenter/VUM installation.
3. Creates separate vCenter and VUM databases. Feel free to change their sizes.
4. Sets the ownership of the new databases to the service account.

It’s up to the installer to remove the db_owner rights on the MSDB database after you get done installing vCenter and VUM. I apologize for the line wrapping and weird spacing. It might take a bit of futzing to get it all looking pretty. Of course you will need to tweak the paths and any other information as required.

DECLARE @login_name NVARCHAR(50)
SET @login_name = 'contoso\svc-020-VCTR01'
EXEC('CREATE LOGIN [' + @login_name + '] FROM WINDOWS')

USE MSDB

EXEC sp_grantdbaccess @login_name
EXEC sp_addrolemember db_owner, @login_name

USE master
create database "vCenter Server"
on
( name = 'vCenter Server',
filename = 'K:\Data\MSSQL\vCenter_server.mdf',
size = 5000MB,
filegrowth = 250MB )

log on
( name = 'vCenter Server log',
filename = 'L:\Logs\vCenter_server.ldf',
size = 200MB,
filegrowth = 20MB )

COLLATE SQL_Latin1_General_CP1_CI_AS;

create database "vCenter Update Manager"
on
( name = 'vCenter Update Manager',
filename = 'K:\Data\vCenter_Update_Manager.mdf',
size = 250MB,
filegrowth = 25MB )

log on
( name = 'vCenter Update Manager log',
filename = 'L:\Logs\vCenter_Update_Manager.ldf',
size = 25MB,
filegrowth = 2MB )

COLLATE SQL_Latin1_General_CP1_CI_AS;

EXEC('ALTER AUTHORIZATION ON DATABASE::"vCenter Server" TO [' + @login_name + ']')

EXEC('ALTER AUTHORIZATION ON DATABASE::"vCenter Update Manager" TO [' + @login_name + ']')

GO
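
And once vCenter and VUM are installed, don’t forget the cleanup step mentioned above. Here’s a quick sketch of revoking the temporary msdb rights, using the same hypothetical service account name from the script:

—-
USE MSDB
GO
-- Remove the temporary db_owner rights granted for the vCenter/VUM installers
EXEC sp_droprolemember 'db_owner', 'contoso\svc-020-VCTR01'
GO
—-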

Make SQL 2005/2008 Thin Provisioning Friendly

Today I was reading a whitepaper by 3PAR regarding SQL 2008 performance on their storage arrays. It’s an interesting comparison of wide striping, RAID levels, mixed workloads, and number of disks. While browsing through the document I stumbled upon a SQL 2005/2008 feature which I was unaware of, but is important in thin provisioned environments.

Typically with SQL 2005/2008 when you create a database, say 100GB, SQL creates the entire file and zeros out all of the contents. Depending on the database size, this can take a long time. While this is happening, your storage array sees all of this data being written and allocates storage to that volume. Even though the database is empty, since zeros were written the array thinks it is holding data. Clearly, this defeats thin provisioning.

3PAR mentioned a feature in SQL 2005/2008 called Instant File Initialization. If you enable this feature, when you create a new database no zeros are written so creation is nearly instant! Imagine that. This is thin provisioning friendly, since virtually no data is written until the application needs to store data. So if you create a 100GB SharePoint database, your storage array will only allocate storage as SharePoint fills up the database.

Pretty nifty! So how do you enable this? According to the SQL 2005 security best practices whitepaper, you grant the group SQLServer2005MSSQLUser$MachineName$MSSQLSERVER the “Perform volume maintenance tasks” user right on the SQL server. This can be done via GPO or by editing the machine’s local security policy. Be sure to restart the SQL services so SQL knows you granted this right. If you are using SQL 2008, the group is called SQLServerMSSQLUser$MachineName$MSSQLSERVER.

Now when you create a 100GB database, it could literally take a couple of seconds instead of many minutes. Even if your storage array isn’t thin provisioning enabled, if you are using VMware vSphere 4.0 and thin provisioned virtual disks, you can gain the same benefit.
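
If you want proof that instant file initialization is actually kicking in, rather than just timing the create, a common trick is to turn on trace flags 3004 and 3605, which log file-zeroing activity to the SQL error log. A rough sketch using a throwaway database (the name and the flags-on/off dance are just for the test):

—-
DBCC TRACEON(3004, 3605, -1);   -- 3004 reports zeroing work, 3605 routes it to the error log
GO
CREATE DATABASE IFI_Test;       -- throwaway test database
GO
EXEC xp_readerrorlog;           -- with IFI working, only the .ldf should show "Zeroing" messages
GO
DBCC TRACEOFF(3004, 3605, -1);
DROP DATABASE IFI_Test;
GO
—-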

The feature works on Standard as well as Enterprise editions of SQL, and on Server 2003 and later. Log files are not instantly initialized and will still be fully zeroed. But your log files are generally just 10 to 20 percent of the total database size, so you still save a lot of space.

Nothing is free in life, so there may be a slight performance penalty when new data is written to the database since space was not previously allocated. Microsoft discusses the feature here, and encourages using this feature to enhance performance.

Reluctant to virtualize SQL 2008? Don’t be!

One of the best sessions to date at VMworld discussed virtualizing SQL 2008 on vSphere. VMware has written a lengthy whitepaper going over the gory details. In short, you will likely get 86% or better of bare-metal performance if you virtualize SQL with vSphere, even for the most demanding workloads. This is in stark contrast with prior versions of ESX, which had much more of a performance hit.

However, to achieve that level of performance, careful planning and some tweaking of the environment are needed, along with the latest processors from Intel and AMD. Storage performance is key, so following your storage array vendor’s best practices for physical SQL servers is still recommended. In addition to storage best practices, other tweaks are recommended as well. These include:

– Modify SQL to use fixed memory allocation (see the sketch after this list)
– Create a memory reservation in ESX for the SQL VM
– Use separate vSCSI devices for OS, database and log drives
– Use paravirtualized SCSI adapter for the database and log drives (can’t use for OS)
– Use large 2MB pages for SQL, especially on 64-bit systems
– Create DB/log VMDK disks with the eager-zeroed thick format
– Set maximum queue depth on HBA
– Make sure HW assisted virtualization is enabled on the VM
– Enable vmxnet TX coalescing
– Disable SQL server statistics
– Use fiber-mode Windows scheduling
– Use RAW LUNs over NTFS
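
As an illustration of the first item in the list, fixed memory allocation in SQL Server is usually done by pinning min and max server memory to the same value with sp_configure. A minimal sketch, assuming an 8GB memory reservation for the SQL VM (substitute your own reservation size):

—-
-- Pin the buffer pool so it matches the ESX memory reservation for the VM
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 8192;
EXEC sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;
GO
—-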

Performing all of these tweaks can boost SQL performance 20% or more over a plain-vanilla installation. In short, vSphere now lets customers virtualize SQL and gain all the benefits of virtualization with very little performance penalty. Remember that with Server 2008/R2 you can perform hot memory add (Enterprise edition) and hot CPU add (Datacenter edition) as well.

For a good whitepaper on general vSphere performance tweaks, see this link.

Slipstream SQL 2008 SP1

One new feature of SQL 2008 SP1 which I didn’t know about was the ability to slipstream a service pack right into the installation routine, like you can with operating systems.

This will make deployment quicker, faster, and less error prone. The full set of instructions can be found at this MSDN blog.