SIA338: Authentication & Passwords, The Good, The Bad & The Really Ugly

One of the sessions that I attended today was by the very popular Marcus Murray. He’s a Swedish security expert and Microsoft security MVP. I heard him speak a couple of years ago at TechEd Orlando and was really blown away by his content and presentation style. Thankfully Microsoft let him come back and do another session this year. Some of the content was from his Orlando session, but he also included some great new material. You can download his whole presentation from his blog, so I’ll just summarize a few of his attack demos:

Pass the Hash. This was one of the demos he did in Orlando, and it scared the heck out of everyone. Although it’s not a new attack vector, the results are scary. In a nutshell, this is how it works: whenever someone, such as a domain administrator, logs onto a Windows machine, their password hash gets stored on that computer. If another user or attacker can get local admin access, there are tools that can dump that hash. You can then take that hash (with another tool) and impersonate that user to other servers on the network, compromising them. If you can grab a domain admin account hash, you have instantly 0wn3d the domain. And if you can access a domain controller, you can dump the hashes for everyone in the entire domain and impersonate anyone at will.

After TechEd Orlando I went through the entire exercise in my home lab and was able to duplicate the results and escalate from a local workstation administrator to a domain admin in less than 30 seconds. If you combine these tools with a web-based attack and local privilege escalation techniques, an attacker could remotely 0wn your entire corporate domain in a matter of seconds. Think about that for a little bit! No password cracking required, and password length is irrelevant.
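
A quick way to gauge your exposure (my own sketch, not one of Marcus’ tools) is to check which accounts have profiles on a machine, since anyone who has logged on interactively may have a reusable hash sitting there:

```powershell
# List which accounts have logged onto this machine (i.e., have a local
# profile). Any domain admin in this list makes the box a stepping stone.
$profiles = Get-WmiObject Win32_UserProfile | Where-Object { -not $_.Special }
foreach ($p in $profiles) {
    try {
        $sid = New-Object System.Security.Principal.SecurityIdentifier($p.SID)
        $account = $sid.Translate([System.Security.Principal.NTAccount]).Value
        "{0} -> {1}" -f $account, $p.LocalPath
    } catch {
        "Unresolvable SID: {0}" -f $p.SID
    }
}
```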

Passing the Dutchie. This is similar to pass the hash, but the attacker uses an untrusted computer that is not part of the domain. The attack is a man-in-the-middle SMB relay that reuses NTLM credentials to access resources on domain-joined computers.

Computer Account default password. Sometimes deployment tools or other methods of creating a computer account in AD improperly configure the password and set it to the name of the computer. If for some reason the password doesn’t get changed (machine never joined to the domain, etc.), this computer account can be used to access network resources. Marcus said 95% of his clients have this problem with at least one computer account in AD, so chances are your network is vulnerable to this attack!
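
You can audit for this yourself. Here’s a minimal sketch (mine, not whatever tool Marcus uses) that attempts an LDAP bind as each computer account using its own name as the password; the DC name and the lowercase-name guess are assumptions:

```powershell
# Flag computer accounts that still accept their name as their password.
# Assumes the Server 2008 R2 AD module and a reachable DC named DC1.
Import-Module ActiveDirectory
foreach ($c in Get-ADComputer -Filter *) {
    $user = "$env:USERDOMAIN\$($c.SamAccountName)"   # e.g. CONTOSO\PC01$
    $pass = $c.Name.ToLower()                        # candidate default password
    $entry = New-Object System.DirectoryServices.DirectoryEntry("LDAP://DC1", $user, $pass)
    try {
        [void]$entry.NativeObject                    # forces the bind attempt
        Write-Warning "$($c.Name) still has its default password!"
    } catch { }                                      # bind failed = account is fine
}
```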

Passing the Google Cookie. In this attack Marcus snatched Google Docs cookies from the network (WiFi or wired), injected them into IE, and was able to impersonate the user and access their documents. This is similar to pass the hash, but it uses cookies and the Google Docs web site. Google doesn’t send these cookies over SSL, so beware! You can see a video of this attack on his blog at the link I provided above.


One of the tidbits that was brand new to me is the concept of Authentication Assurance in Windows Server 2008 R2 using smart cards. In a nutshell, this is a new feature in Windows Server 2008 R2 that lets Windows dynamically add you to a security group if you log on with a smart card issued from a particular certificate template. If you use this security group to ACL resources, such as a file share, then attacks like pass the hash and pass the Dutchie are foiled, since a smart card was not used for authentication and the attacker isn’t a member of the special security group. This could be extended with IPsec to only allow users that authenticate with a smart card to access any resources on a particular computer.
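
From what I gathered, the linkage is configured by pointing the certificate issuance policy’s OID object in the Configuration partition at a universal security group. A hedged sketch, assuming the AD module and illustrative names:

```powershell
# Link an issuance policy OID to a group so smart card logons under that
# policy add the group to the user's token. Group and policy names are
# illustrative assumptions.
Import-Module ActiveDirectory
$config = (Get-ADRootDSE).configurationNamingContext
$group  = Get-ADGroup 'SmartcardLogonUsers'
$oid    = Get-ADObject -SearchBase "CN=OID,CN=Public Key Services,CN=Services,$config" `
                       -Filter { displayName -eq 'Smartcard Assurance Policy' }
Set-ADObject $oid -Replace @{ 'msDS-OIDToGroupLink' = $group.DistinguishedName }
```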

Another countermeasure that he mentioned was a set of new settings in Windows 7 and Server 2008 R2 that can audit and even eliminate the use of NTLM on the network. There are new group policy settings that you can use to block all NTLM authentication. This is easier said than done, since some applications or methods of accessing resources don’t work with Kerberos, such as accessing a file share via IP address instead of a hostname. Even if you can’t fully disable NTLM for all use cases, you can at least start auditing to get a handle on how widely it’s used on your network. This is one reason why I always try to Kerberos-enable ANY application on the network, such as SQL, SharePoint, IIS, vCenter, etc.
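
As a hedged example of the audit-first approach, the audit knob can also be flipped locally in the registry (value name from my notes; verify against the actual “Network security: Restrict NTLM” GPO settings before relying on it):

```powershell
# Turn on NTLM auditing for all inbound authentication on this server.
# Events land under Applications and Services Logs > Microsoft > Windows > NTLM.
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0'
Set-ItemProperty -Path $key -Name AuditReceivingNTLMTraffic -Value 2 -Type DWord  # 2 = audit all accounts
```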

DAT401: SQL 2008 HA/DR case studies

This session talked about several proven methods of high availability and disaster recovery for SQL 2008. It focused on several case studies of real-world companies, their HA/DR approach, and metrics from their environments. It didn’t cover any new whiz-bang technology or third-party products. With the proper design, processes, procedures, and highly skilled people, it’s really mind boggling what companies have done.

There are a few common HA/DR architectures:

– Failover clustering for HA and database mirroring for DR
– Synchronous database mirroring for HA/DR and log shipping for additional DR
– Geo-clustering for HA/DR and log shipping for additional DR
– Failover clustering for HA and SAN-based replication for DR
– Peer-to-peer replication for HA and DR

Each architecture has its own pros and cons. Your business requirements will determine which solution you will want to employ. The remainder of the session discussed various case studies.
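
As a small aside, if you’re unsure where you stand with one of the mirroring-based architectures, a sketch like this (mine, not from the session) will report the mirroring state of every mirrored database on an instance:

```powershell
# Query the sys.database_mirroring DMV for mirroring role and state.
# The server name is a placeholder; any SQL client would do equally well.
$conn = New-Object System.Data.SqlClient.SqlConnection("Server=SQL01;Database=master;Integrated Security=SSPI")
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = @"
SELECT DB_NAME(database_id) AS db, mirroring_role_desc, mirroring_state_desc
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL
"@
$reader = $cmd.ExecuteReader()
while ($reader.Read()) {
    "{0}: {1} / {2}" -f $reader["db"], $reader["mirroring_role_desc"], $reader["mirroring_state_desc"]
}
$conn.Close()
```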

One case study, bWin, is an online gambling company in Europe. They process 1 million bets a day on over 90 sports. Their SLA is zero data loss and 99.99% availability 24×7, and they have an unlimited IT budget (no kidding). Their design had to take into account a full datacenter failure and complete data loss within that datacenter. Total data is in excess of 100TB across 100 SQL instances, and the environment processes over 450K SQL statements per second.

Their solution, which is highly complex with extreme levels of redundancy, has given them ZERO downtime in three years, zero data loss, and near 100% verified data availability. Their backup and storage architectures are really mind blowing. The case study is well worth reading, here. If you want to read about their backup architecture specifically, you can find that case study here. They can back up a 2TB database in 36 minutes. People were starting to laugh in the session at the extreme lengths this company went to in order to ensure verified zero data loss.

The key takeaway from this case study is that you need to document everything, have processes and procedures in place for every scenario, and have extremely highly skilled people. The technology is just one small piece of the entire design; it’s really the processes and people that enable these extreme levels of uptime and data availability. You can have all the technology in the world, but if your documentation is poor and your people aren’t highly skilled, you will end up in a world of hurt and miss your SLA targets.

Another case study was ServiceU. In summary, they were able to move from SQL 2005 to SQL 2008 and Windows Server 2003 to 2008, onto a new SAN and new server hardware, with less than 16 minutes of total downtime. This was accomplished without any virtualization product, through careful planning and orchestration of the upgrades.

Other case studies included QR Limited, Progressive Insurance, and an Asian travel company. The bottom line is that SQL Server can provide highly robust HA/DR if you have the right architecture, documentation, processes, and highly skilled people.

SIA335: Death of security: Breached security, stolen data and IP espionage

This was a KILLER session by Laura Chappell, who is the author of Wireshark Network Analysis. I’ve never heard her present before, but she really rivals Mark Minasi in every way. Her presentation was filled with great tidbits of information, including:

– Full duplex taps – These are much preferred over spanning/mirroring ports on switches. Why? Spanning or mirrored ports will not pass MAC-level malformed packets, such as ones with CRC errors. A full duplex tap passes data bit-for-bit, including any MAC errors. The best taps are by Netoptics.

– Wireshark has a geomapping function so you can see where a particular IP address is located in the world.

– Join HTCIA before your company or organization suffers a security breach. Become friendly with your local law enforcement people so you have contacts WHEN (not if) you are compromised.

– If you want to use a netbook for Wireshark analysis, be extremely careful which model you choose. Most have special NICs that do NOT operate in promiscuous mode. The Asus Eee PC900 works perfectly with Wireshark.

– Hurricane Search is a great tool for searching all the contents of the files on your computer for specific strings.

– Check out ettercap for creating man-in-the-middle attacks.

– On your home computer, run a Wireshark trace for a full 24 hours. Perform a careful traffic analysis to determine if there is any unusual traffic. Anti-virus software can be fooled pretty easily by malware, so just because you have a quality AV program doesn’t mean your machine is secure or hasn’t been compromised.
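
If you want to try the 24-hour home capture, here’s a hedged sketch using Wireshark’s bundled dumpcap (install path and interface number are machine-specific):

```powershell
# List capture interfaces, then capture for 24 hours (86,400 seconds) into a
# ring buffer of 24 x 100MB files so the disk doesn't fill up.
& 'C:\Program Files\Wireshark\dumpcap.exe' -D
& 'C:\Program Files\Wireshark\dumpcap.exe' -i 1 `
    -a duration:86400 `
    -b filesize:102400 -b files:24 `
    -w C:\captures\home24h.pcap
```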

– Macof can be used to flood a switch’s MAC address table to turn it into a hub.

– The price of your personal information (DOB, SSN, mother’s maiden name, etc.) is approximately $3 on the black market. Your credit card number, security code, and billing address can go for as little as ten CENTS.

– Companies are being held hostage by hackers who hold their intellectual property for ransom. In fact, some companies feel it is better to pay $10,000 a month or more to a hacker as insurance against being hacked by that person. Yes, really!

– Be familiar with chain of custody, evidence preservation, and what to do when you realize you’ve been compromised. This is not IF, but WHEN. In fact, you may only detect sloppy hacking attempts. The good hackers go undetected. So you may already be compromised today but have no clue!

– You can download a plethora of Wireshark captures, filters, and other goodies that Laura has put together here. Even if you don’t buy her book, check out these downloads!

– Everyone should learn to use nmap and perform regular authorized scans on their networks.
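
A couple of typical authorized scans might look like this (placeholder subnet; get written permission first):

```powershell
# Host discovery: which addresses respond on the subnet?
nmap -sP 10.0.0.0/24
# Baseline port/service/OS scan, saving results in all output formats.
nmap -sS -sV -O -T3 -oA baseline 10.0.0.0/24
```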

She also had a lot of hacker stories and real-world examples of compromised systems and stolen data. She’s also working on a book called “Calling Tech Support,” a collection of extremely funny stories about people calling various tech support numbers (airlines, computer companies, etc.), asking off-the-wall questions, and getting back really weird responses. Look for it to come out later this year.

If you attended TechEd 2010 North America, I urge you to download the recorded session and listen to it. The slide deck doesn’t begin to cover the material, so listen to the whole 75-minute presentation. You will get some great laughs and learn some great information.

UNC305: OCS 14 Setup and Deployment

This was a fairly dry session covering the setup and deployment of OCS 14. I won’t go through all of the somewhat boring details, but will cover a few juicy tidbits that I learned.

– The setup of OCS 14 is a completely new experience, 100% different from any previous version of OCS/LCS.

– The presence database is extremely sensitive to performance issues and should be accessed entirely from memory on the SQL server. If the database is not entirely in memory, users can experience delays in presence updates.

– Front-end and back-end servers should have two NICs, at least 1Gb in speed. This is to separate OCS traffic from general traffic such as RDP and backups.

– OCS 14 supports Server 2008 R2 read-only DCs, but doesn’t gain any benefit from them.

– Service accounts have been eliminated. OCS services now run as “network service”. This breaks Kerberos authentication, so if you need to use Kerberos instead of NTLM then you need to do some manual configuration steps.

– There is a tool called the topology builder that asks you a lengthy series of questions about your planned OCS topology: roles, hostnames, SIP domains, capacity, required level of redundancy, etc. It uses this information to build a visual diagram of the topology. You then export this information into another tool that creates a configuration database, and when you set up the additional OCS servers, that topology information pretty much automates the remaining installs. This whole concept of pre-building a topology and then automating server installs is very slick, and it’s the first time I’ve ever seen this method. Maybe future versions of Exchange or SharePoint will use it?

– There’s a new wizard for configuring OCS certificates. OCS certificates are always very tricky given all of the hosts, pool names, hardware load balancer names, etc. This wizard asks you about all of these details, pre-populates many values, then can submit the request online to a CA or create a certificate request file for offline use. Very slick, and it should ease a lot of problems with certificates.


Although the presenter was showing beta code, the setup seemed fairly straightforward and radically different from any other Microsoft product. I look forward to trying out a beta to see how well it really works.

SIA311: RMS in Server 2008 R2 and beyond

This session covered some new cool features of RMS when combined with Windows Server 2008 R2 and Exchange 2010. Some of the new features include:

– AD RMS bulk protection tool. This tool can bulk encrypt and decrypt Microsoft Office files and attachments within PSTs. The tool can be extended to other file formats (like PDF) with third-party IRM protectors. For example, Foxit makes a PDF protector.

– Windows Server 2008 R2 has a new feature called the File Classification Infrastructure (FCI). This highly customizable service searches file contents and can perform almost any action, including protecting a file with RMS. For example, you can set up a regular expression to search for credit-card-looking strings and automatically apply a policy. If you want near real-time protection, see this blog post. Titus Labs has some additional FCI add-ons you can find here.
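
The actual rules are built in the File Server Resource Manager console, but as a freestanding illustration of the kind of content rule involved (the regex is deliberately loose, not production-grade, and a real rule should also do a Luhn check):

```powershell
# Find files containing credit-card-looking strings: roughly 16 digits with
# optional space/dash separators. Share path and extensions are placeholders.
$pattern = '\b(?:\d[ -]?){15}\d\b'
Get-ChildItem -Path \\server\share -Recurse -Include *.txt,*.csv,*.log |
    Select-String -Pattern $pattern -List |
    Select-Object Path
```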

– RMS can now be deployed and managed with PowerShell.

– The presenter covered several Exchange 2010 integration points, nearly all of which I’ve covered in other blogs this week. One that I hadn’t mentioned is the ability to mark a voice mail as private, which prevents it from being forwarded to anyone else. RMS integration with OWA supports IE, Firefox, Safari, and Chrome. Exchange 2010 SP1 will provide the ability to preview RMS-protected attachments in OWA. There are also enhancements to cross-premises IRM support for Exchange Online and the Microsoft Federation Gateway.

– The next version of Mac Office (probably 2011) will provide full support for RMS-protected documents, templates, and emails. No firm release date, but probably next year.

– The speaker also mentioned a company, Gigatrust, which enhances RMS to support additional file types.


The most powerful feature of RMS on Server 2008 R2 is the File Classification Infrastructure. Microsoft has also partnered with RSA to provide an RMS integration solution for data loss prevention (DLP). See more information here.

If you are tired of the constant security problems with Adobe Reader and Acrobat, I was very pleased to hear Foxit has full RMS support via their PDF Security Suite. Personally I’m tired of the Adobe bloat, nearly weekly security problems with their products, and very poor cumulative patching mechanism.

MGT310: Microsoft Service Manager 2010 Architecture and Deployment

Released a little over a week ago, Microsoft System Center Service Manager is a product that ties together Configuration Manager, Operations Manager, helpdesk ticketing functionality, change control tracking, and configuration change auditing. Since the product is literally brand new, not many people know what it is. There’s a session tomorrow that covers all the features, so at this point I don’t know a lot about it.

However, the session today covered the installation, basic configuration, and touched on a few features. Here are a few highlights:

– Provides a user self-service portal for opening help tickets, monitoring the status of tickets, and searching knowledge bases.

– Provides direct connectors to SCCM, SCOM, and AD. Its database is based on the SCOM schema, and it pulls in a lot of asset, configuration, and operational data from SCCM and SCOM.

– Can automatically create tickets from SCOM alerts, and send e-mail alerts.

– Topology can scale to very large environments, up to 50,000 users.

– The ‘engine’ is based on management packs, exactly like the ones SCOM uses. This makes the product very extensible and customizable. Dozens and dozens of MPs come out of the box.

– Like Exchange 2010 and OCS 14, administration is based on an RBAC model for granular delegation. The delegation wizard has a ton of options for scoping a role and its permissions. For example, a helpdesk person could be limited to seeing only the assets and tickets for a particular set of users or servers.

– Tight PowerShell integration.

– General Mills was an early adopter and configured the product in two hours; they said it delivered the tight integration with SCCM and SCOM that they hadn’t gotten in 14 years with Remedy.

– The product can be easily customized using wizards. Additional fields, object types, and roles can be added.

– Later this year Microsoft will be releasing a compliance management suite.

– The product is easily extensible by ISVs, such as Provance.


Service Manager looks like a great product that really ties together the operations side of IT (SCCM and SCOM) with the user-facing helpdesk. Automated ticketing, easy installation, and extensibility round it out. The architecture is very modular and can provide high scalability and reliability via clustering and hardware load balancers.

WSU301: Administrator best practices

This session focused on a ton of tips and tricks for Windows and AD administration. The speaker went 100 MPH, and it would be impossible to cover everything, but he will be uploading his presentation slide deck, scripts, and other goodies to his website here. He also wrote a book on Windows administration that you can buy from Amazon here. Since he will be uploading the slide deck, I’ll just touch on a few highlights to whet your appetite.

One of his main topics was role-based administration of Active Directory. This can be accomplished with careful planning and no third-party tools. His method for role-based AD delegation was actually very similar to a method I used with a client a few years ago. Group naming standards are critical here, and an organization must really adhere to them.

For group naming, he suggested a convention of using a prefix to define the purpose and a suffix to define the access level. For example, on a file share you could have a group called ACL_HR-Data_Full-Access. To find all groups that apply permissions to objects, you can simply search for “ACL_”. Groups that control permissions on GPOs could be prefixed GPO_, computer groups COMP_, and so on.
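
Putting the convention to work is then trivial. A minimal sketch with the Server 2008 R2 AD module, using the example prefixes above:

```powershell
# Enumerate every group that grants share permissions, and every group that
# controls GPO permissions, purely by naming convention.
Import-Module ActiveDirectory
Get-ADGroup -LDAPFilter '(cn=ACL_*)' | Sort-Object Name
Get-ADGroup -LDAPFilter '(cn=GPO_*)' | Sort-Object Name
```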

I also learned about a feature in Active Directory called “notification-based replication.” This feature, which has been around for years, allows you to override the default 15-minute site-to-site replication interval and make replication near real-time. The speaker has a customer with a 37-second global AD convergence time. Yes, any AD change is replicated globally within 37 seconds. If you have the bandwidth between sites, this can be a great feature. You can find instructions on how to enable this feature, on a link-by-link basis, here.
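
For reference, the mechanism behind this is setting bit 1 (change notification) on the options attribute of each site link object. A hedged sketch with an assumed link name:

```powershell
# Enable inter-site change notification on one site link. Repeat per link,
# and make sure you have the WAN bandwidth for it.
Import-Module ActiveDirectory
$config = (Get-ADRootDSE).configurationNamingContext
$link = Get-ADObject "CN=HQ-Branch,CN=IP,CN=Inter-Site Transports,CN=Sites,$config" -Properties options
Set-ADObject $link -Replace @{ options = ($link.options -bor 1) }
```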

The speaker also covered many MMC customizations to make your life easier, how you can disable the local administrator account but still use it in safe mode, and a ton of other tips and tricks. I highly recommend you check out his slide deck at the link above after he posts it. I can only imagine what’s in his book. Also, check out the May edition of Windows IT Pro magazine, as he has a large article in there covering many of these features.

SIA306: Server 2008 R2 Active Directory Recycle Bin

This session, Active Directory Recycle Bin, was presented by Mark Minasi, who is always a riot to listen to. In addition to really knowing his stuff, he’s probably in the top two TechEd presenters for style. Guaranteed laughs!

Prior to Windows Server 2008 R2, when you delete an object it gets stripped of most of its attributes and is put in a special hidden container called “Deleted Objects.” For example, if you delete a user, virtually every property except the SAM account name is removed. Password, title, office, name…all gone! If you restore the object, you need to re-populate the attributes. Yes, you could do an authoritative restore of the object, but in large environments this can take a significant amount of time and requires taking a DC offline.

Starting with Windows Server 2008 R2, if your entire forest is at the Windows Server 2008 R2 functional level, there’s a new feature called the Active Directory Recycle Bin. Unlike previous versions of the operating system, all attributes on deleted objects are preserved. Group membership, name, previous OU location, etc. are all retained. Nifty, eh?

But the kicker is that this new feature is not enabled by default, and only objects deleted after you enable it can be restored. So as soon as your forest is at the 2008 R2 functional level, turn on this feature.
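
Enabling it is a one-liner with the AD module (forest name is a placeholder), and note that enabling is a one-way operation:

```powershell
# Enable the AD Recycle Bin forest-wide. This cannot be undone.
Import-Module ActiveDirectory
Enable-ADOptionalFeature -Identity 'Recycle Bin Feature' `
    -Scope ForestOrConfigurationSet -Target 'contoso.com'
```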

How does object deletion work at the 2008 R2 forest functional level? For the first 180 days after an object is deleted, it sits in the recycle bin and you can easily restore it, attributes intact. After 180 days it is moved to the Deleted Objects container and tombstoned, then permanently deleted after another 180 days. So any deleted object is retained in AD for a total of 360 days.

Mark covered several methods of restoring objects, using PowerShell and Ldp. Since those methods are a bit tedious (a minimal PowerShell example follows the list below), there’s a GUI way to do it. If you download PowerGUI and then the Active Directory Recycle Bin PowerPack, you can do several tasks from a friendly GUI:

– Restore a deleted object (original location)
– Restore a deleted object to a different location
– Permanently delete an object
– Empty the recycle bin
– Enable the recycle bin
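
For comparison, the raw PowerShell restore that the PowerPack wraps looks roughly like this (user name is a placeholder):

```powershell
# Find a deleted user by SAM account name and restore it, attributes intact.
Import-Module ActiveDirectory
Get-ADObject -Filter 'samAccountName -eq "jsmith"' -IncludeDeletedObjects |
    Restore-ADObject
```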

Happy undeleting!

UNC302: Exchange 2010 RBAC

This session focused on the all-new permissions model of Exchange 2010/SP1. Back in the days of Exchange 2003/2007, Microsoft had a very, very limited permission model that just consisted of a handful of rights. This caused problems in organizations where you want to granularly delegate permissions to specific groups such as helpdesk, server admins, or various Exchange admins. To address these concerns, Microsoft has introduced a full RBAC (role based access control) model to Exchange 2010.

Features of this model include:

– Aligns organizational structure with Exchange roles and responsibilities.

– Replaces the AD-centric model of previous Exchange versions.

– Provides a consistent authorization model whether you are using the EMC, ECP, or EMS.

– SP1 provides over 65 roles to choose from.

– The model consists of the following three components:

1. Role – What you can do, such as change attributes on a DL or manage a server.
2. Scope – Where and on what type of objects you can act (users, groups, OU, database, etc.)
3. User – The groups or users that are added to the various roles

– The “Exchange Trusted Subsystem” group performs all actions, be it in AD or on servers. Exchange 2010 acts as a proxy and only allows authorized users to perform authorized tasks on objects within their scope. No more futzing around with individual ACLs on objects in AD or servers.

– SP1 will enhance RBAC by providing a best practices analyzer to find gaps in your RBAC permissions (such as only allowing administration of a specific object by a group, but that group is empty).

– SP1 will also add enhancements to the Exchange Control Panel to more fully manage RBAC roles, members, and permissions.

– SP1 will facilitate the split Exchange/AD-Windows model so that you can strictly limit what administrators can do (such as having AD only admins and Exchange only admins).

All in all, the new RBAC model is a major change from previous Exchange versions. This aligns with the all-new RBAC model in OCS “14” as well. Organizations can now more granularly assign permissions, not over-delegate rights, and better audit access into these critical services.
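
To make the Role/Scope/User model concrete, here’s a hedged EMS sketch (names are illustrative, not from the session) that scopes a helpdesk role group to one set of recipients:

```powershell
# Scope: recipients who are members of a "served users" group.
New-ManagementScope -Name "Helpdesk Users" `
    -RecipientRestrictionFilter "MemberOfGroup -eq 'CN=HelpdeskServed,OU=Groups,DC=contoso,DC=com'"
# Role + User: a role group holding the Mail Recipients role, limited to that scope.
New-RoleGroup -Name "Tier1 Helpdesk" -Roles "Mail Recipients" `
    -CustomRecipientWriteScope "Helpdesk Users" -Members "jsmith","mjones"
```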

UNC311: OCS "14" Architecture

This session was really stellar, and covered a lot of good technical details. The session was so information packed that I’ll just highlight a few of the major new features or enhancements in OCS 14.

– The Central Management Store is a completely new feature that schematizes the definition of the deployment topology. What the heck does that mean? It means the entire OCS deployment topology and key configuration details are stored in an XML file. This XML file is replicated to all topology nodes, including Edge servers. This helps prevent misconfiguration, enables topology validation, and provides a more reliable discovery method for your OCS services.

– The Survivable Branch Appliance (SBA) is a hardware appliance provided by third-party vendors such as HP, Dialogic, and others to provide dial-tone functionality to remote offices should the WAN connection to the datacenter fail. The SBA is designed to be drop-dead easy to configure and easily installed at a remote location.

– Previously Microsoft focused on huge OCS deployments and typically required several servers to support all of the roles since some could not co-exist with each other. In OCS 14 the topology has been simplified and more roles can co-exist with each other. This is good news for smaller deployments as it will require less hardware. On the flip side, OCS can now scale up to almost unlimited size, multi-datacenter deployments, and provide global voice/video/IM support.

– Microsoft will release a rich planning and topology building tool to help you size servers, see what roles you need, and better plan your OCS 14 deployment. Previous releases of OCS had fairly poor planning tools.

– Like Exchange 2010, the OCS 14 admin tools are built on PowerShell, so anything you can do in the GUI you can now script. This got a round of applause from the audience. As a side note, the OCS control panel (admin tools) is now Silverlight-based and no longer uses the MMC. This is a great boost to usability, as the MMC interface was clunky and made it hard to find all of the settings.

– Full RBAC (role based access control) is now baked into the product. Out of the box there are 14 roles. You can now granularly delegate tasks to the helpdesk, server admins, and other entities within your organization while protecting security.

– Monitoring of OCS has been greatly enhanced. Even with SCOM, previous versions of OCS had very limited monitoring capability. OCS 14 fully supports synthetic transactions and a much richer SCOM management pack. For example, you could have SCOM run synthetic tests, such as IMing between sites or placing voice calls and checking call quality, all in an automated fashion, and get alerted to any problems. If you scheduled these tests nightly at, say, 4AM, you could come into work already aware of potential problems before users complain. You can now proactively monitor OCS.

– All roles, including media processing, are now supported on Hyper-V R2 and VMware ESX. Microsoft made many changes to OCS to support a virtual environment and better process real-time data without packet loss. At this time live migration with Hyper-V R2 is not supported, as connections will be dropped. He didn’t mention whether VMware vMotion suffers from the same issue.

– OCS 14 supports very robust DNS load balancing. While this does not eliminate the need for a hardware load balancer (HLB), it greatly reduces the dependency. Apparently HLBs caused many support calls, so all OCS components, even the client, use DNS load balancing in a smart way.
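
The DNS piece itself is nothing exotic; a pool FQDN that load balances across front ends is just multiple A records. A hedged illustration (server, zone, and IPs are placeholders):

```powershell
# Three A records for one pool name; clients pick among the returned addresses.
dnscmd dns01 /RecordAdd contoso.com pool01 A 10.0.1.11
dnscmd dns01 /RecordAdd contoso.com pool01 A 10.0.1.12
dnscmd dns01 /RecordAdd contoso.com pool01 A 10.0.1.13
```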

– PIN-based authentication can be used for devices where a keyboard is not available. This is also good for new-hire onboarding, as employees can immediately start using their hard phone once you provide them a PIN. They don’t even need to know their SIP address.


In summary, OCS “14” has undergone major architectural changes and enhancements, and many pain points of OCS 2007 R2 have been addressed. Microsoft is now positioning OCS 14 as a full-fledged voice communications server, including E-911 support. If you so desire, you can kick vendors such as Cisco and Avaya to the curb and rely totally on OCS for an integrated solution.