The Evolution of AD RMS to Azure Information Protection – Part 5 – Client-Side Migration and Testing

Welcome to the fifth entry in my series on the evolution of Microsoft’s Active Directory Rights Management Service (AD RMS) to Azure Information Protection (AIP).  We’ve covered a lot of material over this series.  It started with an overview of the service, examined the different architectures, went over key planning decisions for the migration from AD RMS to AIP, and left off with performing the server-side migration steps.  In this post we’re going to round out the migration process by performing a staged migration of our client machines.

Before we jump into this post, I'd encourage you to refresh your memory of my lab setup, the users and groups I've created, and the choices I made in the server-side migration steps.  For a quick reference, here is the down and dirty:

  • Windows Server 2016 Active Directory forest named GEEKINTHEWEEDS.COM with servers running Active Directory Domain Services (AD DS), Active Directory Domain Name System (AD DNS), Active Directory Certificate Services (AD CS), Active Directory Federation Services (AD FS), Active Directory Rights Management Services (AD RMS), Azure Active Directory Connect, and Microsoft SQL Server Express.
  • Forest is configured to synchronize to Azure AD using Azure AD Connect and uses federated authentication via AD FS
  • Users Jason Voorhies and Ash Williams will be using a Windows 10 client machine with Microsoft Office 2016 named GIWCLIENT1
  • Users Theodore Logan and Michael Myers will be using a Windows 10 client machine with Microsoft Office 2016 named GIWCLIENT2
  • Users Jason Voorhies and Theodore Logan are in the Information Technology Windows Active Directory (AD) group
  • Users Ash Williams and Michael Myers are in the Accounting Windows AD group
  • Onboarding controls have been configured for a Windows AD group named GIW AIP Users of which Jason Voorhies and Ash Williams are members

Prepare The Client Machine

To take advantage of the new features AIP brings to the table we'll need to install the AIP client. I'll be installing the AIP client on GIWCLIENT1 and leaving the RMS client installed by Office 2016 on GIWCLIENT2. Keep in mind the AIP client includes the RMS client (sometimes referred to as MSIPC) as well.

If you recall from my last post, I skipped a preparation step that Microsoft recommended for client machines. The step has you download a ZIP containing some batch scripts that are used for performing a staged migration of client machines and users. The preparation script Microsoft recommends running prior to any server-side configuration is Prepare-Client.cmd.  In an enterprise environment it makes sense, but for this very controlled lab environment it wasn't needed prior to server-side configuration. It's a simple script that modifies the client registry to force the RMS client on the machine to go to the on-premises AD RMS cluster even if it receives content that's been protected using an AIP subscription. If you're unfamiliar with the order in which the MSIPC client discovers an AD RMS cluster, I did an exhaustive series on the topic a few years back.  In short, hardcoding the information in the registry prevents the client from reaching out to AIP and potentially causing issues.

As a reminder I’ll be running the script on GIWCLIENT1 and not on GIWCLIENT2.  After the ZIP file is downloaded and the script is unpackaged, it needs to be opened with a text editor and the OnPremRMSFQDN and CloudRMS variables need to be set to your on-premises AD RMS cluster and AIP tenant endpoint. Once the values are set, run the script.

5AIP1.png
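To give a sense of what the script does under the hood, here's a minimal PowerShell sketch of the kind of redirection it applies. I'm assuming the MSIPC service location keys described in the migration guidance; treat the exact paths and value formats as assumptions, and use the Microsoft-supplied script for the real work.

    # Hedged sketch of the redirection Prepare-Client.cmd applies; key paths and
    # value formats are assumptions based on the migration guidance, not the
    # script's literal contents.
    $onPrem = "https://rms.geekintheweeds.com/_wmcs"   # on-premises AD RMS cluster
    $cloud  = "https://<tenant-id>.aadrm.com/_wmcs"    # placeholder AIP tenant endpoint

    $key = "HKLM:\SOFTWARE\Microsoft\MSIPC\ServiceLocation\LicensingRedirection"
    New-Item -Path $key -Force | Out-Null
    # Send requests destined for the AIP licensing endpoint back to the on-premises
    # cluster (32-bit Office on 64-bit Windows also needs the Wow6432Node equivalent)
    Set-ItemProperty -Path $key -Name "$cloud/licensing" -Value "$onPrem/licensing"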

Install the Azure Information Protection Client

Now that the preparation step is out of the way, let’s get the AIP client installed. The AIP client can be downloaded directly from Microsoft. After starting the installation you’ll first be prompted as to whether you want to send telemetry to Microsoft and use a demo policy.  I’ll be opting out of both (sorry Microsoft).

5AIP2.png

After a minute or two the installation will complete successfully.

5AIP3.png

At this point I log out of the administrator account and log in as Jason Voorhies. Opening Windows Explorer and right-clicking a text file shows we now have the Classify and protect option, which lets us classify and protect files outside of Microsoft Office.

5AIP4.png

Testing the Client Machine Behavior Prior to Client-Side Configuration

I thought it would be fun to see what the client machine's behavior would be after the AIP client was installed but before I had finished Microsoft's recommended client-side configuration steps. Recall that GIWCLIENT1 has previously been bootstrapped for the on-premises AD RMS cluster, so let's reset the client after observing the current state of both machines.

Notice that on GIWCLIENT1 the DefaultServer and DefaultServerUrl values in the HKCU\Software\Microsoft\Office\16.0\Common\DRM key do not exist even though the client was previously bootstrapped for the on-premises AD RMS instance. GIWCLIENT2, which has also been bootstrapped, has the entries defined.

5AIP5.png
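If you want to spot-check these values on your own machines, a quick one-liner against the key above does the trick:

    # Show the Office 2016 RMS activation values, if present
    Get-ItemProperty -Path "HKCU:\Software\Microsoft\Office\16.0\Common\DRM" -ErrorAction SilentlyContinue |
        Select-Object DefaultServer, DefaultServerUrl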

I’m fairly certain AIP cleared these out when it tried to activate when I started up Microsoft Word prior to performing these steps.

Navigating to HKCU\Software\Classes\Local Settings\Software\Microsoft\MSIPC shows a few slight differences as well. On GIWCLIENT1 there are two additional entries: one for the discovery point for Azure RMS and one for JOG.LOCAL's AD RMS cluster. The JOG.LOCAL entry exists on GIWCLIENT1 and not on GIWCLIENT2 because of the baseline testing I did previously.

5AIP6.png

5AIP7.png

Let’s take a look at the location the RMS client stores its certificates which is %LOCALAPPDATA%\Microsoft\MSIPC.  On both machines we see the expected copy of the public-key CLC certificate, the machine certificate, RAC, and use licenses for documents that have been opened.  Notice that even though the AD RMS cluster is running in Cryptographic Mode 1, the machine still generates a 2048-bit key as well.

5AIP8.png

5AIP9
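Since the MSIPC store is just a folder on disk, it's easy to take a quick inventory of the certificates before and after a client reset:

    # List the RMS client's local certificate and license store
    Get-ChildItem "$env:LOCALAPPDATA\Microsoft\MSIPC"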

Now that the RMS client is reset on GIWCLIENT1, let's go ahead and see what happens when the RMS client tries to do a fresh activation after AIP has been installed but the client-side configuration has not yet been completed.

After opening Microsoft Word, I create a new document. Notice that the labels displayed in the AIP bar include a custom label I had previously defined in the AIP blade.

5AIP10.png

I then go back to the File tab on the ribbon and attempt to use the classic way of protecting a document via the Restrict Access option.

5AIP11.png

After selecting the Connect to Rights Management Servers and get templates option, the client successfully bootstraps back to the on-premises AD RMS cluster, as can be seen from the certificates available to the client and the fact that all necessary certificates were re-created in the MSIPC directory.

5AIP12

5AIP13.png

That’s Microsoft Office, but what about the scenario where I attempt to use the AIP client add in for Windows Explorer?

To test this behavior I created a PDF file named testfile.pdf.  Right-clicking and selecting the Classify and protect option opens the AIP client, which displays the default set of labels as well as a new GIW Accounting Confidential label.

5AIP14.png

If I select that label and hit Apply I receive the error below.

5AIP15.png

The template can’t be found because the client is trying to pull it from my on-premises AD RMS cluster.  Since I haven’t run the scripts to prepare the client for AIP, the client can’t reach the AIP endpoints to find the template associated with the label.

The results of these tests tell us two things:

  1. Installing the AIP Client on a client machine that already has Microsoft Office installed and configured for an on-premises AD RMS cluster won’t break the client’s integration with that on-premises cluster.
  2. The AIP client at some point authenticated to the Geek In The Weeds Azure AD tenant and pulled down the classification labels configured for my tenant.

In my next post I'll be examining these findings more deeply by doing a deep dive into the client behavior, using a combination of Process Monitor, Fiddler, and Wireshark to analyze what the AIP client is doing.

Performing Client-Side Configuration

Now that the client has been successfully installed we need to override the behavior that was put in place with the Prepare-Client batch file earlier.  If we wanted to redirect all clients across the organization that were using Office 2016, we could use the DNS SRV record option listed in the migration article.  This option indicates Microsoft has added new behavior to the RMS client installed with Office 2016 such that it performs a DNS lookup of the SRV record to see whether a migration has occurred.
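If you go the DNS route, the record looks roughly like the sketch below. The _rmsredir._http._tcp name pattern is drawn from the migration article, but verify the exact format against the current documentation; the zone, cluster name, and tenant endpoint here are placeholders from my lab.

    # Hedged sketch: publishing the redirection SRV record with the DnsServer module,
    # assuming an AD RMS cluster at rms.geekintheweeds.com.
    Add-DnsServerResourceRecord -Srv -ZoneName "geekintheweeds.com" `
        -Name "_rmsredir._http._tcp.rms" `
        -DomainName "<tenant-id>.aadrm.com" `
        -Priority 0 -Weight 0 -Port 443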

For the purposes of this lab I'll be using the Microsoft batch scripts I referenced earlier.  To override the behavior we put in place earlier with the Prepare-Client.cmd batch script, we'll need to run both the Migrate-Client and Migrate-User scripts.  I created a group policy object (GPO) that uses security filtering to apply only to GIWCLIENT1, which runs the Migrate-Client script as a Startup script, and a GPO that uses security filtering to apply only to the GIW AIP Users group, which runs the Migrate-User script as a Logon script.  This ensures only GIWCLIENT1, Jason Voorhies, and Ash Williams are affected by the changes.

You may be asking what the scripts actually do.  The goal of the two scripts is to ensure the client machines the users log into point them to Azure RMS rather than an on-premises AD RMS cluster.  The scripts do this by adding and modifying registry keys the RMS client consults prior to searching for a service connection point (SCP).  The users will be redirected to Azure RMS when protecting new files as well as when consuming files that were previously protected by an on-premises AD RMS cluster.  This means you had better have performed the necessary server-side migration steps I went over previously, or your users are going to be unable to consume previously protected content.
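Conceptually, the scripts invert the mapping Prepare-Client.cmd put in place, along the lines of this sketch. As before, the key paths and values are my assumptions based on the migration guidance rather than the scripts' literal contents, so stick with Microsoft's scripts for the real thing.

    # Hedged sketch of the redirection applied by Migrate-Client (machine-wide) and
    # Migrate-User (per-user); paths and values are assumptions.
    $onPrem = "https://rms.geekintheweeds.com/_wmcs"
    $cloud  = "https://<tenant-id>.aadrm.com/_wmcs"    # placeholder AIP tenant endpoint

    # Machine-wide: redirect requests for the on-premises cluster to Azure RMS
    Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\MSIPC\ServiceLocation\LicensingRedirection" `
        -Name "$onPrem/licensing" -Value "$cloud/licensing"

    # Per-user: point Office's RMS discovery at Azure RMS (the real scripts also
    # reset the RMS client and handle the companion DefaultServer value)
    Set-ItemProperty -Path "HKCU:\Software\Microsoft\Office\16.0\Common\DRM" `
        -Name "DefaultServerUrl" -Value "$cloud/licensing"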

We’ll dig more into AIP/Office 2016 RMS Client discovery process in the next post.

Preparing Azure Information Protection Policies

Prior to testing the whole package, I thought it would be fun to create some AIP policies. By default, Microsoft provides you with a default AIP policy called the Global Policy. It comes complete with a reasonably standard set of labels, a few of which have sublabels that apply protection in some circumstances. Due to the migration path I undertook as part of the demo, I had to enable protection for the All Employees sublabels of both the Confidential and Highly Confidential labels.

5AIP16.png

In addition to the global policy, I also created two scoped policies. One scoped policy applies to users within the GIW Accounting group and the other applies to users within the GIW Information Technology group. Each policy introduces another label and sublabel as seen in the screenshots below.

5AIP17.png

5AIP18.png

Both of the sublabels include protection restricting members of the relevant groups to the Viewer role only. We’ll see these policies in action in the next section.

Testing the Client

Preparation is done, the server-side migration is complete, and our test clients and users have now completed the documented migration process. The migration scripts performed the RMS client reset, so there's no need to repeat that process.

For the first test, let's try applying protection to a text file named testfile.txt. Selecting the Classify and protect option opens up the AIP client and shows me the labels configured in my tenant that support classification and protection. Recall from the AIP client documentation that different file types have different limitations; you can't exactly append metadata to the content of a text file, now can you?

5AIP19.png

 

Selecting the IT Staff Only sublabel of the GIW IT Staff label and hitting the Apply button successfully protects the text file, and we see the icon and file type for the file change (the .txt becomes a .ptxt).  Opening the file in Notepad now displays a notice that the file is protected, and the data contained in the original file has been encrypted.

5AIP20.png

We can also open the file with the AIP Viewer which will decrypt the document and display the content of the text file.

5AIP24

Next we test in Microsoft Word 2016 by creating a new document named AIP_GIW_ALLEMP and classifying it with the Highly Confidential All Employees sublabel.  The sublabel adds protection such that all users in the GIW Employees group have Viewer rights.

5AIP21

5AIP22

Opening the AIP_GIW_ALLEMP Word document that was protected by Jason Voorhies succeeds for Ash Williams, and the client shows he has Viewer rights for the file.

5AIP23.png

Last but not least, let's open a document we previously protected with AD RMS named GIW_GIWALL_ADRMS.DOCX.  We're able to successfully open this file because we migrated the trusted publishing domain (TPD) used for AD RMS up to AIP.

5AIP25.png

At this point we’ve performed all necessary steps up the migration.  What you have left now is cleanup steps and planning for how you’ll complete the rollout to the rest of your user base.  Not bad right?

Over the next few posts I'll be doing a deep dive into the RMS client behavior when interacting with Azure Information Protection.   We'll do some Process Monitor captures of the client's behavior as it performs its discovery process as well as examine the web calls it makes using Fiddler.  I'll also spend some time examining the AIP blade and my favorite feature of AIP, Tracking and Revocation.

See you next time!

 

The Evolution of AD RMS to Azure Information Protection – Part 3 – Planning The Migration

Welcome to the third post in my series exploring the evolution of Active Directory Rights Management Service (AD RMS) into Azure Information Protection (AIP).  My first post provided an overview of the service and some of its usages, and my second post covered how the architecture of the solution has changed as the service has shifted from traditional on-premises infrastructure to a software-as-a-service (SaaS) offering.  Now that we understand the purpose of the service and its architecture, let's explore what a migration will look like.

For the post I’ll be using the labs I discussed in my first post.  Specifically, I’ll be migrating lab 2 (the Windows Server 2016 lab) from using AD RMS to Azure Information Protection.  I’ve added an additional Windows 10 Professional machine to that lab for reasons I’ll discuss later in the post.  The two Windows 10 machines are named CLIENT1 and CLIENT2.  Microsoft has assembled some guidance which I’ll be referencing throughout this post and using as the map for the migration.

With the introduction done, let’s dig in.

Before we do any button pushing, there’s some planning necessary to ensure a successful migration.  The key areas of consideration are:

  • Impact to collaboration with trusted organizations
  • Tenant key storage
  • AIP Client Rollout
  • Information Rights Management (IRM) functionality of Microsoft Exchange Server or Microsoft SharePoint Server

Impact to collaboration with trusted organizations

Possibly most impactful to an organization is the planning that goes into how the migration will affect collaboration with partner organizations.  Back in the olden days of on-premises AD RMS, organizations would leverage the protection and control that came with AD RMS to collaborate with trusted partners.  This was accomplished through trusted user domains (TUDs) or federated trusts.  With AIP the concept of TUDs and additional infrastructure to support federated trusts is eliminated and instead replaced with the federation capabilities provided natively via Azure Active Directory.

Yes folks, this means that if you want the same level of collaboration you had with AD RMS using TUDs, both organizations will need to have an Azure Active Directory (Azure AD) tenant with a license that supports the Azure Rights Management Service (Azure RMS).    In a future post in the series, we'll check out what happens when the partner organization doesn't migrate to Azure AD and attempts to consume the protected content.

Tenant Key Storage

The tenant key can be thought of as the key to the kingdom in the AIP world.  For those of you familiar with AD RMS, the tenant key serves the same function as the cluster key. In the on-premises world of AD RMS the cluster key was either stored within the AD RMS database or on a hardware security module (HSM).

When performing a migration to the world of AIP, you have a few options for storage of the tenant key.  If you're using a cluster key that was stored within the AD RMS database you can migrate the key using some simple PowerShell commands.  If you opted to use HSM storage for your cluster key, you're going to be looking at the bring your own key (BYOK) scenario.  Finally, if you have a hard requirement to keep the key on premises you can explore the hold your own key (HYOK) option.

For this series I’ve configured my labs with a cluster key that is stored within the AD RMS database (or software key as MS is referring to it).  The AD RMS cluster in my environment runs in cryptographic mode 1, so per MS’s recommendation I won’t be migrating to cryptographic mode 2 until after I migrate to AIP.

AIP Client Rollout

Using AIP requires the AIP client to be installed.  The AD RMS client that comes pre-packaged with Microsoft Office can protect content but can't take advantage of the labels and classification features of AIP.   You'll need to consider this during your migration process and ensure any middleware that uses the AD RMS client is compatible with the AIP client.  The AIP client is compatible with on-premises AD RMS, so you don't need to be concerned with backwards compatibility.

As I mentioned above, I have two Windows 10 clients named CLIENT1 and CLIENT2.  In the next post I’ll be migrating CLIENT2 to the AIP Client and keeping CLIENT1 on the AD RMS Client.  I’ll capture and break down the calls with Fiddler so we can see what the differences are.

Information Rights Management (IRM) functionality of Microsoft Exchange Server or Microsoft SharePoint Server

If you want to migrate to AIP but still have a ways to go before you can migrate Exchange and SharePoint to their SaaS counterparts, have no fear.  You can leverage the protection capabilities of AIP (aka the Azure RMS component) by using the Microsoft Rights Management Service Connector.  The connector requires some lightweight infrastructure to handle the communication between Exchange/SharePoint and AIP.

I probably won’t be demoing the RMS Connector in this series, so take a read through the documentation if you’re curious.

We’ve covered an overview of AIP, the different architectures of AD RMS and AIP, and now have covered key planning decisions for a migration.  In the next post in my series we’ll start getting our hands dirty by initiating the migration from AD RMS to AIP.  Once the migration is complete, I’ll be diving deep into the weeds and examining the behavior of the AD RMS and AIP clients via Fiddler captures and AD RMS client debugging (assuming it still works with the AIP client).

See you next post!

The Evolution of AD RMS to Azure Information Protection – Part 2 – Architecture

Hi there.  Welcome to the second post in my series exploring the evolution of Active Directory Rights Management Service (AD RMS) into Azure Information Protection (AIP).  In the first post of the series I gave a brief overview of the important role AIP plays in Microsoft's Cloud App Security (CAS) offering.  I also covered the details of the lab I will be using for this series.  Take a few minutes to read that post to familiarize yourself with the lab details because it'll be helpful as we progress through the series.

I went back and forth as to what topic I wanted to cover for the second post and decided it would be useful to start at a high level, comparing the components in a typical Windows AD RMS implementation to those used when consuming AIP.  I'm going to keep the explanation of each component brief to avoid re-creating existing documentation, but I will provide links to official Microsoft documentation for each component I mention.  With that intro, let's begin.

The infrastructure required in an AD RMS implementation is pretty minimal, but the complexity is in how all of the components work together to provide the solution.   At a very high level it is similar to any other web-based application, consisting of a web server, application code, and a data backend.   The web-based application integrates with a directory to authenticate users and get information about them that is used in authorization decisions.  In the AD RMS world the components map to the following products:

  • Web Server – Machine running Windows Server with Microsoft Internet Information Services and Microsoft Message Queuing Service
  • Application Code – Code installed onto the machine after adding the AD RMS role to a machine running Windows Server
  • Data Backend – Machine running Windows Server with Microsoft SQL Server running on it hosting configuration and logging database (optionally Windows Internal Database (WID) for test environments)
  • Directory – Windows Active Directory provides authentication, user information used for authorization, and stores additional AD RMS configuration data (Service Connection Point)

Nodes providing the AD RMS service are organized into a logical container called an AD RMS Cluster. Like most web applications AD RMS can be scaled out by adding more nodes to the cluster to improve performance and provide high availability (HA).  If using MS SQL for the data backend, traditional methods of HA can be used such as SQL clustering, database mirroring, and log shipping.  You can plop your favorite load balancer in front of the solution to help distribute the application load and keep track of the health of the nodes providing the service.

Beyond the standard web-based application components we have some that are specific to AD RMS.  Let’s take a deeper look at them.

  • AD RMS Cluster Key

    The AD RMS cluster key is the most critical part of an AD RMS implementation, the “key to the kingdom”, as it is used to sign the Server Licensor Certificate (SLC) which contains the corresponding public key.  The SLC is used to sign certificates created by AD RMS that allow for consumption of AD RMS-protected content as well as being used by AD RMS clients to encrypt the content key when a document is newly protected by AD RMS.

    The AD RMS cluster key is shared by all nodes that are members of the AD RMS cluster.  It can be stored within the MS SQL database/WID or on a supported hardware security module for improved security.

  • AD RMS Client / AD RMS-Integrated Server Applications

    Protection is great, but you need a method to consume the protected content.  Once content is protected by AD RMS it can only be consumed by an application capable of communicating with AD RMS.  In most cases this is accomplished by using an application that has been written to use the AD RMS client.  The AD RMS client comes pre-installed on Windows Vista and later desktop operating systems and Windows Server 2008 and later server operating systems.

    The AD RMS client performs tasks such as bootstrapping (sometimes referred to as activation).  I won't go into the details because I wouldn't do nearly as good a job as Dan does in the bootstrapping link.  In short, it generates some keys and obtains some certificates from the AD RMS service that facilitate protecting and consuming content.

    AD RMS-integrated server applications such as Microsoft SharePoint Server and Microsoft Exchange Server provide server-level services that leverage the capabilities provided by AD RMS to protect data such as files stored in a SharePoint library or emails sent through Microsoft Exchange.

  • AD RMS Policy Templates

    While not a component of the system architecture, AD RMS policy templates are an AD RMS concept that deserves mention in this discussion.  The templates can be created by an organization to provide a standard set of use rights applicable to a type of data.  A common use case is creating multiple templates for different data types.  For example, one template may allow trusted partners to view the document but not print or forward it, while another template may restrict view rights to the accounting department.

    In AD RMS the policies are stored in the AD RMS database but are accessible via a call to the web service.  Optionally they can be exported from the database and distributed by other means such as a Windows file share.

As you can see there are a lot of moving parts to an on-premises Windows AD RMS implementation.  Some of the components mentioned above can get even more complicated when the need to collaborate across organizations or support mobile devices arises.

How does AIP compare?  For the purposes of this post, I'm going to focus that comparison on Azure RMS, which provides the protection capability of AIP.  Azure RMS is a software-as-a-service (SaaS) offering from Microsoft that replaces (yes Microsoft, let's be honest here) AD RMS.  It is licensed on a per-user basis via a stand-alone, Enterprise Mobility + Security P1/P2, or qualifying Office 365 license.

The architecture of Azure RMS is far simpler than what existed for AD RMS.  Like most SaaS services, there is no on-premises infrastructure required except in very specific scenarios such as hold-your-own-key (HYOK) or integrating Azure RMS with an on-premises Microsoft Exchange Server, Microsoft SharePoint Server, or servers running Windows Server and File Classification Infrastructure (FCI) using the RMS Connector.   This means you won't be building any servers to hold the RMS role or SQL Servers to host configuration and logging information.  The infrastructure is now managed by Microsoft and the RMS service provided over HTTPS.

Azure RMS shifts its directory dependency to Azure Active Directory (AAD).  It uses the tenant the Azure RMS licenses are associated with for authentication and authorization of users.  As with any AAD use case, you can still authenticate users against your on-premises Windows Active Directory if you've configured your tenant for federated authentication, and source data from an on-premises directory using Azure Active Directory Connect.

The cluster key, client, integrated applications, and policies are still in place and work similarly to on-premises AD RMS, with some changes to both function and names.

  • Azure Information Protection Tenant Key

    The AD RMS cluster key has been renamed the Azure Information Protection tenant key.  The tenant key serves the same purpose as the AD RMS cluster key and is used to sign the SLC certificate and decrypt information sent to Azure RMS using the public key in the SLC.  The differences between the two are really around how the key is generated and stored.  By default the tenant key is generated by Microsoft (note that Microsoft generates a 2048-bit key instead of the 1024-bit key generated by new installations of AD RMS) and is associated with your Azure Active Directory tenant.  Other options include bring-your-own-key (BYOK), HYOK, and a special instance where you are migrating from AD RMS to Azure RMS.  I'll cover HYOK and the migration instance in future posts.

  • Azure Information Protection Client

    The AD RMS client is replaced with the Azure Information Protection client.  The client performs the same functions as the AD RMS client but allows for integration with either on-premises AD RMS or Azure RMS.  In addition, the client introduces functionality around Azure Information Protection, including a classification bar for Microsoft Office, a Do Not Forward button in Microsoft Outlook, an option in Windows File Explorer to classify and protect files, and PowerShell modules that can be used to bulk classify and protect files.  In a future post in this series I'll be doing a deep dive of the client behavior, including analysis of its calls to the Azure Information Protection endpoints via Fiddler.

    Unlike the AD RMS client of the past, the Azure Information Protection Client is supported on mobile operating systems such as iOS and Android.  Additionally, it supports a wider variety of file types than the AD RMS client supported.

  • Azure RMS-Integrated Server Applications

    Like its predecessor Azure RMS can be consumed by server applications such as Microsoft Exchange Server and Microsoft SharePoint Server with the RMS Connector.  There is native integration with Office 365 products including Exchange Online, SharePoint Online, OneDrive for Business, as well as being extensible to third-party applications via Cloud App Security (I’ll demonstrate this after I complete this series).  Like all good SaaS, there is also an API that can be leveraged to add the functionality to custom developed applications.

  • Rights Management Templates

    Azure RMS continues to use the concept of rights management templates like its predecessor.  Instead of being stored in a SQL database, the templates are stored in Microsoft's cloud.  Templates created in AD RMS can also be imported into Azure RMS for continued use; I'll demonstrate that process in a future post in this series.  Classification labels in AIP are backed by templates whenever a label applies protection with a pre-defined set of rights.  I'll demonstrate this in a later post.

Far simpler in the SaaS world, isn't it?  In addition to simplicity, Microsoft delivers more capabilities, tighter integration with its collaboration tools, and expansion of those capabilities to third-party applications through a robust API and integration with Cloud App Security.
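As a small taste of that extensibility, here's a hedged sketch of the bulk classify-and-protect scenario the client's PowerShell module enables.  The share path and label GUID are placeholders you'd swap for your own values.

    # Hedged sketch: bulk-labeling files with the AIP client's PowerShell module
    Import-Module AzureInformationProtection
    Get-ChildItem "\\fileserver\finance" -File -Recurse | ForEach-Object {
        Set-AIPFileLabel -Path $_.FullName -LabelId "<label-guid>"   # placeholder GUID
    }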

See you next post!

Deep dive into AD FS and MS WAP – User Certificate Authentication through a WAP

Hi everyone,

Today I continue my series of posts covering a behind-the-scenes look at how Active Directory Federation Services (AD FS) and the Microsoft Web Application Proxy (WAP) interact.  In my first post I explained the business cases that would call for the usage of a WAP.  In my second post I did a deep dive into the WAP registration process (MS refers to this as the trust establishment with AD FS and the WAP).  In this post I cover how user certificate authentication is achieved when the AD FS server is placed behind a WAP.

AD FS offers a few different options to authenticate users to the service including Integrated Windows Authentication (IWA), forms-based authentication, and certificate authentication.  Readers who work in environments with sensitive data where assurance of a user’s identity is important should be familiar with certificate authentication in the Microsoft world.  If you’re unfamiliar with it I recommend you take a read through this Microsoft article.

With the recent release of the National Institute of Standards and Technology (NIST) Digital Identity Guidelines 800-63, which reworks the authenticator assurance levels (AAL) and relegates passwords to AAL1 only, organizations will be looking for other authenticator options.  Given the maturity of authenticators that make use of certificates, such as the traditional smart card, it's likely many organizations will look at opportunities to further utilize existing equipment and infrastructure.  So it's all the more important that we understand how AD FS certificate authentication works.

I’ll be using the lab I described in my first post.  I made the following modifications/additions to the lab:

  • Configured the Active Directory Certificate Services (AD CS) certificate authority (CA) to include a certificate revocation list (CRL) distribution point (CDP) in issued certificates.  The CRLs will be served up via an IIS instance with the address crl.journeyofthegeek.com.  This is the only CDP listed in the certificates.  Certificates created during my original lab setup that are installed within the infrastructure do not include a CDP.
  • Added a non-domain-joined Windows 10 computer which will be used as the endpoint the test user accesses the federation service from.

Tool-wise I used ProcMon, Fiddler, API Monitor, and Wireshark.

So what did I discover?

Prior to doing any type of user interaction, I set up the tools I would be using moving forward.  On the WAP I started ProcMon as an administrator and configured my filters to capture only TCP Send and TCP Receive operations.  I also set up Wireshark using a filter of ip.addr==192.168.100.10 && tcp.port==80.  The IP address is the IP of the web server hosting my CRLs.  This would ensure I'd see the name of the process making the connection to the CDP as well as the conversation between the two nodes.

pic1

** Note that the machine will cache the CRLs after they are successfully downloaded from the CDP.  It will not make any further calls until the CRLs expire.  To get around this behavior while I was testing I ran the command certutil -setreg chain\ChainCacheResyncFiletime @now as outlined in this article.   This forces the machine to pull the CRLs again from the CDP regardless of whether or not they are expired.  I ran the command as the LOCAL SYSTEM security principal using psexec.
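For reference, the combined invocation looks roughly like this (psexec's -s switch runs the command as LOCAL SYSTEM):

    psexec -s cmd /c certutil -setreg chain\ChainCacheResyncFiletime @now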

The final step was to start Fiddler as the NETWORK SERVICE security principal using the command psexec -i -u “NT AUTHORITY\Network Service” “C:\Program Files (x86)\Fiddler2\Fiddler.exe”.  Remember that Fiddler needs the public key certificate in the appropriate file location as I outlined in my last post.  Recall that the Web Application Proxy Service and the Active Directory Federation Service running on the WAP both run as that security principal.

Once all the tools were in place I logged into the non-domain-joined Windows 10 box, opened up Microsoft Edge, and popped the username of my test user into the username field of the O365 login page.

pic2.png

After home realm discovery occurred within Azure AD, I received the forms-based login page of my AD FS instance.

 

pic3.png

Let’s take a look at what’s happened on the WAP so far.

In the initial HTTP CONNECT session the WAP makes to the AD FS farm, we see the ClientHello handshake occur, in which the WAP presents its client certificate to authenticate itself to the AD FS server as described in my last post.

pic4.png

Once the secure session is established the WAP passes the HTTP GET request to the AD FS server.  It adds a number of headers to the request which AD FS consumes to identify that the request is coming through the WAP.  This information is used for a number of AD FS features such as enforcing additional authentication policies for Extranet access.

pic5.png

The WAP also passes a number of query strings.  There are a few interesting query strings here.  The first is the client-request-id, which is a unique identifier for the session that AD FS uses to correlate event log errors with the session.  The username is obvious and shows the user principal name that was entered in the username field at the O365 login page.  The wa query string shows a value of wsignin1.0, indicating the usage of WS-Federation.  The wtrealm indicates the relying party identifier of the application, in this case Azure AD.

pic6

The wctx query string is quite interesting and needs to be parsed a bit on its own.  Breaking down its value we come across three unique parameters.

LoginOptions=3 indicates that the user has not selected the “Keep me signed in” option.  If the user had selected that checkbox a value of 1 would have been passed and AD FS would create a persistent cookie which would exist even after the browser closes.  This option is sometimes preferable for customers when opening documents from SharePoint Online so the user does not have to authenticate over and over.

The estsredirect contains the encoded and signed authentication request from O365.  I stared at API monitor for a few hours going API call by API call trying to identify what this looks like once it’s decoded, but was unsuccessful.  If you know how to decode it, I’d love to know.  I’m very curious as to its contents.

The WAP next makes another HTTP GET to the AD FS server, this time including the additional query string of pullStatus, which is set equal to 0.  I'm clueless as to the function of this; I couldn't find anything on it.  The only other thing that changes is the referer.

My best guess on the above two sessions is that the first session is where AD FS performs home realm discovery and maybe some processing to determine if there are any special configurations for the WAP, such as limited or expanded authentication options (device authN, cert authN only).  The second session is simply the AD FS server presenting the authentication methods configured for Extranet users.

The user then chooses the "Sign in with an X.509 certificate" option (I'm not using SNI to host both forms and cert authN on the same port) and the WAP then performs another HTTP CONNECT to port 49443, which is the certificate authentication endpoint on the AD FS server.  It again authenticates to the AD FS server with its client certificate prior to establishing the secure tunnel.

In the third session we see an HTTP POST to the AD FS server with the same query parameters as our previous request, but also providing a JSON object with the key/value pair AuthMethod=CertificateAuthentication in the body.

pic7

The next session is another HTTP POST with the same JSON object content and the key/value pairs of AuthMethod=CertificateAuthentication and RetrieveCertificate=1 in the body.  The AD FS server sends a 307 Temporary Redirect to the /adfs/backendproxytls/ endpoint on the AD FS server.

Prior to the redirect completing successfully, we see the calls to the CDP endpoint for the base and delta CRLs.

pic8.png

pic9

I was curious as to which process was pulling the CRLs and identified it was LSASS.EXE from the ProcMon capture.

pic10

At the /adfs/backendproxytls/ endpoint the WAP performs another HTTP POST this time posting a JSON object with a number of key value combinations.

pic11.png

The interesting items in the JSON object are a nested Headers object, which contains all the WAP headers I covered earlier; a query string object, which contains all the query strings I covered earlier; and a SerializedClientCertificate key, which contains the certificate the user provided after selecting certificate authentication.  The AD FS server then sends back a cookie to the WAP.  This cookie represents the user's authentication to the AD FS server, as detailed in this link.

pic12.png

The WAP then performs a final HTTP GET back at the /adfs/ls/ endpoint, including the previously described headers and query strings as well as the cookie it just received.  The AD FS server responds by providing the assertion requested by Microsoft along with the MSISAuthenticated, MSISSignOut, and MSISLoopDetectionCookie cookies, which are described in the link above.

What did we learn?

  1. The certificate is checked at both the WAP and the AD FS server to ensure it is valid and issued from a trusted certificate authority.  Remember to verify you trust the certificate chain of any user certificates on both the AD FS servers and WAPs.
  2. CRL Revocation checking is enabled by default and is performed on both the AD FS server and the WAP.  Remember to verify the locations in your CDP are available by both devices.
  3. The AD FS servers use the LsaLogonUser function in the secur32.dll library to perform standard certificate authentication against Active Directory Domain Services.  I didn't include it here, but I captured this by running API Monitor on the AD FS server.

In short, if you’re going to use device authentication or user certificate authentication make sure you have your PKI components in order.

See you next post!

Active Directory Federation Services – SQL Attribute Store

Hi everyone,

I recently had a use case come across my desk where I needed to do a SAML integration with a SaaS provider.  The provider required a number of pieces of information about the user beyond the standard unique identifier.  The additional information would be used to direct the user to the appropriate instance of the SaaS application.

In the past fifty or so SAML integrations I've done, I've been able to source my data directly from the Active Directory store.  This was because Active Directory was authoritative for the data, or there was a reliable data synchronization process in place such that the data was being sourced from an authoritative source.  In this scenario, neither option was available.  Thankfully the data source I needed to hit to get the missing data exposed a subset of its data through a Microsoft SQL view.

I have done a lot in AD FS over the past few years from design to operational support of the service, but I had never sourced information from a data source hosted via MS SQL Server.  I reviewed the Microsoft documentation available via TechNet and found it to be lacking.  Further searches across MS blogs and third-party blogs provided a number of “bits” of information but no real end to end guide.  Given the lack of solid content, I decided it would be fun to put one together so off to Azure I went.

For the lab environment, I built the following:

  • Active Directory forest name – geekintheweeds.com
  • Server 1 – SERVERDC (Windows Server 2016)
    • Active Directory Domain Services
    • Active Directory Domain Name Services
    • Active Directory Certificate Services
  • Server 2 – SERVER-ADFS (Windows Server 2016)
    • Active Directory Federation Services
    • Microsoft SQL Server Express 2016
  • Server 3 – SERVER-WEB (Windows Server 2016)
    • Microsoft IIS

On SERVER-WEB I installed the sample claims application referenced here.  Make sure to follow the instructions in the blog to save yourself some headaches.  There are plenty of blogs out there that discuss building a lab consisting of the services outlined above, so I won't cover those details.

On SERVER-ADFS I created a database named hrdb within the same instance as the AD FS databases.  Within the database I created a table named dbo.EmployeeInfo with five columns named givenName, surName, email, userName, and role, all of data type nvarchar(MAX).  The userName column contained the unique values I used to relate a user object in Active Directory back to a record in the SQL database.

Screen Shot 2017-05-28 at 9.18.37 PM

Once the database was created and populated with some sample data and the appropriate Active Directory user objects were created, it was time to configure the connectivity between AD FS and MS SQL.  Before creating the new attribute store, the AD FS service account needs appropriate permissions to access the SQL database.  I went the easy route and gave the service account the db_datareader role on the database, although the CONNECT and SELECT permissions would probably have been sufficient.

Screen Shot 2017-05-28 at 9.23.49 PM
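If you'd rather script that setup than click through SQL Server Management Studio, something along these lines should work.  The service account name is a hypothetical placeholder, and I'm assuming the SqlServer module's Invoke-Sqlcmd is available.

    # Hedged sketch: create the table and grant the AD FS service account read access;
    # GEEKINTHEWEEDS\svc-adfs is a hypothetical service account name.
    $create = "CREATE TABLE dbo.EmployeeInfo ([givenName] nvarchar(MAX), [surName] nvarchar(MAX), " +
              "[email] nvarchar(MAX), [userName] nvarchar(MAX), [role] nvarchar(MAX));"
    $grant  = "CREATE USER [GEEKINTHEWEEDS\svc-adfs] FOR LOGIN [GEEKINTHEWEEDS\svc-adfs]; " +
              "ALTER ROLE db_datareader ADD MEMBER [GEEKINTHEWEEDS\svc-adfs];"
    Invoke-Sqlcmd -ServerInstance ".\SQLEXPRESS" -Database "hrdb" -Query $create
    Invoke-Sqlcmd -ServerInstance ".\SQLEXPRESS" -Database "hrdb" -Query $grant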

After the service account was given appropriate permissions, the next step was to configure the database as an attribute store in AD FS.  To do that I opened the AD FS management console, expanded the Service node, right-clicked on the Attribute Stores node, and selected the Add Attribute Store option.  I used mysql as the store name and selected the SQL option from the drop-down box.  My SQL was a bit rusty so the connection string took a few tries to get right.

Screen Shot 2017-05-28 at 9.28.35 PM
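For reference, when the database is local to the AD FS server and the service account's Windows identity is used, a connection string along these lines is typically all the store needs (hedged; adjust the instance name to your environment):

    Server=.\SQLEXPRESS;Database=hrdb;Integrated Security=SSPI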

I then created a new claim description to hold the role information I was pulling from the SQL database.

Screen Shot 2017-05-28 at 9.33.12 PM.png

The last step in the process was to create some claim rules to pull data from the SQL database.  Pulling data from a MS SQL datastore requires the use of custom claim rules.  If you’re unfamiliar with the custom claim language, the following two links are two of the best I’ve found on the net:

The first claim rule I created was a rule to query Active Directory via LDAP for the SAM-Account-Name attribute.  This is the attribute I would be using to query the SQL database for the user’s unique record.

Screen Shot 2017-05-28 at 9.42.05 PM.png

Next up was my first custom claim rule, where I queried the SQL database for the record whose userName column matched the SAM-Account-Name value pulled in the earlier step, and requested back the value in the email column of the record that was returned. Since I wanted to do some transforming of the information in a later step, I added the claim to the incoming claim set.

Screen Shot 2017-05-28 at 9.42.39 PM
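Because the rule descriptions can be hard to visualize, here's a minimal sketch of what the pair can look like in the AD FS claim rule language.  The store name mysql and the table/column names come from this post; the intermediate claim type URI is a placeholder I made up, and {0} is how attribute store queries substitute the param value.

    @RuleName = "Get sAMAccountName from AD"
    c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
     => add(store = "Active Directory", types = ("http://geekintheweeds.com/claims/samaccountname"), query = ";sAMAccountName;{0}", param = c.Value);

    @RuleName = "Get email from SQL by userName"
    c:[Type == "http://geekintheweeds.com/claims/samaccountname"]
     => add(store = "mysql", types = ("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"), query = "SELECT email FROM dbo.EmployeeInfo WHERE userName = {0}", param = c.Value);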

I then issued another query for the value in the role column.

Screen Shot 2017-05-28 at 9.48.14 PM

Finally, I performed some transforms to verify I was getting the appropriate data.  I converted the email address claim type to the Common Name type and the custom role claim description I referenced above to the out-of-the-box role claim description.  I then hit the endpoint for the sample claims app and… VICTORY!

Screen Shot 2017-05-28 at 9.52.29 PM

Simple right?  Well it would be if this information had been documented within a single link.  Either way, I had some good lessons learned that I will share with you now:

  • Do NOT copy and paste claim rules.  I chased a number of red herrings trying to figure out why my claim rule was being rejected.  More than likely the copy/paste added an invalid character I was unable to see.
  • Brush up on your MS SQL before you attempt this.  My SQL was super rusty and it caused me to go down a number of paths which wasted time.  Thankfully, my coworker Jeff Lee was there to add some brain power and help work through the issues.

Before I sign off, I want to thank my coworker Jeff Lee for helping out on this one.  It was a great learning experience for both of us.

Thanks and have a wonderful Memorial Day!

A taste of the cloud with Symantec.cloud Email Security

Recently, I had a chance to demo three of Symantec's cloud services: Symantec.cloud Email Security, Email Encryption, and Endpoint Security. Since there do not seem to be too many detailed reviews of Symantec's cloud services floating around on the Net, I figured I'd relay my experiences with the service.

Today I will be talking about Symantec.cloud Email Security.

Symantec.cloud Email Security

Host-based anti-malware is all well and good, but stopping malware from ever entering the network is even better. That is where the Email Security portion of Symantec's cloud services comes in. The service aims to provide anti-spam, anti-virus, image control, and content control of email through a single cloud-based service managed using a web-based client portal.

Setup was fairly easy; it involved filling out some paperwork with information about the network and submitting it to Symantec to aid in the configuration of the client portal. After about two weeks, we were provided with an email with further instructions on how to configure the mail servers. The service requires directing the mail server to send all mail to Symantec through a TLS-encrypted connection, adjusting MX records to point to Symantec's servers, and optionally locking down SMTP ports to allow traffic to and from only Symantec servers. Performing all three of these tasks has the added benefit of decreasing the attack surface of the network by locking down SMTP and providing additional confidentiality of email data as it flows between the company's network and Symantec's mail servers (I'll talk more about this when I review the Encryption portion of the service). Symantec support replied quickly to any problems we ran into during the setup.

Once everything was in working order, we were provided access to the web-based portal.

The summary window provides graphical representations and statistics showing total incoming and outgoing email, emails containing viruses, spam, blocked images, and blocked content. It is pretty typical of the summary windows you encounter in any type of enterprise level security software, nothing too special. Although it is neat to watch the amount of spam rise and fall depending on the day of the week (it seems even spammers enjoy taking the weekend off.)

The anti-virus piece of the service utilizes Symantec's vast database of virus signatures to detect and remove viruses from email. The added benefit of this is better detection of zero-day threats since you are not sitting around waiting for the DATs to be released and pushed to your local gateway anti-malware solution. There is not much configuration to this portion of the service.

The anti-spam portion of the service uses typical anti-spam lists as well as heuristics and a signature system. From our testing, the lists catch close to 50% of the spam, with the heuristics and the signature system each catching about 25%. We were really impressed with the effectiveness of the filter. Almost no spam made it through, and only about 1 out of every 10,000 emails was a false positive.

The configuration options are pretty typical of any anti-spam solution. Email can be sent to a quarantine hosted by Symantec, passed along with an appended header, blocked, or sent to a bulk email address. The quarantine feature was pretty nice, as each user can be given access to his or her quarantined emails to restore any false positives. Another available option is to give a single user control over the quarantines of multiple users. The only downside to this option is that control of the quarantines cannot be shared among users. Hopefully that is a feature Symantec will add in the future.

The content control feature of the service is really intense, as you can see from the screenshot below. I won't go too in depth into it because I didn't spend too much time playing with it. Suffice it to say it is pretty awesome. Want to know if your users are sending personally identifiable information out over email without encrypting it? That can be done, even going so far as scanning Microsoft Office documents and OCR'd PDFs. Suspect a certain user of sending company info out of the network without permission? Set up a rule to copy his or her email with specific attachment types to another email address for further review.

The system uses templates to detect patterns, and the templates can be created by the user or with help from Symantec. We had Symantec help us create a template to detect bank account numbers and tested it by sending some Excel documents through the system with fictitious account numbers. The system caught the emails with the pattern and notified us of the email address used to send them. These caught emails can be let through, tagged, logged, deleted, redirected to the administrator, or copied to the administrator. I can see this coming in handy when trying to prevent a data breach of confidential information or during an internal investigation of an employee's email activities.

I didn’t play with the image control feature of the service. It looks to be as intense as the content control from the little that I looked at.

If you are in need of a customized report from the system, you can request Symantec to create one for you. I didn’t end up utilizing this feature, so I can’t say what the response time from support is.

Overall, I was really impressed with the service. The spam filter was amazing and the anti-virus seemed solid. I can think of a thousand uses for the content control feature and can't wait to play with it further. Cost isn't too bad, about $4,000/year for 50 users. If you are looking to free up some server resources, consolidate software packages, and possibly increase your network security, Symantec's cloud Email Security service is something to look at.