The Evolution of AD RMS to Azure Information Protection – Part 6 – Deep Dive into Client Bootstrapping

Today I'm back with more Azure Information Protection (AIP) goodness.  Over the past five posts I've covered the use cases, concepts, and migration paths.  Today I'm going to get really nerdy and take a look behind the curtains at how the MSIPC client shipped with Office 2016 interacts with AIP.  I'll be examining the MSIPC client log and reviewing procmon and Fiddler captures.  If the thought of examining log files and SOAP calls excites you, this is the post for you.  Make sure to read through my previous posts to ensure you understand my lab infrastructure and configuration as well as key AIP concepts.

Baselining the Client

Like any good engineer, I wanted to baseline my machine to ensure the MSIPC client was functioning correctly.  Recall that my clients are migrating from an on-premises AD RMS implementation to AIP.  I haven’t completed my removal of AD RMS so the service connection point for on-premises AD RMS is still there and the migration scripts Microsoft provides are still in use.  Let’s take a look at the registry entries that are set via the Migrate-Client and Migrate-User script.  In my last post I covered the purpose of the two scripts.  For the purposes of this post, I’m going to keep it brief and only cover registry entries applicable to the MSIPC client shipped with Office 2016.

  1. Migrate-Client
    • Condition: Runs at each computer startup, but only if it detects it has not run before or the version variable in the script has changed.
    • Registry Entries Modified:
      • Deletes HKLM\Software\Microsoft\MSIPC\ServiceLocation keys
      • Deletes HKLM\Software\Wow6432Node\Microsoft\MSIPC\ServiceLocation key
      • Deletes HKLM\Software\Microsoft\MSIPC\ServiceLocation\LicensingRedirection key
      • Deletes HKLM\Software\Wow6432Node\Microsoft\MSIPC\ServiceLocation\LicensingRedirection key
      • Adds a Default value to the HKLM\Software\Microsoft\MSIPC\ServiceLocation\EnterpriseCertification key with its data pointing to the AIP endpoint for the tenant
      • Adds a Default value to the HKLM\Software\Wow6432Node\Microsoft\MSIPC\ServiceLocation\EnterpriseCertification key with its data pointing to the AIP endpoint for the tenant
      • Adds values for the FQDN and single-label URLs of the on-premises AD RMS licensing pipeline to the HKLM\Software\Microsoft\MSIPC\ServiceLocation\LicensingRedirection key with data pointing to the AIP endpoints for the tenant
      • Adds values for the FQDN and single-label URLs of the on-premises AD RMS licensing pipeline to the HKLM\Software\Wow6432Node\Microsoft\MSIPC\ServiceLocation\LicensingRedirection key with data pointing to the AIP endpoints for the tenant
  2. Migrate-User
    • Condition: Runs at each user logon, but only if it detects it has not run before or the version variable in the script has changed.
    • Registry Entries Modified:
      • Deletes HKCU\Software\Microsoft\Office\16.0\Common\DRM key
      • Deletes HKCU\Software\Classes\Local Settings\Software\Microsoft\MSIPC key
      • Deletes HKCU\Software\Classes\Microsoft.IPViewerChildMenu\shell key
      • Adds a DefaultServerUrl value to the HKCU\Software\Microsoft\Office\16.0\Common\DRM key and sets its data to the AIP endpoint for the tenant
    • Files Modified:
      • Deletes the contents of the %localappdata%\Microsoft\MSIPC folder

A quick review of my client settings validates that all the necessary registry entries are in place and I have no issues consuming and creating protected content.
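
If you'd rather script that sanity check than click through regedit, the keys are easy to dump programmatically. Here's a minimal Python sketch (a hypothetical verify script of my own, not something Microsoft ships) that reads the certification and licensing redirection keys listed above:

```python
import winreg

# Key paths set by the Migrate-Client script (64-bit and 32-bit views).
PATHS = [
    r"Software\Microsoft\MSIPC\ServiceLocation\EnterpriseCertification",
    r"Software\Wow6432Node\Microsoft\MSIPC\ServiceLocation\EnterpriseCertification",
    r"Software\Microsoft\MSIPC\ServiceLocation\LicensingRedirection",
    r"Software\Wow6432Node\Microsoft\MSIPC\ServiceLocation\LicensingRedirection",
]

for path in PATHS:
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            # Enumerate every value under the key; the Default value
            # comes back with an empty string for its name.
            index = 0
            while True:
                try:
                    name, data, _ = winreg.EnumValue(key, index)
                    print(f"{path}\\{name or '(Default)'} = {data}")
                    index += 1
                except OSError:
                    break  # no more values under this key
    except FileNotFoundError:
        print(f"MISSING: HKLM\\{path}")
```

Run it after the Migrate-Client script has fired and you should see your tenant's AIP endpoints echoed back for each key.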

Resetting the Client

If you have administered AD RMS in the past, you will be very familiar with how to re-bootstrap an RMS client.  Microsoft has made that entire process easier by incorporating a "reset" function into the AIP client.  The function can be accessed in Microsoft Office by hitting the drop-down arrow for the AIP icon on the toolbar and selecting the Help and Feedback option.

6AIP1.png

After clicking the Help and Feedback option, a new window pops up where you can select the Reset Settings option, which performs a series of changes to the registry, deletes RMS licenses, and removes AIP metadata.  Lastly, I log out of the machine.

6AIP2.png


Bootstrapping the Client with Azure Information Protection

After logging back in I start up Fiddler, open Microsoft Word, and attempt to open a file that was protected with my AD RMS cluster. The file opens successfully.

One thing to note: if you're using Windows 10 and Microsoft Edge like I was, you'll need to take the extra steps outlined here to capture traffic successfully due to the AppContainer isolation feature added back in Windows 8. If you do not take those extra steps, you'll get really odd behavior. Microsoft Edge will fail any calls to intranet endpoints (such as AD FS in my case) saying it can't contact the proxy. Trying with Internet Explorer will simply cause Fiddler to fail to make the calls and throw a DNS error. Suffice it to say, I spent about 20 minutes troubleshooting the issue before I remembered the dialog box Fiddler pops up on every new install about AppContainer and Microsoft Edge.

The first thing we're going to look at is the MSIPC log files, which keep track of client activity. I have to applaud whichever engineer over at Microsoft thought it would be helpful to include such a detailed log. If you've administered on-premises AD RMS in the past on previous versions of Microsoft Office, you'll know the joys (pain?) of client-side tracing with DebugView.

When we pop open the log we get some great detail as to the client behavior. We can see the client read a number of registry entries. The first thing we see is the client discover that it is not initialized, so it calls an API to bootstrap the user. Notice below that it has identified my user and mentions OAuth as a method for authentication to the endpoint.

6AIP3.png

Following this we have a few more registry queries to discover the version of the operating system. We then have our first HTTP session opened by the client. I'm pretty sure this session is the initial user authentication to Azure AD in order to obtain a bearer access token the user can use to call further APIs.

6AIP4.png

Bouncing over to Fiddler we can check out the authentication process. We can see the machine reach out to Azure AD (login.windows.net) and perform home realm discovery, where Azure AD determines that geekintheweeds.com is configured for federated authentication. The client makes the connection to the AD FS server where the user is seamlessly authenticated via Kerberos. The windowstransport endpoint is called, which supports the WS-Trust 1.3 active profile.  In a WS-Trust active flow, the client initiates the request (hence "active") versus the passive flow where the service provider initiates it.  This is how Office applications support modern (aka federated) authentication.

6AIP5

After the assertion is obtained, it's posted to the /common/oauth2/token endpoint at login.windows.net.  The assertion is posted within a request for an access token, refresh token, and ID token using the saml1_1-bearer token grant type for the Azure RMS endpoint.

6AIP6.png
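
If you want to replay that grant outside of Office, it's just a form POST. Below is a minimal Python sketch of the exchange; the client_id and resource values are placeholders and assumptions (pull the real ones from your own Fiddler capture), and the assertion is the SAML token lifted from the AD FS response:

```python
import base64
import requests

# Placeholder -- substitute the SAML assertion captured from AD FS.
saml_assertion = "<saml:Assertion ...>...</saml:Assertion>"

token_request = {
    # Grant type for exchanging a SAML 1.1 assertion for OAuth tokens
    "grant_type": "urn:ietf:params:oauth:grant-type:saml1_1-bearer",
    "assertion": base64.b64encode(saml_assertion.encode()).decode(),
    "client_id": "00000000-0000-0000-0000-000000000000",  # placeholder
    "resource": "https://api.aadrm.com",  # Azure RMS resource (assumption)
    "scope": "openid",
}

response = requests.post(
    "https://login.windows.net/common/oauth2/token", data=token_request
)
tokens = response.json()
print(list(tokens.keys()))  # access_token, refresh_token, id_token, ...
```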

The machine is returned an access token, refresh token, and ID token.  We can see the token returned is a bearer token, allowing the client to act on behalf of my user moving forward.

6AIP7.png

Dumping the access token into the Fiddler TextWizard and decoding the Base64 gives us the details of the token.  Within the token we can see an amr (authentication method reference) claim of wia, indicating the user authenticated using Windows integrated authentication.  A variety of information about the user is included in the token, including UPN, first name, and last name.

6AIP8.png
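
You don't actually need the TextWizard for this; the claims segment of a JWT is just base64url-encoded JSON. A small Python sketch that does the same decode (no signature validation, so for inspection only):

```python
import base64
import json

access_token = "eyJ0eXAiOiJKV1QiLCJhbGciOi..."  # paste the raw JWT here

def decode_jwt_payload(token: str) -> dict:
    """Decode the middle (claims) segment of a JWT without validating it."""
    payload = token.split(".")[1]  # header.payload.signature
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

claims = decode_jwt_payload(access_token)
print(claims.get("amr"))  # e.g. ["wia"] for Windows integrated auth
print(claims.get("upn"), claims.get("given_name"), claims.get("family_name"))
```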

I’m fairly certain the tokens are cached to a flat file based upon some of the data I did via procmon while the bootstrap process initiated.  You can see the calls to create the file and write to it below.

6AIP9

After the tokens are obtained and cached we see from the log file that the MSIPC client then discovers it doesn’t have machine certificates.  It goes through the process of creating the machine certificates.

6AIP10.png

We now see the MSIPC client attempt to query for the SRV record Microsoft introduced with Office 2016 to help with migrations from AD RMS.  The client then attempts service discovery by querying the RMS-specific registry keys in the HKLM hive and comes across the information we hardcoded into the machine via the migration scripts.  It uses this information to make a request to the unauthenticated endpoint at https://<tenant_specific>/_wmcs/certification/server.asmx.

6AIP11
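
If you're curious what that SRV lookup would return in your environment, the third-party dnspython package makes it trivial to reproduce. The _rmsdisco._http._tcp label below is the record name Microsoft documents for these migrations, but treat it as something to verify against your own capture:

```python
import dns.resolver  # third-party: pip install dnspython

# SRV record the Office 2016 RMS client checks during discovery; append
# the domain of your AD RMS cluster URL to the service labels.
record = "_rmsdisco._http._tcp.geekintheweeds.com"

try:
    for answer in dns.resolver.resolve(record, "SRV"):  # dnspython 2.x API
        print(f"{record} -> {answer.target}:{answer.port}")
except dns.resolver.NXDOMAIN:
    print(f"No migration SRV record published at {record}")
```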

Bouncing back to Fiddler and continuing the conversation we can see a few different connections are created.  We see one to api.informationprotection.azure.com, another to mobile.pipe.aria.microsoft.com, and yet another to the AIP endpoint for my tenant.

6AIP12.png

I expected the conversation between api.informationprotection.azure.com and the AIP endpoint for my tenant.  The connection to mobile.pipe.aria.microsoft.com interested me.  I'm not sure if it was randomly captured or if it was part of the consumption of protected content.  I found a few Reddit posts where people theorized it has something to do with how Microsoft consumes telemetry from Microsoft Office.  As you can probably guess, this piqued my interest to know exactly what Microsoft was collecting.

We can see from the Fiddler captures that an application on the client machine is posting data to https://mobile.pipe.aria.microsoft.com/Collector/3.0/.  Examination of the request header shows the user agent as AriaSDK Client and the sdk-version as ACT-Windows Desktop.  This looks to be the method by which the telemetry agent for Office collects its information.

6AIP13.png

If we decode the data within Fiddler and dump both sets of data to Notepad, we get some insight into what's being pulled. Most of the data is pretty generic: information about the version of Word I'm using, the operating system version, an indicator that my machine is a virtual machine, and some activity IDs which must relate to something Microsoft holds on its end. The only data point I found interesting was that my tenant ID is included. Granted, a tenant ID isn't exactly a secret, but it's still interesting that it's collected. It must be fascinating to see this telemetry at scale.

6AIP14.png

Continuing the conversation, let's examine the chatter with my tenant's AIP endpoint since the discovery was requested by the MSIPC client.  We see a SOAP request of GetServerInfo posted to https://<tenant_specific>/_wmcs/certification/server.asmx.  The response we receive from the endpoint has all the information our RMS client will need to process the request.  My deep dive into AD RMS was before I got my feet wet with Fiddler, so I've never examined the conversations with the SOAP endpoints within AD RMS.  Future blog post maybe?  Either way, I've highlighted the interesting informational points below.  We can see that the service identifies itself as RMS Online, has a set of features that cater to modern authentication, runs in Cryptographic Mode 2, and supports a variety of authentication methods.  I'm unfamiliar with the authentication types beyond X509 and OAuth 2.  Maybe carryovers from on-prem?  Something to explore in the future.  The data boxed in red are all the key endpoints the RMS client needs to know to interact with the service moving forward.  Take note that the request at this endpoint doesn't require any authentication.  That comes in later requests.

6AIP15.png
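
Since this endpoint is unauthenticated, you can poke at it yourself with a bare SOAP POST. The envelope and SOAPAction below are assumptions on my part; lift the exact body from your own Fiddler capture before trying this:

```python
import requests

url = "https://<tenant_specific>/_wmcs/certification/server.asmx"

# Approximate envelope -- copy the actual GetServerInfo body and namespace
# from the Fiddler capture; the namespace below is an assumption.
soap_body = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetServerInfo xmlns="http://microsoft.com/DRM/ServerInfoService" />
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    url,
    data=soap_body.encode(),
    headers={
        "Content-Type": "text/xml; charset=utf-8",
        # SOAPAction value is also an assumption -- verify in the capture.
        "SOAPAction": '"http://microsoft.com/DRM/ServerInfoService/GetServerInfo"',
    },
)
print(response.status_code)
print(response.text)  # features, crypto mode, and service endpoints
```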

After the response is received, the MSIPC client writes a whole bunch of registry entries to the HKCU hive for the user to cache all the AIP endpoint information it discovered.  It then performs service discovery against the authenticated endpoint using the bearer token it cached in the tokencache file.

6AIP16.png

Once the information is written to the registry, the client initiates a method called GetCertAndLicURLsWithNewSD.  It uses the information it discovered previously to query the protected endpoint https://<tenant_specific>/_wmcs/oauth2/servicediscovery/servicediscovery.asmx.  Initially it receives a 401 Unauthorized back with instructions to authenticate using a bearer token.

6AIP17.png

The client tries again, this time providing the bearer token it obtained earlier and placed in the tokencache file.  The SOAP action of ServiceDiscoveryForUser is performed and the client requests the specific endpoints for certification, licensing, and the new tracking portal feature of AIP.

6AIP18.png
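
That 401-then-retry dance is standard OAuth bearer behavior and easy to reproduce. A sketch of the pattern (again, the SOAP body itself is a placeholder to lift from the capture):

```python
import requests

url = "https://<tenant_specific>/_wmcs/oauth2/servicediscovery/servicediscovery.asmx"
access_token = "eyJ0eXAiOiJKV1QiLCJhbGciOi..."  # token cached earlier
soap_body = b"..."  # ServiceDiscoveryForUser envelope from the Fiddler capture
headers = {"Content-Type": "text/xml; charset=utf-8"}

# First attempt with no credentials: expect a 401 whose WWW-Authenticate
# header tells us the endpoint wants a bearer token.
first = requests.post(url, data=soap_body, headers=headers)
print(first.status_code, first.headers.get("WWW-Authenticate"))

# Retry with the access token the client cached in its tokencache file.
headers["Authorization"] = f"Bearer {access_token}"
second = requests.post(url, data=soap_body, headers=headers)
print(second.status_code)  # 200 with the per-user service endpoints
```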

The SOAP response contains the relevant service endpoints and the user to which the query applied.

6AIP19.png

The MSIPC client then makes a call to /_wmcs/oauth2/certification/server.asmx with a SOAP request of GetLicensorCertificate.  I won't break that response down, but it returns the SLC certificate chain in XrML format.  For my tenant this included both the new SLC I generated when I migrated to AIP as well as the SLC from my on-premises AD RMS cluster that I uploaded.

6AIP20.png

The MSIPC log now shows a method called GetNewRACandCLC being called, which is used to obtain a RAC and a CLC. This is done by making a call to the certification pipeline.

6AIP21.png

The call to /_wmcs/oauth2/certification/certification.asmx does exactly what you would expect and makes the SOAP request of Certify. The response included my user's RAC along with both SLCs and the certificates in their chains. The one interesting piece in the response was a Quota tag, as seen below. I received back five certificates, so maybe there is a maximum that can be returned? If you have more than four on-premises AD RMS clusters you're consolidating to AIP, you might be in trouble. 🙂

6AIP22.png

The MSIPC log captures the successful certification and logs information about the RAC.

6AIP23.png

Next the client attempts to obtain a CLC by continuing with the GetNewRACandCLC method. It first calls the /_wmcs/licensing/server.asmx pipeline and makes a GetServerInfo SOAP request, which returns the same information we saw in the last request to server.asmx. This request isn't authenticated, and the information returned is written to the HKCU hive for the user.

6AIP24.png

The service successfully returns the user's CLC.  The last step in the process is the MSIPC client requesting the RMS templates associated with the user.  You can see the template associated with the custom AIP classification label I created.

6AIP25.png

Last but not least, the certificates are written to the %LOCALAPPDATA%\Microsoft\MSIPC directory.

6AIP26.png

Conclusion

Very cool stuff, right? I find it interesting that the MSIPC client performs pretty much the same way it does with on-premises AD RMS, excepting some of the additional capabilities introduced such as the search for the DNS SRV record and the ability to leverage modern authentication via the bearer token. The improved log is a welcome addition and again, stellar job to whatever engineer at Microsoft thought it would be helpful to include all the detail that is included in that log.

If you've used AD RMS or plan to use AIP and haven't peeked behind the curtains, I highly recommend it. Seeing how all the pieces fit together and how a relatively simple web service and a creative use of certificates can provide such a robust and powerful security capability will make you appreciate the service AD RMS tried to be and how far ahead of its time it was.

I know I didn't cover the AIP classification-specific web calls, but I'll explore those in my next entry.  Hopefully you enjoyed nerding out on this post as much as I did. Have a great week and see you next post!

The Evolution of AD RMS to Azure Information Protection – Part 5 – Client-Side Migration and Testing

Welcome to the fifth entry in my series on the evolution of Microsoft’s Active Directory Rights Management Service (AD RMS) to Azure Information Protection (AIP).  We’ve covered a lot of material over this series.  It started with an overview of the service, examined the different architectures, went over key planning decisions for the migration from AD RMS to AIP, and left off with performing the server-side migration steps.  In this post we’re going to round out the migration process by performing a staged migration of our client machines.

Before we jump into this post, I’d encourage you to refresh yourself with my lab setup and the users and groups I’ve created, and finally the choices I made in the server side migration steps.  For a quick reference, here is the down and dirty:

  • Windows Server 2016 Active Directory forest named GEEKINTHEWEEDS.COM with servers running Active Directory Domain Services (AD DS), Active Directory Domain Name System (AD DNS), Active Directory Certificate Services (AD CS), Active Directory Federation Services (AD FS), Active Directory Rights Management Services (AD RMS), Azure Active Directory Connect, and Microsoft SQL Server Express.
  • Forest is configured to synchronize to Azure AD using Azure AD Connect and uses federated authentication via AD FS
  • Users Jason Voorhies and Ash Williams will be using a Windows 10 client machine with Microsoft Office 2016 named GIWCLIENT1
  • Users Theodore Logan and Michael Myers will be using a Windows 10 client machine with Microsoft Office 2016 named GIWCLIENT2
  • Users Jason Voorhies and Theodore Logan are in the Information Technology Windows Active Directory (AD) group
  • Users Ash Williams and Michael Myers are in the Accounting Windows AD group
  • Onboarding controls have been configured for a Windows AD group named GIW AIP Users of which Jason Voorhies and Ash Williams are members

Prepare The Client Machine

To take advantage of the new features AIP brings to the table we’ll need to install the AIP client. I’ll be installing the AIP client on GWCLIENT1 and leaving the RMS client installed by Office 2016 on GWCLIENT2. Keep in mind the AIP client includes the RMS client (sometimes referred to as MSIPC) as well.

If you recall from my last post, I skipped a preparation step that Microsoft recommends for client machines. The step has you download a ZIP containing some batch scripts that are used for performing a staged migration of client machines and users. The preparation script Microsoft recommends running prior to any server-side configuration is Prepare-Client.cmd.  In an enterprise environment it makes sense, but for this very controlled lab environment it wasn't needed prior to server-side configuration. It's a simple script that modifies the client registry to force the RMS client on the machines to go to the on-premises AD RMS cluster even if they receive content that's been protected using an AIP subscription. If you're unfamiliar with the order in which the MSIPC client discovers an AD RMS cluster, I did an exhaustive series a few years back.  In short, hardcoding the information in the registry prevents the client from reaching out to AIP and potentially causing issues.

As a reminder, I'll be running the script on GIWCLIENT1 and not on GIWCLIENT2.  After the ZIP file is downloaded and the script is unpackaged, it needs to be opened with a text editor and the OnPremRMSFQDN and CloudRMS variables need to be set to your on-premises AD RMS cluster and AIP tenant endpoint. Once the values are set, run the script.

5AIP1.png

Install the Azure Information Protection Client

Now that the preparation step is out of the way, let’s get the AIP client installed. The AIP client can be downloaded directly from Microsoft. After starting the installation you’ll first be prompted as to whether you want to send telemetry to Microsoft and use a demo policy.  I’ll be opting out of both (sorry Microsoft).

5AIP2.png

After a minute or two the installation will complete successfully.

5AIP3.png

At this point I log out of the administrator account and log in as Jason Voorhies. Opening Windows Explorer and right-clicking a text file shows we now have the Classify and protect option, which lets us classify and protect files outside of Microsoft Office.

5AIP4.png

Testing the Client Machine Behavior Prior to Client-Side Configuration

I thought it would be fun to see what the client machine's behavior would be after the AIP client was installed but before I finished Microsoft's recommended client-side configuration steps. Recall that GIWCLIENT1 has previously been bootstrapped against the on-premises AD RMS cluster, so let's reset the client after observing the current state of both machines.

Notice on GIWCLIENT1 the DefaultServer and DefaultServerUrl values in HKCU\Software\Microsoft\Office\16.0\Common\DRM do not exist even though the client was previously bootstrapped for the on-premises AD RMS instance. GIWCLIENT2, which has also been bootstrapped, has the entries defined.

5AIP5.png

I’m fairly certain AIP cleared these out when it tried to activate when I started up Microsoft Word prior to performing these steps.

Navigating to HKCU\Software\Classes\Local Settings\Software\Microsoft\MSIPC shows a few slight differences as well. On GIWCLIENT1 there are two additional entries, one for the discovery point for Azure RMS and one for JOG.LOCAL's AD RMS cluster. The JOG.LOCAL entry exists on GIWCLIENT1 and not on GIWCLIENT2 because of the baseline testing I did previously.

5AIP6.png

5AIP7.png

Let's take a look at the location where the RMS client stores its certificates, which is %LOCALAPPDATA%\Microsoft\MSIPC.  On both machines we see the expected copy of the public-key CLC certificate, the machine certificate, the RAC, and use licenses for documents that have been opened.  Notice that even though the AD RMS cluster is running in Cryptographic Mode 1, the machine still generates a 2048-bit key as well.

5AIP8.png

5AIP9

Now that the RMS client is reset on GIWCLIENT1, let's go ahead and see what happens when the RMS client tries a fresh activation after AIP has been installed but the client-side configuration has not yet been completed.

After opening Microsoft Word I create a new document. Notice that the labels displayed in the AIP bar include a custom label I had previously defined in the AIP blade.

5AIP10.png

I then go back to the File tab on the ribbon and attempt to use the classic way of protecting a document via the Restrict Access option.

5AIP11.png

After selecting the Connect to Rights Management Servers and get templates option, the client successfully bootstraps back to the on-premises AD RMS cluster, as can be seen from the certificates available to the client and the fact that all necessary certificates were re-created in the MSIPC directory.

5AIP12

5AIP13.png

That's Microsoft Office, but what about the scenario where I attempt to use the AIP client add-in for Windows Explorer?

To test this behavior I created a PDF file named testfile.pdf.  Right-clicking and selecting the Classify and protect option opens the AIP client to display the default set of labels as well as a new GIW Accounting Confidential label.

5AIP14.png

If I select that label and hit Apply I receive the error below.

5AIP15.png

The template can't be found because the client is trying to pull it from my on-premises AD RMS cluster.  Since I haven't yet run the scripts that migrate the client to AIP, the client can't reach the AIP endpoints to find the template associated with the label.

The results of these tests tell us two things:

  1. Installing the AIP Client on a client machine that already has Microsoft Office installed and configured for an on-premises AD RMS cluster won’t break the client’s integration with that on-premises cluster.
  2. The AIP client at some point authenticated to the Geek In The Weeds Azure AD tenant and pulled down the classification labels configured for my tenant.

In my next post I’ll be examining these findings more deeply by doing a deep dive of the client behavior using a combination of procmon, Fiddler, and WireShark to analyze the AIP Client behavior.

Performing Client-Side Configuration

Now that the client has been successfully installed, we need to override the behavior that was put in place with the Prepare-Client batch file earlier.  If we wanted to redirect all clients across the organization that are using Office 2016, we could use the DNS SRV record option listed in the migration article.  Microsoft added new behavior to the RMS client installed with Office 2016 such that it will perform a DNS lookup of the SRV record to see if a migration has occurred.

For the purposes of this lab I'll be using the Microsoft batch scripts I referenced earlier.  To override the behavior we put in place earlier with the Prepare-Client.cmd batch script, we'll need to run both the Migrate-Client and Migrate-User scripts.  I created a group policy object (GPO) that uses security filtering to apply only to GIWCLIENT1, which runs the Migrate-Client script as a Startup script, and a GPO that uses security filtering to apply only to the GIW AIP Users group, which runs the Migrate-User script as a Logon script.  This ensures only GIWCLIENT1, Jason Voorhies, and Ash Williams are affected by the changes.

You may be asking what the scripts actually do.  The goal of the two scripts is to ensure the client machines the users log into point them to Azure RMS versus an on-premises AD RMS cluster.  The scripts do this by adding and modifying registry keys used by the RMS client prior to the client searching for a service connection point (SCP).  The users will be redirected to Azure RMS when protecting new files as well as when consuming files that were previously protected by an on-premises AD RMS cluster.  This means you had better have performed the necessary server-side migration steps I went over previously, or else your users are going to be unable to consume previously protected content.
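
To make the registry mechanics concrete, here's a hedged Python sketch of the kind of writes the Migrate-Client script performs, based on the key list I walked through in my Part 6 deep dive. The URLs are placeholders for your cluster and tenant, it needs to run elevated, and it's illustrative only; use Microsoft's scripts for the real migration:

```python
import winreg

ON_PREM = "https://rms.geekintheweeds.com/_wmcs/licensing"  # placeholder
AIP = "https://<tenant_specific>/_wmcs/licensing"           # placeholder

for root in (r"Software\Microsoft\MSIPC\ServiceLocation",
             r"Software\Wow6432Node\Microsoft\MSIPC\ServiceLocation"):
    # Point certification at the AIP tenant endpoint (Default value).
    with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE,
                          root + r"\EnterpriseCertification") as key:
        winreg.SetValueEx(key, "", 0, winreg.REG_SZ,
                          "https://<tenant_specific>/_wmcs/certification")
    # Redirect requests bound for the on-premises licensing pipeline
    # to the AIP tenant instead.
    with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE,
                          root + r"\LicensingRedirection") as key:
        winreg.SetValueEx(key, ON_PREM, 0, winreg.REG_SZ, AIP)
```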

We'll dig more into the AIP/Office 2016 RMS client discovery process in the next post.

Preparing Azure Information Protection Policies

Prior to testing the whole package, I thought it would be fun to create some AIP policies. By default, Microsoft provides you with a default AIP policy called the Global Policy. It comes complete with a reasonably standard set of labels, a few of which have sublabels that apply protection in some circumstances. Due to the migration path I undertook as part of the demo, I had to enable protection for the All Employees sublabels of both the Confidential and Highly Confidential labels.

5AIP16.png

In addition to the global policy, I also created two scoped policies. One scoped policy applies to users within the GIW Accounting group and the other applies to users within the GIW Information Technology group. Each policy introduces another label and sublabel as seen in the screenshots below.

5AIP17.png

5AIP18.png

Both of the sublabels include protection restricting members of the relevant groups to the Viewer role only. We’ll see these policies in action in the next section.

Testing the Client

Preparation is done, the server-side migration is complete, and our test clients and users have now completed the documented migration process. The migration scripts performed the RMS client reset, so no need to repeat that process.

For the first test, let's try applying protection to the testfile.txt file I created earlier. Selecting the Classify and protect option opens up the AIP client and shows me the labels configured in my tenant that support classification and protection. Recall from the AIP client limitations that different file types have different capabilities. You can't exactly append any type of metadata to the content of a text file now, can you?

5AIP19.png


Selecting the IT Staff Only sublabel of the GIW IT Staff label and hitting the Apply button successfully protects the text file, and we see the icon and file type for the file change.  Opening the file in Notepad now displays a notice that the file is protected, and the data contained in the original file has been encrypted.

5AIP20.png

We can also open the file with the AIP Viewer which will decrypt the document and display the content of the text file.

5AIP24

Next we test in Microsoft Word 2016 by creating a new document named AIP_GIW_ALLEMP and classifying it with the Highly Confidential All Employees sublabel.  The sublabel adds protection such that all users in the GIW Employees group have Viewer rights.

5AIP21

5AIP22

Opening the AIP_GIW_ALLEMP Word document that was protected by Jason Voorhies is successful, and it shows Ash Williams has Viewer rights for the file.

5AIP23.png

Last but not least, let's open a document we previously protected with AD RMS named GIW_GIWALL_ADRMS.DOCX.  We're able to successfully open this file because we migrated the TPD used for AD RMS up to AIP.

5AIP25.png

At this point we've performed all the necessary steps of the migration.  What's left now are cleanup steps and planning for how you'll complete the rollout to the rest of your user base.  Not bad, right?

Over the next few posts I'll be doing a deep dive of the RMS client behavior when interacting with Azure Information Protection.  We'll do some procmon captures of the client's behavior while it performs its discovery process as well as examine the web calls it makes with Fiddler.  I'll also spend some time examining the AIP blade and my favorite feature of AIP, Tracking and Revocation.

See you next time!


Deep Dive into Azure AD Domain Services – Part 3

Well folks, it’s time to wrap up this series on Azure Active Directory Domain Services (AAD DS).  In my first post I covered the basic configurations of the managed domain and in my second post took a look at how well Microsoft did in applying security best practices and complying with NIST standards.  In this post I’m going to briefly cover the LDAPS over the Internet capability, summarize some key findings, and list out some improvements I’d like to see made to the service.

One of the odd features Microsoft provides with the AAD DS service is the ability to expose the managed domain over LDAPS to the Internet.  I really am lost as to the use case that drove the feature.  LDAP is very much a legacy on-premises protocol that has no place being exposed to the risks of the public Internet.  It's the last thing the industry should be encouraging.  Just because you can, doesn't mean you should.  Now let me step off the soapbox and take a look at the feature.

As I covered in my last post, LDAPS is not natively enabled in the managed domain.  The feature must be configured and enabled through the Azure Portal.  The configuration consists of uploading the private key and certificate the service will use in the form of a PKCS12 file (*.PFX).  The certificate has a few requirements that are outlined in the instructions above.  After the certificate is validated, it takes about 10-15 minutes for the service to become available.  Beyond enabling the service within the VNet, you additionally have the option to expose the LDAPS endpoint to the Internet.

3aads1.png

Microsoft provides instructions on how to restrict access to the endpoint to trusted IPs via a network security group (NSG) because, yeah, exposing an LDAP endpoint to the Internet is just a tad risky.  To lock it down you simply associate an NSG with the subnet AAD DS is serving.  Once that is done, enable the service via the option in the image above and wait about 10 minutes.  After the service is up, register an external DNS record for the service that points to the IP address noted under the properties section of the AAD DS blade and you're good to go.

For my testing purposes, I locked the external LDAPS endpoint down to the public IP address my Azure VM was SNATed to.  After that I created an entry in the hosts file of the VM mapping the external DNS name I gave the service (whyldap.geekintheweeds.com) to the public IP address of the LDAPS endpoint in order to bypass the split-brain DNS challenge.  Initiating a connection from LDP.EXE was a success.

3aads2.png
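
If you'd rather script the smoke test than fire up LDP.EXE, the third-party ldap3 package gets it done in a few lines. A minimal sketch, assuming a simple bind with a UPN is permitted and skipping certificate validation for lab purposes:

```python
import ssl
from ldap3 import Connection, Server, Tls

# Lab-only: skip chain validation since the name resolves via a hosts entry.
tls = Tls(validate=ssl.CERT_NONE)
server = Server("whyldap.geekintheweeds.com", port=636, use_ssl=True, tls=tls)

conn = Connection(
    server,
    user="someuser@geekintheweeds.com",  # placeholder lab user
    password="...",
    auto_bind=True,  # raises an exception if the bind fails
)
print(conn.extend.standard.who_am_i())
```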

Now that we know the service is running, let’s check out what the protocol support and cipher suite looks like.

3aads3.png

Again we see the use of deprecated cipher suites. Here the risk is that much greater since a small mistake with an NSG could expose this endpoint directly to the Internet.  If you’re going to use this feature, please just don’t.  If you’re really determined to, don’t screw up your NSGs.
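
You can verify what actually gets negotiated with nothing more than Python's standard ssl module, which is handy if you want to re-check the endpoint later:

```python
import socket
import ssl

host, port = "whyldap.geekintheweeds.com", 636

# Lab-only: skip certificate validation so the probe works against a
# hosts-file entry or a private CA.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((host, port)) as raw:
    with ctx.wrap_socket(raw, server_hostname=host) as tls:
        print(tls.version())  # e.g. TLSv1.2
        print(tls.cipher())   # negotiated cipher suite tuple
```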

This series was probably one of the more enjoyable series I’ve done since I knew very little about the AAD DS offering. There were a few key takeaways that are worth sharing:

  • The more objects in the directory, the more expensive the service.
  • Users and groups can be created directly in the managed domain after a new organizational unit is created.
  • Password and lockout policy is insanely loose, to the point where I can create an account with a three-character password (it just needs to meet complexity requirements) and accounts never lock out.  The policy cannot be changed.
  • RC4 encryption ciphers are enabled and cannot be disabled.
  • NTLMv1 is enabled and cannot be disabled.
  • The service does not support smart card-enforced users.  Yes, that includes both the users synchronized from Azure AD as well as any users you create directly in the managed domain.  If I had to guess, it's probably due to the fact that you're not a Domain Admin, so you can't add to the NTAuth certificate store.
  • LDAPS is not enabled by default.
  • Schema extensions are not supported.
  • Account-Based Kerberos Delegation is not supported.
  • If you are syncing identities to Azure AD, you’ll also need to synchronize your passwords.
  • The managed domain is very much “out of the box” defaults.
  • Microsoft creates a “god” account which is a permanent member of every privileged group in the forest
  • Recovery of deleted objects created directly in the managed domain is not possible.  The rights have not been delegated to the AAD DC Administrators group.
  • The service does not allow for Active Directory trusts
  • The SIDHistory attribute of users and groups sourced from Azure AD is populated with the primary group from the on-premises domain

My verdict on AAD DS is that it's not a very useful service in its current state.  Beyond small organizations, organizations with little to no dependence on legacy infrastructure, organizations without strong security requirements, and dev/QA purposes, I don't see much use for it right now.  It comes off as a service in its infancy that has a lot of room to grow and mature.  Microsoft has gone a bit too far in the standardization/simplicity direction and needs to shift a bit in the opposite direction by allowing for more customization, especially in regards to security.

I’d really like to see Microsoft introduce the capabilities below.   All of them should  exposed via the resource blade in the Azure Portal if at all possible.  It would provide a singular administration point (which seems to be the strategy given the move of Azure AD and Intune to the Azure Portal) and would allow Microsoft to control how the options are enabled in the managed domain.  This means no more administrators blowing up their Active Directory forest because they accidentally shut off all the supported cipher suites for Kerberos.

  • Expose Domain Controller Event Logs to Azure Portal/Graph API and add support for AAD DS Power BI Dashboards
  • Support for Active Directory trusts
  • Out of the box provide a Red Forest model (get rid of that “god” account)
  • Option to disable risky cipher suites for both Kerberos and LDAPS
  • Option to harden the password and lockout policy
  • Option to disable NTLMv1
  • Option to turn on LDAP Debug Logging
  • Option to direct Domain Controller event logs to a SIEM
  • Option to restore deleted users and groups that were created directly in the managed domain.  If you allow creation, you need to allow restoration.
  • Removal of Internet-accessible LDAPS endpoint feature or at least somehow incorporate the NSG lockdown feature directly into the AAD DS blade.

While the service has a lot of room for improvement, the direction of a managed Windows AD offering is spot on.  In the year 2018, there is no reason Windows AD shouldn't be offered as a managed service.  The direction Microsoft has gone by sourcing the identities and credentials from Azure AD is especially creative.  It's a solid step in the direction of creating a singular centralized identity service that provides both legacy and modern protocols.  I'll be watching this service closely as Microsoft builds upon it over the next few months.

Thanks and see you next post!

Integrating Azure AD and G-Suite – Google API Integration Part 1

Hi everyone,

Welcome to the second post in my series on the integration between Azure Active Directory (Azure AD) and Google's G-Suite (formerly named Google Apps).  In my first entry I covered the single sign-on (SSO) integration between the two solutions.  This included a brief walkthrough of the configuration and an explanation of how the SAML protocol is used by both solutions to accomplish the SSO user experience.  I encourage you to read through that post before you jump into this one.

So we have single sign-on between Azure AD and G-Suite, but do we still need to provision the users and groups into G-Suite?  Thanks to Google's Directory Application Programming Interface (API) and Azure AD's integration with it, we can get automatic provisioning into G-Suite.  Before I cover how that integration works, let's take a deeper look at Google's Cloud Platform (GCP) and its API.

Like many of the modern APIs out there today, Google's API is web-based and robust. It was built on Google's JavaScript Object Notation (JSON)-based API infrastructure and uses Open Authorization 2.0 (OAuth 2.0) to allow for delegated access to an entity's resources stored in Google. It's nice to see vendors like Microsoft and Google leveraging standard protocols for interaction with their APIs, unlike some vendors… *cough* Amazon *cough*. Google provides software development kits (SDKs) and shared libraries for a variety of languages.

Let’s take a look at the API Explorer.  The API Explorer is a great way to play around with the API without the need to write any code and to get an idea of the inputs and outputs of specific API calls.  I’m first going to do something very basic and retrieve a listing of users in my G-Suite directory.  Once I access the API explorer I hit the All Versions menu item and select the Admin Directory API.

google1int1

On the next screen I navigate down to the directory.users.list method and select it.  On the screen that follows I’m provided with a variety of input fields.  The data I input into these fields will affect what data is returned to me from the API. I put the domain name associated with my G-Suite subscription and hit the Authorize and Execute button.  A new window pops up which allows me to configure which scope of access I want to grant the API Explorer.  I’m going to give it just the scope of https://www.googleapis.com/auth/admin.directory.user.readonly.

google1int2

I then hit the Authorize and Execute button and I'm prompted to authenticate to Google and delegate API Explorer to access data I have permission to access in my G-Suite directory.  Here I plug in the username and password for a standard user who isn't assigned any G-Suite admin roles.

google1int3

After successfully authenticating, I’m then prompted for consent to delegate API Explorer to view the users configured in the user’s G-Suite directory.

google1int4

I hit the Allow button, the request for delegated access completes, and a listing of users within my G-Suite directory is returned in JSON format.

google1int5

Easy right?  How about we step it up a notch and create a new user.  For that operation I’ll be delegating access to API Explorer using an account which has been granted the G-Suite User Management Admin role.  I navigate back to the main list of methods and choose the directory.users.insert method.  I then plug in the required values and hit the Authorize and Execute button.  The scopes menu pops up and I choose the https://www.googleapis.com/auth/admin.directory.user scope to allow for provisioning of the user and then hit the Authorize and Execute button.  The request is made and a successful response is returned.

google1int6

Navigating back to G-Suite and looking at the listing of users shows the new user Marge Simpson as appearing as created.

google1int7

Now that we’ve seen some simple samples using API Explorer let’s talk a bit about how you go about registering an application to interact with Google’s API as well as covering some basic Google Cloud Platform (GCP) concepts.

First thing I'm going to do is navigate to Google's Getting Started page and create a new project.  So what is a project?  This took a bit of reading on my part because my prior experience with GCP was non-existent.  Think of a Google project like an Amazon Web Services (AWS) account or a Microsoft Azure subscription.  It acts as a container for billing, reporting, and organization of GCP resources.  Projects can be associated with a Google Cloud organization (similar to how multiple Azure subscriptions can be associated with a single Microsoft Azure Active Directory (Azure AD) tenant), which is a resource available to a G-Suite subscription or Google Cloud Identity resource.  The picture below shows the organization associated with my G-Suite subscription.

google1int8

Now that we have the concepts out of the way, let’s get back to the demo. Back at the Getting Started page, I click the Create a new project button and authenticate as the super admin for my G-Suite subscription. I’ll explain why I’m using a super admin later. On the next screen I name the project JOG-NET-CONSOLE and hit the next button.

google1int9

The next screen prompts me to provide a name which will be displayed to the user when the user is prompted for consent, in the event I decide to use an OAuth flow that requires user consent.

google1int10

Next up I'm prompted to specify what type of application I'm integrating with Google.  For this demonstration I'll be creating a simple console app, so I'm going to choose Web Browser simply to move forward.  I plug in a random unique value and click Create.

google1int11

After creation is successful, I'm prompted to download the client configuration and provided with my Client ID and Client Secret. The configuration file is in JSON format and provides information about the client's registration and the authorization server's (Google's) OAuth endpoints. This file can be consumed directly by the Google API libraries when obtaining credentials if you're going that route.

google1int12

For the demo application I'm building I'll be using the service account scenario often used for server-to-server interactions. This scenario leverages the OAuth 2.0 Client Credentials grant flow. No user consent is required for this scenario because the intention of the service account scenario is for the application to access its own data. Google also provides the capability for the service account to be delegated the right to impersonate users within a G-Suite subscription. I'll be using that capability for this demonstration.
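
As a preview of where the code is headed, here's a minimal Python sketch of the service account flow using Google's google-auth and google-api-python-client libraries. Two assumptions to flag: the Python libraries take the JSON key format rather than the PKCS12 file I download later for the .NET demo, and the subject below is a placeholder for whichever G-Suite user you're impersonating:

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user.readonly"]

# Service account key (JSON variant) downloaded from the GCP console.
creds = service_account.Credentials.from_service_account_file(
    "jog-directory-access.json", scopes=SCOPES
)

# Domain-wide delegation: impersonate a real G-Suite user for the call.
delegated = creds.with_subject("someadmin@geekintheweeds.com")  # placeholder

directory = build("admin", "directory_v1", credentials=delegated)
users = directory.users().list(domain="geekintheweeds.com").execute()
for user in users.get("users", []):
    print(user["primaryEmail"])
```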

Back to the demo…

Now that my application is registered, I now need to generate credentials I can use for the service account scenario. For that I navigate to the Google API Console. After successfully authenticating, I’ll be brought to the dashboard for the application project I created in the previous steps. On this page I’ll click the Credentials menu item.

google1int13

The credentials screen displays the client IDs associated with the JOG-NET-CONSOLE project.  Here we see the client ID I received in the JSON file as well as a default one Google generated when I created the project.

google1int14

Next up I click the Create Credentials button and select the Service Account key option.  On the Create service account key page I provide a unique name for the service account of Jog Directory Access.

The Role drop down box relates to the new roles that were introduced with Google’s Cloud IAM.

You can think of Google's Cloud IAM as Google's version of Amazon Web Services (AWS) IAM or Microsoft's Azure Active Directory in how the instance relates to the project and is used to manage GCP resources.  When a new service account is created, a new security principal representing the non-human identity is created in the Google Cloud IAM instance backing the project.

Since my application won't be interacting with GCP resources, I'll choose the random role of Logs Viewer.  When I filled in the service account name, the service account ID field was automatically populated for me with a value.  The service account ID is unique to the project and represents the security principal for the application.  I choose the option to download the private key as a PKCS12 file because I'll be using the System.Security.Cryptography.X509Certificates namespace within my application later on.  Finally I click the Create button and download the PKCS12 file.

google1int15

The new service account now shows in the credential page.

google1int16


Navigating to the IAM & Admin dashboard now shows the application as a security principal within the project.

google1int17

I now need to enable the APIs in my project that I want my applications to access.  For this I navigate to the APIs & Services dashboard and click the Enable APIs and Services link.

google1int23.png

On the next page I use “admin” as my search term, select the Admin SDK and click the Enable button.  The API is now enabled for applications within the project.

From here I navigate down to the Service accounts page and edit the newly created service account.

google1int19

At this point I've created a new project in GCP, created a service account that will represent the demo application, and have given that application the right to impersonate users in my G-Suite directory.  I now need to authorize the application to access G-Suite's data via Google's API.  For that I switch over to the G-Suite Admin Console, authenticate as a super admin, and access the Security dashboard.  From there I hit the Advanced Settings option and click the Manage API client access link.

google1int20

On the Manage API client access page I add a new entry using the client ID I pulled previously, granting the application access to the https://www.googleapis.com/auth/admin.directory.user.readonly scope.  This allows the application to impersonate a user to pull a listing of users from the G-Suite directory.

google1int21

Whew, a lot of new concepts to digest in this entry, so I'll save the review of the application for the next entry.  Here's a consumable diagram I put together showing the relationship between GCP projects, G-Suite, and a GCP organization.  The G-Suite domain acts as a link to the GCP projects.  The G-Suite users can set up GCP projects and have a stub identity (see my first entry LINK) provisioned in the project.  When a service account is created in a project and granted G-Suite domain-wide delegation, we use the client ID associated with the service account to establish an identity for the app in the G-Suite domain, which is associated with a scope of authorized access.

google1int22

In this post I covered some basic GCP concepts and saw that they are very similar to those of both Microsoft and AWS.  I also covered the process to create a service account in GCP and how all the pieces come together to provide programmatic access to G-Suite resources.  In my next entry I'll demo some simple .NET applications and walk through the code.

Have a great weekend and go Pats!