AWS Managed Microsoft AD Deep Dive Part 1 – Overview

Welcome back my fellow geeks!

Earlier this year I did a deep dive into Microsoft’s managed Active Directory service, Microsoft Azure Active Directory Domain Services (AAD DS).  What I found was a service in its infancy showing some promise, but very far from being enterprise-ready.  I thought it would be fun to look at Amazon’s take (I’ll refer to the company as Amazon Web Services (AWS) for the rest of this series) on a managed Microsoft Active Directory (or, as Microsoft refers to it these days, Windows Active Directory).

Unless your organization popped up in the last year or two and went the whole serverless route, you are still managing operating systems that require centralized authentication, authorization, and configuration management.  You also more than likely have a ton of legacy/classic on-premises applications that require legacy protocols such as Kerberos and LDAP.  Your organization is likely using Windows Active Directory (Windows AD) to provide these capabilities along with Windows AD’s basic domain name system (DNS) service and centralized identity data store.

It’s unrealistic to assume you’re going to shed all those legacy applications prior to beginning your journey into the public cloud.  I mean heck, shedding the ownership of data centers alone can drive huge cost savings.  Organizations are then faced with the challenge of how to do Windows AD in the public cloud.  Is it best to extend an existing on-premises forest into the public cloud?  What about creating a resource forest with a trust?  Or maybe even a completely new forest with no trust?  Each of these options has positives and negatives that need to be evaluated against organizational requirements across the business, technical, and legal arenas.

Whatever choice you make, it means additional infrastructure in the form of more domain controllers.  Anyone who has managed Windows AD in an enterprise knows how much overhead managing domain controllers can introduce.  Let me clarify: by managing Windows AD, I don’t mean opening Active Directory Users and Computers (ADUC) and creating user accounts and groups.  I’m talking about examining Performance Monitor AD counters and LDAP debug logs to properly size domain controllers, configuring security controls to comply with PCI and HIPAA requirements or aligning with DISA STIGs, managing updates and patches, and troubleshooting the challenges those bring, which requires extensive knowledge of how Active Directory works.  In this day and age IT staff need to be less focused on overhead such as this and more focused on working closely with business units to drive and execute upon business strategy.  That, folks, is where managed services shine.

AWS offers an extensive catalog of managed services and Windows AD is no exception.  Included within the AWS Directory Service offerings is a powerful one named AWS Directory Service for Microsoft Active Directory, or more succinctly, AWS Managed Microsoft AD.  It provides all the wonderful capabilities of Windows AD without all of the operational overhead.  An interesting fact is that the service has been around since December 2015, whereas Microsoft’s AAD DS only went into public preview in Q3 2017.  This head start has done AWS a lot of favors and, in this engineer’s opinion, has established AWS Managed Microsoft AD as the superior managed Windows AD service over Microsoft’s AAD DS.  We’ll see why as the series progresses.

Over the course of this series I’ll be performing an analysis similar to the one I did in my series on Microsoft AAD DS.  I’ll also be examining the many additional capabilities AWS Managed Microsoft AD provides and demoing some of them in action.  My goal is that by the end of this series you understand the technical limitations that come with the significant business benefits of leveraging a managed service.

See you next post!

The Evolution of AD RMS to Azure Information Protection – Part 7 – Deep Dive into cross Azure AD tenant consumption

Each time I think I’ve covered what I want to for Azure Information Protection (AIP), I think of another fun topic to explore.  In this post I’m going to look at how AIP can be used to share information with users that exist outside your tenant.  We’ll be looking at the scenario where an organization has a requirement to share protected content with another organization that has an Office 365 tenant.

Due to my requirements to test access from a second tenant, I’m going to supplement the lab I’ve been using.  I’m adding to the mix my second Azure AD tenant at journeyofthegeek.com.  Specific configuration items to note are as follows:

  • The tenant’s custom domain of journeyofthegeek.com is an Azure AD (AAD)-managed domain.
  • I’ve created two users for testing.  The first is named Homer Simpson (homer.simpson@journeyofthegeek.com) and the second is Bart Simpson (bart.simpson@journeyofthegeek.com).
  • Each user has been licensed with Office 365 E3 and Enterprise Mobility + Security E5 licenses.
  • Three mail-enabled security groups have been created.  The groups are named The Simpsons (thesimpsons@journeyofthegeek.com), JOG Accounting (jogaccounting@journeyofthegeek.com), and JOG IT (jogit@journeyofthegeek.com).
  • Homer Simpson is a member of The Simpsons and JOG Accounting while Bart Simpson is a member of The Simpsons and JOG IT.
  • Two additional AIP policies have been created in addition to the Global policy.  One policy is named JOG IT and one is named JOG Accounting.
  • The Global AIP policy has an additional label named PII that enforces protection.  The label is configured to detect at least one occurrence of a US social security number.  The label’s protection policy grants only members of The Simpsons group the role of Viewer.
  • The JOG Accounting and JOG IT AIP policies have both been configured with an additional label of either JOG Accounting or JOG IT.  A sublabel for each label has also been created which enforces protection and restricts members of the relevant departmental group to the role of viewer.
  • I’ve repurposed the GIWCLIENT2 machine and have created two local users named Bart Simpson and Homer Simpson.

Once I had my tenant configuration up and running, I initialized Homer Simpson on GIWCLIENT2.  I already had the AIP Client installed on the machine, so upon first opening Microsoft Word, the same bootstrapping process I described in my previous post occurred for the MSIPC client and the AIP client.  Notice that the document has had the Confidential \ All Employees label applied to the document automatically as was configured in the Global AIP policy.  Notice also the Custom Permissions option which is presented to the user because I’ve enabled the appropriate setting in the relevant AIP policies.

7aip1.png

I’ll be restricting access to the document by allowing users in the geekintheweeds.com organization to hold the Viewer role.  The geekintheweeds.com domain is associated with my other Azure AD tenant, the one I’ve been using for the lab in this series of posts.  The first thing I do is change the classification label from Confidential \ All Employees to General.  The Confidential \ All Employees label is a default label provided by Microsoft which has an RMS template applied that restricts viewing to users within the tenant.

One interesting finding from my testing is that the user can go through the process of protecting with custom permissions using a label that has a pre-configured template, and the AIP client won’t throw any errors, but the custom permissions won’t be applied.  This makes perfect sense from a security perspective, but it would be nice to inform the user with an error or warning.  I can see this creating unnecessary help desk calls as it’s configured now.

When I attempt to change my classification label to General, I receive a prompt requiring me to justify the drop in classification.  This is yet another setting I’ve configured in my Global AIP policy.  It seems to be a standard feature in data classification solutions, based on what I’ve observed of another major vendor’s product.

7aip2.png

After the document is successfully classified with the General label, protection is removed from it. At this point I can apply my custom permissions as seen below.

7aip3.png

I repeated the process for another protected doc named jog_protected_for_Ash_Williams.docx with permissions restricted to ash.williams@geekintheweeds.com.  I packaged both files into an email and sent them to Ash Williams who is a user in the Geek In The Weeds tenant.  Keep in mind the users in the Geek In The Weeds tenant are synchronized from a Windows Active Directory domain and use federated authentication.

After opening Outlook, the email from Homer Simpson arrives in Ash Williams’ inbox.  At this point I copied the files to my desktop, closed Outlook, opened Microsoft Word, used the “Reset Settings” option of the AIP client, and signed out of my Office profile.

7aip4

At this point I started Fiddler and opened one of the Microsoft Word documents. Microsoft Word pops up a login prompt where I type in my username of ash.williams@geekintheweeds.com and I’m authenticated to Office 365 through the standard federated authentication flow. The document then pops open.

7aip5.png

Examining the Fiddler capture we see a lot of chatter. Let’s take a look at this in chunks, first addressing the initial calls to the AIP endpoint.

7aip6

If you have previous experience with the MSIPC client in the AD RMS world you’ll recall that it makes its calls in the following order:

  1. Searches HKLM registry hive
  2. Searches HKCU registry hive
  3. Web request to the RMS licensing pipeline for the RMS endpoint listed in the metadata attached to the protected document

In my previous deep dives into AD RMS we observed this behavior in action.  In the AIP world, it looks like the MSIPC client performs similarly.  The endpoint we see it contacting first is the Journey of the Geek tenant’s endpoint, which starts with 196d8e.
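
To make that lookup order concrete, here’s a rough Python sketch of the sequence, assuming the registry layout we’ll see in the migration scripts later in this series; the key and value names are approximations for illustration, not the MSIPC client’s actual implementation:

    # Rough illustration of the MSIPC discovery order; not the client's real code.
    import winreg

    CERT_KEY = r"Software\Microsoft\MSIPC\ServiceLocation\EnterpriseCertification"

    def discover_endpoint(url_from_doc_metadata: str) -> str:
        # Steps 1 and 2: check the HKLM and then HKCU hives for a hardcoded
        # service location (e.g. one written by the migration scripts).
        for hive in (winreg.HKEY_LOCAL_MACHINE, winreg.HKEY_CURRENT_USER):
            try:
                with winreg.OpenKey(hive, CERT_KEY) as key:
                    value, _ = winreg.QueryValueEx(key, "")  # (Default) value
                    if value:
                        return value
            except OSError:
                continue
        # Step 3: fall back to the RMS endpoint embedded in the document metadata.
        return url_from_doc_metadata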

The client first sends an unauthenticated HTTP GET to the server endpoint in the licensing pipeline. The response the server gives is a list of available SOAP functions, which include GetLicensorCertificate and GetServerInfo, as seen below.

7aip8.png

The client follows up with the actions below:

  1. Now that the client knows the endpoint supports the GetServerInfo SOAP function, it sends an unauthenticated HTTP POST with the SOAP action of GetServerInfo.  The AIP endpoint returns a response which includes the capabilities of the AIP service and the relevant endpoints for certification and the like.
  2. The client uses the information received from the previous request to send an unauthenticated HTTP POST with the SOAP action of ServiceDiscoveryForUser.  The service returns a 401.
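
Here’s a minimal sketch of that unauthenticated GetServerInfo probe using Python and the requests library; the tenant endpoint is abbreviated and the SOAP namespace/action URIs are my guesses at the shape of the call, so treat this as illustrative rather than a working client:

    # Illustrative unauthenticated GetServerInfo probe; the namespace and
    # SOAPAction values are assumptions, not values confirmed from the capture.
    import requests

    endpoint = "https://196d8e.../_wmcs/licensing/server.asmx"  # tenant-specific
    envelope = """<?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body><GetServerInfo xmlns="http://microsoft.com/DRM" /></soap:Body>
    </soap:Envelope>"""

    resp = requests.post(
        endpoint,
        data=envelope,
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "GetServerInfo"},
    )
    print(resp.status_code)  # succeeds with no Authorization header
    print(resp.text)         # service capabilities and pipeline endpoints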

At this point the client needs to obtain a bearer access token to proceed.  This process is actually pretty interesting and warrants a closer look.

7aip9

Let’s step through the conversation:

  1. We first see a connection opened to odc.officeapps.live.com and an unauthenticated HTTP GET to the /odc/emailhrd/getfederationprovider URI with a query string of geekintheweeds.com.  This is a home realm discovery process trying to determine the identity provider for the user’s email domain.  (I sketch this discovery exchange in code after this list.)

    My guess is this is MSAL in action, allowing support for multiple IdPs like Azure AD, Microsoft Live, Google, and the like.  I’ll be testing this theory in a later post where I test consumption by a Google user.

    The server responds with a number of headers containing information about the token endpoints for Azure AD (since this is a domain associated with an Azure AD tenant).

    7aip10.png

  2. A connection is then opened to odc.officeapps.live.com and an unauthenticated HTTP GET is made to the /odc/emailhrd/getidp URI with the email address of my user, ash.williams@geekintheweeds.com. The response is interesting in that I would have thought it would return the user’s tenant ID. Instead it returns a JSON response of OrgId.

    7aip11.png

    Since I’m a nosey geek, I decided to unlock the session for editing.  First I put in an email address associated with a Microsoft Live account.  Instead of OrgId it returned MSA, which indicates it detected a Microsoft Live account.  I then plugged in a @gmail.com account to see if I would get back Google, but instead I received neither.  OrgId seems to indicate that the account is associated with an Azure AD tenant.  Maybe later steps differ depending on whether the account is MSA or Azure AD?  No clue.

  3. Next, a connection is made to the oauth2 endpoint for the journeyofthegeek.com tenant. The machine makes an unauthenticated request for an access token for https://api.aadrm.com/ in order to impersonate Ash Williams. Now if you know your OAuth, you know the user needs to authenticate and approve the access before the access token can be issued. The response from the oauth2 endpoint is a redirect over to the AD FS server so the user can authenticate.

    7aip12.png

  4. After the user successfully authenticates, he is returned a security token and redirected back to login.microsoftonline.com, where the assertion is posted; the user is successfully authenticated and returned an authorization code.

    7aip13.png

  5. The machine then takes that authorization code and posts it to the oauth2 endpoint for my journeyofthegeek.com tenant. It receives back an OpenID Connect id token for ash.williams, a bearer access token, and a refresh token for the Azure RMS API.

    7aip14.png

    Decoding the bearer access token, we come across some interesting information.  We can see the audience for the token is the Azure RMS API, the issuer of the token is the tenant ID associated with journeyofthegeek.com (interesting, right?), and the identity provider for the user is the tenant ID for geekintheweeds.com.

    7aip15.png

  6. After the access token is obtained the machine closes out the session with login.microsoftonline.com and of course dumps a bunch of telemetry (can you see the trend here?).

    7aip16.png

  7. A connection is again made to odc.officeapps.live.com and the /odc/emailhrd/getfederationprovider URI with an unauthenticated request which includes a query string of geekintheweeds.com. The same process as before takes place.
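
If you want to poke at this home realm discovery behavior yourself, the two unauthenticated lookups sketch out roughly as below; the query parameter names are my inference from the capture, so verify them against your own Fiddler trace:

    # Sketch of the two home realm discovery lookups; parameter names are
    # inferred from the Fiddler capture and may not match the service exactly.
    import requests

    HRD = "https://odc.officeapps.live.com/odc/emailhrd"

    # Lookup 1: federation provider details for the email domain.
    fed = requests.get(f"{HRD}/getfederationprovider",
                       params={"domain": "geekintheweeds.com"})
    print(fed.headers)  # Azure AD token endpoints come back in response headers

    # Lookup 2: account classification for a specific address.
    idp = requests.get(f"{HRD}/getidp",
                       params={"emailAddress": "ash.williams@geekintheweeds.com"})
    print(idp.text)  # OrgId for Azure AD accounts, MSA for Microsoft Live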

Exhausted yet?  Well it’s about to get even more interesting if you’re an RMS nerd like myself.

7aip17.png

Let’s talk through the sessions above.

  1. A connection is opened to the geekintheweeds.com /wmcs/certification/server.asmx AIP endpoint with an unauthenticated HTTP POST and a SOAP action of GetServerInfo.  The endpoint responds as we’ve observed previously with information about the AIP instance, including features and endpoints for the various pipelines.
  2. A connection is opened to the geekintheweeds.com /wmcs/oauth2/servicediscovery/servicediscovery.asmx AIP endpoint with an unauthenticated HTTP POST and a SOAP action of ServiceDiscoveryForUser.  We know from the bootstrapping process I covered in my last post that this action requires authentication, so we see the service return a 401.
  3. A connection is opened to the geekintheweeds.com /wmcs/oauth2/certification/server.asmx AIP endpoint with an unauthenticated HTTP POST and a SOAP action of GetLicensorCertificate.  The SLC and its chain are returned to the machine in the response.
  4. A connection is opened to the geekintheweeds.com /wmcs/oauth2/certification/certification.asmx AIP endpoint with an unauthenticated HTTP POST and a SOAP action of Certify.  Again, we remember from my last post that this requires authentication, so the service again responds with a 401.

What we learned from the above is that the bearer access token the client obtained earlier isn’t intended for the geekintheweeds.com AIP endpoint, because we never see it used.  So how will the machine complete its bootstrap process?  Well, let’s see.

  1. A connection is opened to the journeyofthegeek.com /wmcs/oauth2/servicediscovery/servicediscovery.asmx AIP endpoint with an unauthenticated HTTP POST and a SOAP action of ServiceDiscoveryForUser.  The service returns a 401, after which the client makes the same connection and HTTP POST again, but this time including the bearer access token it retrieved earlier (this 401-then-retry exchange is sketched after the list).  The service provides a response with the relevant pipelines for the journeyofthegeek.com AIP instance.
  2. A connection is opened to the journeyofthegeek.com /wmcs/oauth2/certification/server.asmx AIP endpoint with an authenticated (bearer access token) HTTP POST and a SOAP action of GetLicensorCertificate.  The service returns the SLC and its chain.
  3. A connection is opened to the journeyofthegeek.com /wmcs/oauth2/certification/certification.asmx AIP endpoint with an authenticated (bearer access token) HTTP POST and a SOAP action of Certify.  The service returns a RAC for ash.williams@geekintheweeds.com along with the relevant SLC and chain.  Wait, what?  A RAC from the journeyofthegeek.com AIP instance for a user in geekintheweeds.com?  Well folks, this is supported through RMS’s support for federation.  Since all Azure AD tenants in a given offering (commercial, gov, etc.) come pre-federated, this use case is supported.
  4. A connection is opened to the journeyofthegeek.com /wmcs/licensing/server.asmx AIP endpoint with an unauthenticated HTTP POST and a SOAP action of GetServerInfo.  We’ve covered this enough to know what’s returned.
  5. A connection is opened to the journeyofthegeek.com /wmcs/licensing/publish.asmx AIP endpoint with an authenticated (bearer access token) HTTP POST and a SOAP action of GetClientLicensorandUserCertificates.  The server returns the CLC and EUL to the user.
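
That 401-then-retry exchange in step 1 is the classic OAuth bearer challenge, and in sketch form it looks something like this (the SOAP body is elided and the SOAPAction value is approximate):

    # Sketch of the 401 challenge and bearer retry from step 1; the SOAP body
    # is elided and the SOAPAction value is approximate.
    import requests

    url = ("https://<tenant_specific>/_wmcs/oauth2/servicediscovery"
           "/servicediscovery.asmx")
    headers = {"Content-Type": "text/xml; charset=utf-8",
               "SOAPAction": "ServiceDiscoveryForUser"}
    soap_body = "<soap:Envelope>...</soap:Envelope>"  # elided
    access_token = "<bearer token from the earlier OAuth exchange>"

    resp = requests.post(url, data=soap_body, headers=headers)
    if resp.status_code == 401:  # service demands authentication
        headers["Authorization"] = f"Bearer {access_token}"
        resp = requests.post(url, data=soap_body, headers=headers)
    print(resp.text)  # the relevant pipelines for the AIP instance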

After this our protected document opens in Microsoft Word.

7aip18.png

Pretty neat, right? Smart move by Microsoft to take advantage of and build upon the federated capabilities built into AD RMS. This is another example showing just how far ahead of the game the AD RMS product team was. Heck, there are SaaS vendors that still don’t support SAML, let alone on-premises products from 10 years ago.

In the next few posts (can you tell I find RMS fascinating yet?) of this series I’ll explore how Microsoft has integrated AIP into OneDrive, SharePoint Online, and Exchange Online.

Have a great week!

The Evolution of AD RMS to Azure Information Protection – Part 6 – Deep Dive into Client Bootstrapping

Today I’m back with more Azure Information Protection (AIP) goodness.  Over the past five posts I’ve covered the use cases, concepts, and migration paths.  Today I’m going to get really nerdy and take a look behind the curtain at how the MSIPC client shipped with Office 2016 interacts with AIP.  I’ll be examining the MSIPC client log and reviewing procmon and Fiddler captures.  If the thought of examining log files and SOAP calls excites you, this is the post for you.  Make sure to read through my previous posts to ensure you understand my lab infrastructure and configuration as well as key AIP concepts.

Baselining the Client

Like any good engineer, I wanted to baseline my machine to ensure the MSIPC client was functioning correctly.  Recall that my clients are migrating from an on-premises AD RMS implementation to AIP.  I haven’t completed my removal of AD RMS, so the service connection point for on-premises AD RMS is still there and the migration scripts Microsoft provides are still in use.  Let’s take a look at the registry entries that are set via the Migrate-Client and Migrate-User scripts.  In my last post I covered the purpose of the two scripts.  For the purposes of this post, I’m going to keep it brief and only cover registry entries applicable to the MSIPC client shipped with Office 2016.

  1. Migrate-Client
    • Condition: Runs at each computer startup, only if it detects it has not run before or the version variable in the script has been changed.
    • Registry Entries Modified:
      • Deletes HKLM\Software\Microsoft\MSIPC\ServiceLocation keys
      • Deletes HKLM\Software\Wow6432Node\Microsoft\MSIPC\ServiceLocation key
      • Deletes HKLM\Software\Microsoft\MSIPC\ServiceLocation\LicensingRedirection key
      • Deletes HKLM\Software\Wow6432Node\Microsoft\MSIPC\ServiceLocation\LicensingRedirection key
      • Add Default value to HKLM\Software\Microsoft\MSIPC\ServiceLocation\EnterpriseCertification key with data value pointing to AIP endpoint for tenant
      • Add Default value to HKLM\Software\Wow6432Node\Microsoft\MSIPC\ServiceLocation\EnterpriseCertification key with data value pointing to AIP endpoint for tenant
      • Add a value for the FQDN and single label URLs to on-premises AD RMS licensing pipeline to HKLM\Software\Microsoft\MSIPC\ServiceLocation\LicensingRedirection key with data values pointing to AIP endpoints for tenant
      • Add a value for the FQDN and single label URLs to the on-premises AD RMS licensing pipeline to the HKLM\Software\Wow6432Node\Microsoft\MSIPC\ServiceLocation\LicensingRedirection key with data values pointing to AIP endpoints for tenant
  2. Migrate-User
    • Condition: Runs at each user logon, only if it detects it has not run before or the version variable in the script has been changed.
    • Registry Entries Modified:
      • Deletes HKCU\Software\Microsoft\Office\16.0\Common\DRM key
      • Deletes HKCU\Software\Classes\Local Settings\Software\Microsoft\MSIPC key
      • Deletes HKCU\Software\Classes\Microsoft.IPViewerChildMenu\shell key
      • Add DefaultServerUrl value to HKCU\Software\Microsoft\Office\16.0\Common\DRM key and set its data value to the AIP endpoint for the tenant
    • Files Modified:
      • Deletes the contents of the %localappdata%\Microsoft\MSIPC folder

A quick review of my client settings validates that all the necessary registry entries are in place and I have no issues consuming and creating protected content.

Resetting the Client

If you have administered AD RMS in the past, you will be very familiar with how to re-bootstrap an RMS client.  Microsoft has made that entire process easier by incorporating a “reset” function into the AIP client.  The function can be accessed in Microsoft Office by hitting the drop-down arrow for the AIP icon on the toolbar and selecting the Help and Feedback option.

6AIP1.png

After clicking the Help and Feedback option, a new window pops up where you can select the Reset Settings option, which performs a series of changes to the registry and deletions of RMS licenses and AIP metadata.  Lastly, I log out of the machine.

6AIP2.png

Bootstrapping the Client with Azure Information Protection

After logging back in I start up Fiddler, open Microsoft Word, and attempt to open a file that was protected with my AD RMS cluster. The file opens successfully.

One thing to note: if you’re using Windows 10 and Microsoft Edge like I was, you’ll need to take the extra steps outlined here to successfully capture, due to the AppContainer Isolation feature added back in Windows 8. If you do not take those extra steps, you’ll get really odd behavior. Microsoft Edge will fail any calls to intranet endpoints (such as AD FS in my case) by saying it can’t contact the proxy. Trying with Internet Explorer will simply cause Fiddler to fail to make the calls and throw a DNS error. Suffice it to say, I spent about 20 minutes troubleshooting the issue before I remembered Fiddler’s dialog box that pops up on every new install about AppContainer and Microsoft Edge.

The first thing we’re going to look at is the MSIPC log files, which keep track of client activity. I have to applaud whichever engineer over at Microsoft thought it would be helpful to include such a detailed log. If you’ve administered on-premises AD RMS on previous versions of Microsoft Office, you’ll know the joys (pain?) of client-side tracing with DebugView.

When we pop open the log we get some great detail as to the client’s behavior. We can see the client read a number of registry entries. The first thing we see is the client discover that it is not initialized, so it calls an API to bootstrap the user. Notice below that it has identified my user and mentions OAuth as a method for authentication to the endpoint.

6AIP3.png

Following this we have a few more registry queries to discover the version of the operating system. We then have our first HTTP session opened by the client. I’m pretty sure this session is the initial user authentication to Azure AD in order to obtain a bearer access token for the user to call further APIs.

6AIP4.png

Bouncing over to Fiddler, we can check out the authentication process. We can see the machine reach out to Azure AD (login.windows.net) and perform home realm discovery, where Azure AD determines that geekintheweeds.com is configured for federated authentication. The client makes the connection to the AD FS server where the user is seamlessly authenticated via Kerberos. The windowstransport endpoint is called, which supports the WS-Trust 1.3 active profile.  In a WS-Trust active flow, the client initiates the request (hence “active”) vs the passive flow where the service provider initiates the flow.  This is how Office applications support modern (aka federated) authentication.

6AIP5

After the assertion is obtained, it’s posted to the /common/oauth2/token endpoint at login.windows.net.  The assertion is posted within a request for an access token, refresh token, and id token using the saml1_1-bearer token grant type for the Azure RMS endpoint.

6AIP6.png
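
In rough Python terms, that token request has the shape below; the grant type and resource come straight from the capture, while the client_id and the assertion file name are placeholders for whatever Office presents:

    # Approximate shape of the saml1_1-bearer token request; client_id and the
    # assertion file name are placeholders, not values from the capture.
    import base64
    import requests

    assertion = open("adfs_saml_assertion.xml", "rb").read()

    resp = requests.post(
        "https://login.windows.net/common/oauth2/token",
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:saml1_1-bearer",
            "assertion": base64.urlsafe_b64encode(assertion).decode(),
            "resource": "https://api.aadrm.com/",  # the Azure RMS endpoint
            "client_id": "<office-client-id>",     # placeholder
        },
    )
    tokens = resp.json()  # access_token, refresh_token, and id_token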

The machine is returned an access token, refresh token, and id token.  We can see the token returned is a bearer token, allowing the client to impersonate my user moving forward.

6AIP7.png

Dumping the access token into the Fiddler TextWizard and decoding the Base64 gives us the details of the token.  Within the token we can see an amr (authentication method reference) claim of wia, indicating the user authenticated using Windows authentication.  A variety of information about the user is included in the token, including UPN, first name, and last name.

6AIP8.png
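
You don’t need the TextWizard for this, by the way; a JWT is just three Base64URL segments, so a few lines of Python will dump the same claims (no signature verification here, so use it for inspection only):

    # Decode the claims segment of a JWT for inspection; this skips signature
    # verification, so never use it to make trust decisions.
    import base64
    import json

    def jwt_claims(token: str) -> dict:
        payload = token.split(".")[1]
        payload += "=" * (-len(payload) % 4)  # restore Base64 padding
        return json.loads(base64.urlsafe_b64decode(payload))

    claims = jwt_claims("<paste the bearer access token here>")
    print(claims.get("amr"))  # e.g. ["wia"] for Windows integrated auth
    print(claims.get("upn"), claims.get("given_name"), claims.get("family_name"))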

I’m fairly certain the tokens are cached to a flat file, based upon some of the digging I did via procmon while the bootstrap process initiated.  You can see the calls to create the file and write to it below.

6AIP9

After the tokens are obtained and cached we see from the log file that the MSIPC client then discovers it doesn’t have machine certificates.  It goes through the process of creating the machine certificates.

6AIP10.png

We now see the MSIPC client attempt to query for the SRV record Microsoft introduced with Office 2016 to help with migrations from AD RMS.  The client then attempts service discovery by querying the RMS-specific registry keys in the HKLM hive and comes across the information we hardcoded into the machine via the migration scripts.  It uses this information to make a request to the non-authenticated endpoint of https://<tenant_specific>/_wmcs/certification/server.asmx.

6AIP11
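
You can check for that migration SRV record yourself with a quick dnspython query; the _rmsdisco._http._tcp record name below is taken from Microsoft’s migration documentation (verify it against the current guidance), and the cluster FQDN is hypothetical:

    # Look up the Office 2016 AD RMS-to-AIP migration SRV record with dnspython;
    # the record name comes from Microsoft's migration docs.
    import dns.resolver

    def migration_targets(adrms_cluster_fqdn: str) -> list:
        name = f"_rmsdisco._http._tcp.{adrms_cluster_fqdn}"
        try:
            answers = dns.resolver.resolve(name, "SRV")
            return [str(rr.target).rstrip(".") for rr in answers]
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []  # no record; the client falls back to registry/SCP discovery

    print(migration_targets("rms.geekintheweeds.com"))  # hypothetical cluster FQDN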

Bouncing back to Fiddler and continuing the conversation we can see a few different connections are created.  We see one to api.informationprotection.azure.com, another to mobile.pipe.aria.microsoft.com, and yet another to the AIP endpoint for my tenant.

6AIP12.png

I expected the conversation between api.informationprotection.azure.com and the AIP endpoint for my tenant.  The connection to mobile.pipe.aria.microsoft.com interested me.  I’m not sure if it was randomly captured or if it was part of the consumption of protected content.  I found a few Reddit posts where people were theorizing it has something to do with how Microsoft consumes telemetry from Microsoft Office.  As you could probably guess, this piqued my interest to know what exactly Microsoft was collecting.

We can see from the Fiddler captures that an application on the client machine is posting data to https://mobile.pipe.aria.microsoft.com/Collector/3.0/.  Examination of the request header shows the user agent as AriaSDK Client and the sdk-version of ACT-Windows Desktop.  This looks to be the method in which the telemetry agent for Office collects its information.

6AIP13.png

If we decode the data within Fiddler and dump both sets of data to Notepad, we get some insight into what’s being pulled. Most of the data is pretty generic: information about the version of Word I’m using, the operating system version, information that my machine is a virtual machine, and some activity IDs which must relate to something Microsoft holds on their end. The only data point I found interesting was that my tenant ID is included. While the tenant ID isn’t exactly a secret, it’s still interesting that it’s being collected. It must be fascinating to see this telemetry at scale. Interesting stuff either way.

6AIP14.png

Continuing the conversation, let’s examine the chatter with my tenant’s AIP endpoint since discovery was requested by the MSIPC client.  We see a SOAP request of GetServerInfo posted to https://<tenant_specific>/_wmcs/certification/server.asmx.  The response we receive from the endpoint has all the information our RMS client will need to process the request.  My deep dive into AD RMS was before I got my feet wet with Fiddler, so I’ve never examined the conversations with the SOAP endpoints within AD RMS.  Future blog post maybe?  Either way, I’ve highlighted the interesting informational points below.  We can see that the service identifies itself as RMS Online, has a set of features that cater to modern authentication, runs in Cryptographic Mode 2, and supports a variety of authentication methods.  I’m unfamiliar with the authentication types beyond X509 and OAuth 2.  Maybe carryovers from on-prem?  Something to explore in the future.  The data boxed in red are all the key endpoints the RMS client needs to know to interact with the service moving forward.  Take note that the request at this endpoint doesn’t require any authentication.  That comes in later requests.

6AIP15.png

After the response is received, the MSIPC client writes a whole bunch of registry entries to the HKCU hive for the user to cache all the AIP endpoint information it discovered.  It then performs a service discovery against the authenticated endpoint using the bearer token it cached to the tokencache file.

6AIP16.png

Once the information is written to the registry, the client initiates a method called GetCertAndLicURLsWithNewSD.  It uses the information it discovered previously to query the protected endpoint https://<tenant_specific>/_wmcs/oauth2/servicediscovery/servicediscovery.asmx.  Initially it receives a 401 unauthorized back with instructions to authenticate using a bearer token.

6AIP17.png

The client tries again, this time providing the bearer token it obtained earlier and placed in the tokencache file.  The SOAP action of ServiceDiscoveryForUser is performed and the client requests the specific endpoints for certification, licensing, and the new tracking portal feature of AIP.

6AIP18.png

The SOAP response contains the relevant service endpoints and the user to which the query applied.

6AIP19.png

The MSIPC client then makes a call to /_wmcs/oauth2/certification/server.asmx with a SOAP request of GetLicensorCertificate.  I won’t break that response down, but it returns the SLC certificate chain in XrML format.  For my tenant this included both the new SLC I generated when I migrated to AIP as well as the SLC from my on-premises AD RMS cluster that I uploaded.

6AIP20.png

The MSIPC log now shows a method called GetNewRACandCLC being called, which is used to obtain a RAC and CLC. This is done by making a call to the certification pipeline.

6AIP21.png

The call to /_wmcs/oauth2/certification/certification.asmx does exactly as you would expect and makes the SOAP request of Certify. The response included my user’s RAC and both SLCs with the certificates in their chains. The one interesting piece in the response was a Quota tag, as seen below. I received back five certificates, so maybe there is a maximum that can be returned? If you have more than four on-premises AD RMS clusters you’re consolidating to AIP, you might be in trouble. 🙂

6AIP22.png

The MSIPC log captures the successful certification and logs information about the RAC.

6AIP23.png

Next up, the client attempts to obtain a CLC by continuing with the GetNewRACandCLC method. It first calls the /_wmcs/licensing/server.asmx pipeline and makes a GetServerInfo SOAP request, which returns the same information we saw in the last request to server.asmx. This request isn’t authenticated and the information returned is written to the HKCU hive for the user.

6AIP24.png

The service successfully returns the user’s CLC.  The last step in the process is the MSIPC client requesting the RMS templates associated with the user.  You can see the template that is associated with the custom AIP classification label I created.

6AIP25.png

Last but not least, the certificates are written to the %LOCALAPPDATA%\Microsoft\MSIPC directory.

6AIP26.png

Conclusion

Very cool stuff, right? I find it interesting that the MSIPC client performs pretty much the same way it does with on-premises AD RMS, excepting some of the additional capabilities introduced, such as the search for the SRV DNS records and the ability to leverage modern authentication via the bearer token. The improved log is a welcome addition and again, stellar job to whatever engineer at Microsoft thought it would be helpful to include all the detail that is in that log.

If you’ve used AD RMS or plan to use AIP and haven’t peeked behind the curtain, I highly recommend it. Seeing how all the pieces fit together and how a relatively simple web service and a creative use of certificates can provide such a robust and powerful security capability will make you appreciate the service AD RMS tried to be and how far ahead of its time it was.

I know I didn’t cover the AIP classification-specific web calls, but I’ll explore those in my next entry.  Hopefully you enjoyed nerding out on this post as much as I did. Have a great week and see you next post!

The Evolution of AD RMS to Azure Information Protection – Part 5 – Client-Side Migration and Testing

Welcome to the fifth entry in my series on the evolution of Microsoft’s Active Directory Rights Management Service (AD RMS) to Azure Information Protection (AIP).  We’ve covered a lot of material over this series.  It started with an overview of the service, examined the different architectures, went over key planning decisions for the migration from AD RMS to AIP, and left off with performing the server-side migration steps.  In this post we’re going to round out the migration process by performing a staged migration of our client machines.

Before we jump into this post, I’d encourage you to refresh yourself on my lab setup, the users and groups I’ve created, and the choices I made in the server-side migration steps.  For a quick reference, here is the down and dirty:

  • Windows Server 2016 Active Directory forest named GEEKINTHEWEEDS.COM with servers running Active Directory Domain Services (AD DS), Active Directory Domain Name System (AD DNS), Active Directory Certificate Services (AD CS), Active Directory Federation Services (AD FS), Active Directory Rights Management Services (AD RMS), Azure Active Directory Connect, and Microsoft SQL Server Express.
  • Forest is configured to synchronize to Azure AD using Azure AD Connect and uses federated authentication via AD FS
  • Users Jason Voorhies and Ash Williams will be using a Windows 10 client machine with Microsoft Office 2016 named GIWCLIENT1
  • Users Theodore Logan and Michael Myers will be using a Windows 10 client machine with Microsoft Office 2016 named GIWCLIENT2
  • Users Jason Voorhies and Theodore Logan are in the Information Technology Windows Active Directory (AD) group
  • Users Ash Williams and Michael Myers are in the Accounting Windows AD group
  • Onboarding controls have been configured for a Windows AD group named GIW AIP Users of which Jason Voorhies and Ash Williams are members

Prepare The Client Machine

To take advantage of the new features AIP brings to the table, we’ll need to install the AIP client. I’ll be installing the AIP client on GIWCLIENT1 and leaving the RMS client installed by Office 2016 on GIWCLIENT2. Keep in mind the AIP client includes the RMS client (sometimes referred to as MSIPC) as well.

If you recall from my last post, I skipped a preparation step Microsoft recommends for client machines. The step has you download a ZIP containing some batch scripts that are used for performing a staged migration of client machines and users. The preparation script Microsoft recommends running prior to any server-side configuration is Prepare-Client.cmd.  In an enterprise environment it makes sense, but for this very controlled lab environment it wasn’t needed prior to server-side configuration. It’s a simple script that modifies the client registry to force the RMS client on the machines to go to the on-premises AD RMS cluster even if they receive content that’s been protected using an AIP subscription. If you’re unfamiliar with the order in which the MSIPC client discovers an AD RMS cluster, I did an exhaustive series a few years back.  In short, hardcoding the information into the registry will prevent the client from reaching out to AIP and potentially causing issues.

As a reminder I’ll be running the script on GIWCLIENT1 and not on GIWCLIENT2.  After the ZIP file is downloaded and the script is unpackaged, it needs to be opened with a text editor and the OnPremRMSFQDN and CloudRMS variables need to be set to your on-premises AD RMS cluster and AIP tenant endpoint. Once the values are set, run the script.

5AIP1.png

Install the Azure Information Protection Client

Now that the preparation step is out of the way, let’s get the AIP client installed. The AIP client can be downloaded directly from Microsoft. After starting the installation you’ll first be prompted as to whether you want to send telemetry to Microsoft and use a demo policy.  I’ll be opting out of both (sorry Microsoft).

5AIP2.png

After a minute or two the installation will complete successfully.

5AIP3.png

At this point I log out of the administrator account and log in as Jason Voorhies. Opening Windows Explorer and right-clicking a text file shows we now have the Classify and protect option to classify and protect files outside of Microsoft Office.

5AIP4.png

Testing the Client Machine Behavior Prior to Client-Side Configuration

I thought it would be fun to see what the client machine’s behavior would be after the AIP client was installed but before I finished Microsoft’s recommended client-side configuration steps. Recall that GIWCLIENT1 has previously been bootstrapped for the on-premises AD RMS cluster, so let’s reset the client after observing the current state of both machines.

Notice that on GIWCLIENT1 the DefaultServer and DefaultServerUrl values in HKCU\Software\Microsoft\Office\16.0\Common\DRM do not exist even though the client was previously bootstrapped for the on-premises AD RMS instance. GIWCLIENT2, which has also been bootstrapped, has the entries defined.

5AIP5.png

I’m fairly certain AIP cleared these out when it tried to activate when I started up Microsoft Word prior to performing these steps.

Navigating to HKCU\Software\Classes\Local Settings\Software\Microsoft\MSIPC shows a few slight differences as well. On GIWCLIENT1 there are two additional entries, one for the discovery point for Azure RMS and one for JOG.LOCAL’s AD RMS cluster. The JOG.LOCAL entry exists on GIWCLIENT1 and not on GIWCLIENT2 because of the baseline testing I did previously.

5AIP6.png

5AIP7.png

Let’s take a look at the location where the RMS client stores its certificates, which is %LOCALAPPDATA%\Microsoft\MSIPC.  On both machines we see the expected copy of the public-key CLC certificate, the machine certificate, RAC, and use licenses for documents that have been opened.  Notice that even though the AD RMS cluster is running in Cryptographic Mode 1, the machine still generates a 2048-bit key as well.

5AIP8.png

5AIP9

Now that the RMS client is reset on GIWCLIENT1, let’s go ahead and see what happens when the RMS client tries to do a fresh activation after having AIP installed but the client-side configuration not yet completed.

After opening Microsoft Word I select to create a new document. Notice that the labels displayed in the AIP bar include a custom label I had previously defined in the AIP blade.

5AIP10.png

I then go back to the File tab on the ribbon and attempt to use the classic way of protecting a document via the Restrict Access option.

5AIP11.png

After selecting the Connect to Rights Management Servers and get templates option, the client successfully bootstraps back to the on-premises AD RMS cluster, as can be seen from the certificates available to the client and the fact that all necessary certificates were re-created in the MSIPC directory.

5AIP12

5AIP13.png

That’s Microsoft Office, but what about the scenario where I attempt to use the AIP client add in for Windows Explorer?

To test this behavior I created a PDF file named testfile.pdf.  Right-clicking and selecting the Classify and protect option opens the AIP client, which displays the default set of labels as well as a new GIW Accounting Confidential label.

5AIP14.png

If I select that label and hit Apply I receive the error below.

5AIP15.png

The template can’t be found because the client is trying to pull it from my on-premises AD RMS cluster.  Since I haven’t run the scripts to prepare the client for AIP, the client can’t reach the AIP endpoints to find the template associated with the label.

The results of these tests tell us two things:

  1. Installing the AIP Client on a client machine that already has Microsoft Office installed and configured for an on-premises AD RMS cluster won’t break the client’s integration with that on-premises cluster.
  2. The AIP client at some point authenticated to the Geek In The Weeds Azure AD tenant and pulled down the classification labels configured for my tenant.

In my next post I’ll be examining these findings more deeply by doing a deep dive of the client behavior using a combination of procmon, Fiddler, and WireShark to analyze the AIP Client behavior.

Performing Client-Side Configuration

Now that the client has been successfully installed, we need to override the behavior that was put in place with the Prepare-Client batch file earlier.  If we wanted to redirect all clients across the organization that were using Office 2016, we could use the DNS SRV record option listed in the migration article.  This option indicates Microsoft has added some new behavior to the RMS client installed with Office 2016 such that it will perform a DNS lookup of the SRV record to see if a migration has occurred.

For the purposes of this lab I’ll be using the Microsoft batch scripts I referenced earlier.  To override the behavior we put in place earlier with the Prepare-Client.cmd batch script, we’ll need to run both the Migrate-Client and Migrate-User scripts.  I created a group policy object (GPO) that uses security filtering to apply only to GIWCLIENT1, which runs the Migrate-Client script as a startup script, and a GPO that uses security filtering to apply only to the GIW AIP Users group, which runs the Migrate-User script as a logon script.  This ensures only GIWCLIENT1 and the users Jason Voorhies and Ash Williams are affected by the changes.

You may be asking what the scripts do.  The goal of the two scripts is to ensure the client machines the users log into point them to Azure RMS versus an on-premises AD RMS cluster.  The scripts do this by adding and modifying registry keys used by the RMS client prior to the client searching for a service connection point (SCP).  The users will be redirected to Azure RMS when protecting new files as well as when consuming files that were previously protected by an on-premises AD RMS cluster.  This means you had better have performed the necessary server-side migration I went over previously, or else your users are going to be unable to consume previously protected content.

We’ll dig more into AIP/Office 2016 RMS Client discovery process in the next post.

Preparing Azure Information Protection Policies

Prior to testing the whole package, I thought it would be fun to create some AIP policies. By default, Microsoft provides you with a default AIP policy called the Global Policy. It comes complete with a reasonably standard set of labels, and a few of the labels have sublabels that apply protection in some circumstances. Due to the migration path I undertook as part of the demo, I had to enable protection for the All Employees sublabels of both the Confidential and Highly Confidential labels.

5AIP16.png

In addition to the global policy, I also created two scoped policies. One scoped policy applies to users within the GIW Accounting group and the other applies to users within the GIW Information Technology group. Each policy introduces another label and sublabel as seen in the screenshots below.

5AIP17.png

5AIP18.png

Both of the sublabels include protection restricting members of the relevant groups to the Viewer role only. We’ll see these policies in action in the next section.

Testing the Client

Preparation is done, the server-side migration is complete, and our test clients and users have now completed the documented migration process. The migration scripts performed the RMS client reset, so no need to repeat that process.

For the first test, let’s try applying protection to the testfile.txt file I created earlier. Selecting the Classify and protect option opens the AIP client and shows me the labels configured in my tenant that support classification and protection. Recall from the AIP client limitations that different file types have different limitations. You can’t exactly append metadata to the content of a text file, now can you?

5AIP19.png

Selecting the IT Staff Only sublabel of the GIW IT Staff label and hitting the Apply button successfully protects the text file, and we see the icon and file type for the file change.  Opening the file in Notepad now displays a notice that the file is protected, and the data contained in the original file has been encrypted.

5AIP20.png

We can also open the file with the AIP Viewer, which will decrypt the document and display the content of the text file.

5AIP24

Next we test in Microsoft Word 2016 by creating a new document named AIP_GIW_ALLEMP and classifying it with the Highly Confidential All Employees sublabel.  The sublabel adds protection such that all users in the GIW Employees group have Viewer rights.

5AIP21

5AIP22

Opening the AIP_GIW_ALLEMP Word document that was protected by Jason Voorhies is successful and it shows Ash Williams has viewer rights for the file.

5AIP23.png

Last but not least, let’s open a document we previously protected with AD RMS named GIW_GIWALL_ADRMS.DOCX.  We’re able to successfully open this file because we migrated the TPD used for AD RMS up to AIP.

5AIP25.png

At this point we’ve performed all the necessary steps of the migration.  What’s left now are cleanup steps and planning for how you’ll complete the rollout to the rest of your user base.  Not bad, right?

Over the next few posts I’ll be doing a deep dive of the RMS client behavior when interacting with Azure Information Protection.  We’ll do some procmon captures of the client’s behavior as it performs its discovery process as well as examine the web calls it makes with Fiddler.  I’ll also spend some time examining the AIP blade and my favorite feature of AIP, tracking and revocation.

See you next time!

Deep Dive into Azure AD Domain Services – Part 3

Well folks, it’s time to wrap up this series on Azure Active Directory Domain Services (AAD DS).  In my first post I covered the basic configuration of the managed domain and in my second post I took a look at how well Microsoft did in applying security best practices and complying with NIST standards.  In this post I’m going to briefly cover the LDAPS over the Internet capability, summarize some key findings, and list out some improvements I’d like to see made to the service.

One of the odd features Microsoft provides with the AAD DS service is the ability to expose the managed domain over LDAPS to the Internet.  I really am lost as to the use case that drove the feature.  LDAP is very much a legacy on-premises protocol that has no place being exposed to the risks of the public Internet.  It’s the last thing the industry should be encouraging.  Just because you can, doesn’t mean you should.  Now let me step off the soapbox and let’s take a look at the feature.

As I covered in my last post, LDAPS is not natively enabled in the managed domain.  The feature must be configured and enabled through the Azure Portal.  The configuration consists of uploading the private key and certificate the service will use in the form of a PKCS12 file (*.PFX).  The certificate has a few requirements that are outlined in the instructions above.  After the certificate is validated, it takes about 10-15 minutes for the service to become available.  Beyond enabling the service within the VNet, you additionally have the option to expose the LDAPS endpoint to the Internet.

3aads1.png

Microsoft provides instructions on how to restrict access to the endpoint to trusted IPs via a network security group (NSG) because, yeah, exposing an LDAP endpoint to the Internet is just a tad risky.  To lock it down you simply associate an NSG with the subnet AAD DS is serving.  Once that is done, enable the service via the option in the image above and wait about 10 minutes.  After the service is up, register an external DNS record for the service that points to the IP address noted under the properties section of the AAD DS blade and you’re good to go.

For my testing purposes, I locked the external LDAPS endpoint down to the public IP address my Azure VM was SNATed to.  After that I created an entry in the hosts file of the VM mapping the external DNS name I gave the service (whyldap.geekintheweeds.com) to the public IP address of the LDAPS endpoint in order to bypass the split-brain DNS challenge.  Initiating a connection from LDP.EXE was a success.

3aads2.png
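
LDP.EXE does the job, but if you’d rather script the test, a few lines with the Python ldap3 library perform the same simple bind; the hostname matches my lab while the account, password, and base DN are illustrative:

    # Simple LDAPS bind test using the ldap3 library; the account, password,
    # and base DN are illustrative values.
    import ssl
    from ldap3 import Connection, Server, Tls

    tls = Tls(validate=ssl.CERT_REQUIRED)
    server = Server("whyldap.geekintheweeds.com", port=636, use_ssl=True, tls=tls)
    conn = Connection(server, user="testuser@geekintheweeds.com",
                      password="<password>")

    if conn.bind():
        conn.search("dc=geekintheweeds,dc=com", "(objectClass=user)",
                    attributes=["sAMAccountName"])
        print(conn.entries[:5])
    conn.unbind()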

Now that we know the service is running, let’s check out what the protocol support and cipher suite looks like.

3aads3.png

Again we see the use of deprecated cipher suites. Here the risk is that much greater since a small mistake with an NSG could expose this endpoint directly to the Internet.  If you’re going to use this feature, please just don’t.  If you’re really determined to, don’t screw up your NSGs.
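
If you want to reproduce this check without a dedicated scanner, Python’s ssl module can probe which protocol versions the endpoint will negotiate; note that a modern OpenSSL build may refuse the oldest versions client-side before the server ever sees them:

    # Probe which TLS versions the LDAPS endpoint negotiates; older versions
    # may be rejected locally by a modern OpenSSL build.
    import socket
    import ssl

    HOST, PORT = "whyldap.geekintheweeds.com", 636

    def probe(version):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE  # we only care about the handshake
        ctx.minimum_version = ctx.maximum_version = version
        try:
            with socket.create_connection((HOST, PORT), timeout=5) as sock:
                with ctx.wrap_socket(sock) as tls:
                    return tls.cipher()  # (cipher name, protocol, secret bits)
        except (ssl.SSLError, OSError):
            return None

    for version in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1,
                    ssl.TLSVersion.TLSv1_2):
        print(version.name, probe(version))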

This series was probably one of the more enjoyable ones I’ve done, since I knew very little about the AAD DS offering. There were a few key takeaways worth sharing:

  • The more objects in the directory, the more expensive the service.
  • Users and groups can be created directly in the managed domain after a new organizational unit is created.
  • The password and lockout policy is insanely loose, to the point where I can create an account with a three-character password (it just needs to meet complexity requirements) and accounts never lock out.  The policy cannot be changed.
  • RC4 encryption ciphers are enabled and cannot be disabled.
  • NTLMv1 is enabled and cannot be disabled.
  • The service does not support smart-card-enforced users.  Yes, that includes both the users synchronized from Azure AD as well as any users you create directly in the managed domain.  If I had to guess, it’s probably due to the fact that you’re not a Domain Admin, so you can’t add to the NTAuth certificate store.
  • LDAPS is not enabled by default.
  • Schema extensions are not supported.
  • Account-Based Kerberos Delegation is not supported.
  • If you are syncing identities to Azure AD, you’ll also need to synchronize your passwords.
  • The managed domain is very much “out of the box” defaults.
  • Microsoft creates a "god" account which is a permanent member of every privileged group in the forest.
  • Recovery of deleted objects created directly in the managed domain is not possible.  The rights have not been delegated to the AADC Administrator.
  • The service does not allow for Active Directory trusts.
  • The SIDHistory attribute of users and groups sourced from Azure AD is populated with the Primary Group from the on-premises domain.

My verdict on AAD DS is that it’s not a very useful service in its current state.  Beyond small organizations, organizations with little to no requirements on legacy infrastructure, organizations without strong security requirements, and dev/QA purposes, I don’t see much of a use for it right now.  It comes off as a service in its infancy that has a lot of room to grow and mature.  Microsoft has gone a bit too far in the standardization/simplicity direction and needs to shift a bit in the opposite direction by allowing for more customization, especially in regards to security.

I’d really like to see Microsoft introduce the capabilities below.  All of them should be exposed via the resource blade in the Azure Portal if at all possible.  It would provide a singular administration point (which seems to be the strategy given the move of Azure AD and Intune to the Azure Portal) and would allow Microsoft to control how the options are enabled in the managed domain.  This means no more administrators blowing up their Active Directory forest because they accidentally shut off all the supported cipher suites for Kerberos.

  • Expose Domain Controller Event Logs to Azure Portal/Graph API and add support for AAD DS Power BI Dashboards
  • Support for Active Directory trusts
  • Out of the box provide a Red Forest model (get rid of that “god” account)
  • Option to disable risky cipher suites for both Kerberos and LDAPS
  • Option to harden the password and lockout policy
  • Option to disable NTLMv1
  • Option to turn on LDAP Debug Logging
  • Option to direct Domain Controller event logs to a SIEM
  • Option to restore deleted users and groups that were created directly in the managed domain.  If you’re allowing creation, you need to allow for restoration.
  • Removal of the Internet-accessible LDAPS endpoint feature, or at least incorporation of the NSG lockdown feature directly into the AAD DS blade.

While the service has a lot of room for improvement, the direction of a managed Windows AD offering is spot on.  In the year 2018, there is no reason Windows AD shouldn’t be offered as a managed service.  The direction Microsoft has gone in by sourcing the identities and credentials from Azure AD is especially creative.  It’s a solid step in the direction of creating a singular centralized identity service that provides both legacy and modern protocols.  I’ll be watching this service closely as Microsoft builds upon it over the next few months.

Thanks and see you next post!

Integrating Azure AD and G-Suite – Google API Integration Part 1

Hi everyone,

Welcome to the second post in my series on the integration between Azure Active Directory (Azure AD) and Google’s G-Suite (formerly named Google Apps).  In my first entry I covered the single sign-on (SSO) integration between the two solutions.  This included a brief walkthrough of the configuration and an explanation of how the SAML protocol is used by both solutions to accomplish the SSO user experience.  I encourage you to read through that post before you jump into this one.

So we have single sign-on between Azure AD and G-Suite, but do we still need to provision the users and groups into G-Suite?  Thanks to Google’s Directory Application Programming Interface (API) and Azure Active Directory’s (Azure AD) integration with it, we can get automatic provisioning into G-Suite.  Before I cover how that integration works, let’s take a deeper look at Google’s Cloud Platform (GCP) and its API.

Like many of the modern APIs out there today, Google’s API is web-based and robust. It was built on Google’s JavaScript Object Notation (JSON)-based API infrastructure and uses Open Authorization 2.0 (OAuth 2.0) to allow for delegated access to an entity’s resources stored in Google. It’s nice to see vendors like Microsoft and Google leveraging standard protocols for interaction with their APIs, unlike some vendors… *cough* Amazon *cough*. Google provides software development kits (SDKs) and shared libraries for a variety of languages.

Let’s take a look at the API Explorer.  The API Explorer is a great way to play around with the API without the need to write any code and to get an idea of the inputs and outputs of specific API calls.  I’m first going to do something very basic and retrieve a listing of users in my G-Suite directory.  Once I access the API explorer I hit the All Versions menu item and select the Admin Directory API.


On the next screen I navigate down to the directory.users.list method and select it.  On the screen that follows I’m provided with a variety of input fields.  The data I input into these fields affects what data is returned to me from the API.  I put in the domain name associated with my G-Suite subscription and hit the Authorize and Execute button.  A new window pops up which allows me to configure the scope of access I want to grant the API Explorer.  I’m going to give it just the https://www.googleapis.com/auth/admin.directory.user.readonly scope.


I then hit the Authorize and Execute button and I’m prompted to authenticate to Google and delegate API Explorer to access data I have permission to access in my G-Suite directory.  Here I plug in the username and password for a standard user who isn’t assigned to any G-Suite admin roles.


After successfully authenticating, I’m then prompted for consent to delegate API Explorer to view the users configured in the user’s G-Suite directory.


I hit the Allow button, the request for delegated access is complete, and a listing of users within my G-Suite directory is returned in JSON format.


Easy, right?  How about we step it up a notch and create a new user?  For that operation I’ll be delegating access to API Explorer using an account which has been granted the G-Suite User Management admin role.  I navigate back to the main list of methods and choose the directory.users.insert method.  I then plug in the required values and hit the Authorize and Execute button.  The scopes menu pops up, I choose the https://www.googleapis.com/auth/admin.directory.user scope to allow for provisioning of the user, and hit the Authorize and Execute button.  The request is made and a successful response is returned.
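Outside of the Explorer, the same create operation boils down to a POST of a small JSON document.  Here’s a hedged sketch; the field names come from the Directory API users resource, and all of the values are placeholders:

    # Minimal sketch: create a G-Suite user via the Directory API.
    # Token and user attributes are placeholder values (assumptions).
    $accessToken = '<OAuth 2.0 access token>'
    $body = @{
        primaryEmail = 'marge.simpson@example.com'
        name         = @{ givenName = 'Marge'; familyName = 'Simpson' }
        password     = '<initial password>'
    } | ConvertTo-Json
    Invoke-RestMethod -Method Post -Uri 'https://www.googleapis.com/admin/directory/v1/users' `
        -Headers @{ Authorization = "Bearer $accessToken" } -ContentType 'application/json' -Body $body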


Navigating back to G-Suite and looking at the listing of users shows that the new user Marge Simpson has been created.


Now that we’ve seen some simple samples using API Explorer, let’s talk a bit about how you go about registering an application to interact with Google’s API as well as cover some basic GCP concepts.

The first thing I’m going to do is navigate to Google’s Getting Started page and create a new project.  So what is a project?  This took a bit of reading on my part because my prior experience with GCP was non-existent.  Think of a Google project like an Amazon Web Services (AWS) account or a Microsoft Azure subscription.  It acts as a container for billing, reporting, and organization of GCP resources.  Projects can be associated with a Google Cloud organization (similar to how multiple Azure subscriptions can be associated with a single Azure AD tenant), which is a resource available to a G-Suite subscription or Google Cloud Identity resource.  The picture below shows the organization associated with my G-Suite subscription.

[Screenshot: the organization associated with my G-Suite subscription]

Now that we have the concepts out of the way, let’s get back to the demo. Back at the Getting Started page, I click the Create a new project button and authenticate as the super admin for my G-Suite subscription. I’ll explain why I’m using a super admin later. On the next screen I name the project JOG-NET-CONSOLE and hit the next button.


The next screen prompts me to provide a name which will be displayed to the user when the user is prompted for consent in the instance I decide to use an OAuth flow which requires user consent.


Next up I’m prompted to specify what type of application I’m integrating with Google.  For this demonstration I’ll be creating a simple console app, so I’m going to choose Web Browser simply to move forward.  I plug in a random unique value and click Create.


After creation is successful, I’m prompted to download the client configuration and provided with my client ID and client secret. The configuration file is in JSON format and provides information about the client’s registration and the authorization server’s (Google’s) OAuth endpoints. This file can be consumed directly by the Google API libraries when obtaining credentials if you’re going that route.
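The file looks roughly like the sketch below.  The structure follows Google’s standard client configuration format; every value here is a placeholder:

    {
      "web": {
        "client_id": "1234567890-abc123.apps.googleusercontent.com",
        "project_id": "<project id>",
        "auth_uri": "https://accounts.google.com/o/oauth2/auth",
        "token_uri": "https://oauth2.googleapis.com/token",
        "client_secret": "<client secret>",
        "redirect_uris": ["https://example.com/oauth2/callback"]
      }
    }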


For the demo application I’m building, I’ll be using the service account scenario often used for server-to-server interactions. This scenario leverages Google’s server-to-server OAuth 2.0 flow, which follows the spirit of the OAuth 2.0 client credentials grant but uses a signed JSON Web Token (JWT) as the client credential. No user consent is required for this scenario because the intention of the service account scenario is for the application to access its own data. Google also provides the capability for the service account to be delegated the right to impersonate users within a G-Suite subscription. I’ll be using that capability for this demonstration.
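Under the hood the token exchange amounts to a single HTTPS POST.  Here’s a hedged sketch; the grant type and token endpoint come from Google’s OAuth 2.0 server-to-server documentation, and the construction and signing of the JWT assertion is glossed over with a placeholder:

    # Sketch of the service account token request. $signedJwt stands in for an
    # RS256-signed JWT whose claims include iss (the service account email),
    # sub (the impersonated G-Suite user), scope, and aud (the token endpoint).
    $signedJwt = '<RS256-signed JWT assertion>'
    $response = Invoke-RestMethod -Method Post -Uri 'https://oauth2.googleapis.com/token' -Body @{
        grant_type = 'urn:ietf:params:oauth:grant-type:jwt-bearer'
        assertion  = $signedJwt
    }
    $response.access_token   # bearer token used on subsequent Directory API calls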

Back to the demo…

Now that my application is registered, I need to generate credentials I can use for the service account scenario. For that I navigate to the Google API Console. After successfully authenticating, I’m brought to the dashboard for the application project I created in the previous steps. On this page I click the Credentials menu item.


The credentials screen displays the client IDs associated with the JOG-NET-CONSOLE project.  Here we see the client ID I received in the JSON file as well as a default one Google generated when I created the project.


Next up I click the Create Credentials button and select the Service Account key option.  On the Create service account key page I provide a unique name for the service account of Jog Directory Access.

The Role drop-down box relates to the new roles that were introduced with Google’s Cloud IAM.

You can think of Google’s Cloud IAM as Google’s version of Amazon Web Services (AWS) IAM or Microsoft’s Azure AD in the sense that an instance of it backs each project and is used to manage access to the project’s GCP resources.  When a new service account is created, a new security principal representing the non-human identity is created in the Google Cloud IAM instance backing the project.

Since my application won’t be interacting with GCP resources, I choose the arbitrary role of Logs Viewer.  When I filled in the service account name, the service account ID field was automatically populated for me with a value.  The service account ID is unique to the project and represents the security principal for the application.  I choose the option to download the private key as a PKCS12 file because I’ll be using the System.Security.Cryptography.X509Certificates namespace within my application later on.  Finally I click the Create button and download the PKCS12 file.
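As a preview of where that file gets used, loading the key from .NET land is a one-liner.  A small sketch follows; the file name is a placeholder, and ‘notasecret’ is the well-known default password Google assigns to these P12 files:

    # Load the service account's private key from the downloaded PKCS12 file.
    $certPath = 'jog-directory-access.p12'   # placeholder path (assumption)
    $cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 `
        -ArgumentList $certPath, 'notasecret', 'Exportable'   # 'notasecret' is Google's default P12 password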


The new service account now shows in the credential page.



Navigating to the IAM & Admin dashboard now shows the application as a security principal within the project.


I now need to enable the APIs in my project that I want my applications to access.  For this I navigate to the APIs & Services dashboard and click the Enable APIs and Services link.


On the next page I use “admin” as my search term, select the Admin SDK and click the Enable button.  The API is now enabled for applications within the project.

From here I navigate down to the Service accounts page and edit the newly created service account, enabling the G-Suite Domain-wide Delegation option.


At this point I’ve created a new project in GCP, created a service account that will represent the demo application, and have given that application the right to impersonate users in my G-Suite directory.  I now need to authorize the application to access G-Suite’s data via Google’s API.  For that I switch over to the G-Suite Admin Console, authenticate as a super admin, and access the Security dashboard.  From there I hit the Advanced Settings option and click the Manage API client access link.


On the Manage API client access page I add a new entry using the client ID I pulled previously, granting the application access to the https://www.googleapis.com/auth/admin.directory.user.readonly scope.  This allows the application to impersonate a user to pull a listing of users from the G-Suite directory.


Whew, a lot of new concepts to digest in this entry, so I’ll save the review of the application for the next entry.  Here’s a consumable diagram I put together showing the relationship between GCP projects, G-Suite, and a GCP organization.  The G-Suite domain acts as a link to the GCP projects.  G-Suite users can set up GCP projects and have a stub identity (see my first entry LINK) provisioned in the project.  When a service account is created in a project and granted G-Suite Domain-wide Delegation, we use the client ID associated with the service account to establish an identity for the app in the G-Suite domain, which is associated with a scope of authorized access.

[Diagram: the relationship between GCP projects, G-Suite, and a GCP organization]

In this post I covered some basic GCP concepts and saw that they are very similar to concepts found in both Microsoft’s and AWS’s platforms.  I also covered the process to create a service account in GCP and how all the pieces come together to enable programmatic access to G-Suite resources.  In my next entry I’ll demo some simple .NET applications and walk through the code.

Have a great weekend and go Pats!

Integrating Azure AD and AWS – Part 4


We’ve reached the end of the road for my series on integrating Azure Active Directory (Azure AD) and Amazon Web Services (AWS) for single sign-on and role management. In part 1 I walked through the many reasons the integration is worth looking at if your organization is consuming both clouds. In part 2 I described the lab I used for this series, described the different ways application identities (service accounts for those of you in the Microsoft space) are handled in Active Directory Domain Services versus Azure AD, and walked through what a typical application identity looks like in Azure AD. In part 3 I walked through a portion of the configuration steps, did a deep dive into the Azure AD and AWS federation metadata, examined a SAML assertion, and configured the AWS end of the federated trust through the AWS Management Console. This included creation of an identity provider representing the Azure AD tenant and creation of a new IAM role for users within the Azure AD tenant to assert.

In this final post I’ll cover the remainder of the configuration, describe the “provisioning” capabilities of Azure AD in this integration, and point out some of the issues with the recommended steps in the Microsoft tutorial.

Before I continue with the configuration, let me cover what I’ve done so far.

  • Part 2
    • Added the AWS application from the Azure AD Application Gallery through the Azure Portal.
  • Part 3
    • Assigned an Azure Active Directory user to the application through the Azure Portal.
    • Configured Azure AD to pass the Role and RoleSessionName claims through the Azure Portal.
    • Created the SAML identity provider representing Azure AD in the AWS Management Console.
    • Created an AWS IAM Role and associated it with the identity provider representing Azure AD in the AWS Management Console.

At this point JoG users can assert their identity to their heart’s content, but we don’t have a listing of the AWS IAM roles stored in Azure AD for our users to assert.  So how do we assert a role from Azure AD if the listing of roles exists in AWS?  The wonderful concept of application programming interfaces (APIs) swoops in and saves the day.  Don’t get me wrong, if you hate yourself you can certainly provision them manually by modifying the application manifest file every time a new role is created or deleted.  However, there is an easier route: having Azure AD pick up those roles directly from AWS on an automated schedule.  How does this work?  Well, nothing demonstrates it better than querying the roles from the AWS API ourselves.

The AWS SDK for .NET makes querying the API incredibly easy.  We’re not stuck worrying about assembling the request and signing it ourselves.  The script I used comes out to roughly six lines of PowerShell.
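My exact script isn’t reproduced here, but a minimal sketch along the same lines looks like the following.  It assumes the AWS Tools for PowerShell module (which wraps the AWS SDK for .NET) is installed; the credentials and region are placeholders:

    # List the IAM roles in the account using read-only credentials.
    Import-Module AWSPowerShell
    Set-AWSCredential -AccessKey 'AKIA...' -SecretKey '<secret access key>'   # placeholder credentials
    Set-DefaultAWSRegion -Region us-east-1
    Get-IAMRoleList | Select-Object RoleName, Arn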


The result is a listing of the roles configured in AWS, which includes the AzureADEC2Admins role I created earlier.  This example demonstrates the power a robust API brings to the table when integrating cloud services.


When Microsoft speaks of provisioning in regard to the AWS integration, they are talking about provisioning the roles defined in AWS into the application manifest file in Azure AD.  This provides us with the ability to assign the roles from within the Azure Portal, as we’ll see later.  This differs from many of the Azure AD integrations I’ve observed in the past, where provisioning creates a record for the user in the software as a service (SaaS) offering.  Below is a simple diagram of the provisioning process.

[Diagram: the provisioning process]

To support provisioning we need to navigate to the AWS Management Console, open the Services menu, and select IAM.  We then select Users and hit the Add User button.  I named the user AzureAD, gave it the programmatic access type, and attached the IAMReadOnlyAccess policy.  AWS then presented me with the access key ID and secret access key I’ll need to provide to Azure AD.  Yes, we are going to follow security best practices and grant the account the minimum rights and permissions it needs to provide the functionality.  The Microsoft tutorial instructs you to generate the credentials under the context of the AWS administrator, effectively giving the application full rights to the AWS account.  No Microsoft, just no.
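If you’d rather script those console steps, a rough equivalent using the AWS Tools for PowerShell would look like the sketch below (the cmdlets map to the CreateUser, AttachUserPolicy, and CreateAccessKey API calls):

    # Create the provisioning account and scope it to read-only access to IAM.
    New-IAMUser -UserName AzureAD
    Register-IAMUserPolicy -UserName AzureAD -PolicyArn 'arn:aws:iam::aws:policy/IAMReadOnlyAccess'
    New-IAMAccessKey -UserName AzureAD   # note the AccessKeyId and SecretAccessKey returned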

I next bounce back to the Azure Portal and the AWS application configuration.  From here I select the Provisioning option, switch the drop-down box to Automatic, plug the access key ID into the clientsecret field, and plug the secret access key into the Secret Token field.  A quick test connection shows success, and I then save the configuration.  Note that you must first save the configuration before you can turn on the synchronization.


After the screen refreshes I move down to the Settings section, turn the Provisioning Status to On, and set the Scope to Sync only assigned users and groups (kind of a moot point for this integration, but oh well).  I then save the configuration once again and give it about 10 minutes to pull down the roles.

I then navigate back to the Users and Groups section and edit the Rick Sanchez assignment.  Hitting the role option now shows me the AzureADEC2Admins role I configured in AWS IAM.



Let’s take another look at the service principal representing the AWS application in PowerShell.  Using the Azure AD PowerShell cmdlets I referenced in part 2, we connect to Azure AD and run the Get-AzureADServicePrincipal cmdlet, which shows the manifest has been updated to include the newly synchronized application role.
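Here’s a quick sketch of that check using the AzureAD module; the search string is an assumption based on the gallery application’s display name:

    # Inspect the app roles synchronized into the AWS service principal.
    Connect-AzureAD
    $sp = Get-AzureADServicePrincipal -SearchString 'Amazon Web Services'
    $sp.AppRoles | Select-Object DisplayName, Value, Id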


We’ve configured the SAML trust on both ends, defined the necessary attributes, set up synchronization, and assigned Rick Sanchez an IAM role. In a moment we’ll demonstrate all of the pieces coming together.

Before I wrap it up, I want to quickly mention a few issues I ran into with this integration that seemed to resolve themselves without any intervention.

  1. Up until a few nights ago I was unable to get the provisioning piece working.  I’m not putting it past user error (this is me we’re talking about), but I tried numerous times without success before it finally worked a few nights ago.  I also noticed recent comments on the Microsoft tutorial from people reporting similar errors.  Maybe something broke for a bit?
  2. The value of the audience attribute in the audienceRestriction section of the SAML assertion generated by Azure AD doesn’t match the identifier within the AWS federation metadata.  Azure AD inserts a garbage-looking audience value by default, which was causing the assertions to be rejected by AWS.  After setting the identifier to the value of urn:amazon:webservices, as referenced in the AWS federation metadata, the assertion was consumed without issue.  I saw similar complaints in the Microsoft tutorial comments, so I’m fairly confident this wasn’t just my issue.  The story gets a bit stranger.  I wanted to demonstrate the behavior for this series by removing the identifier I had previously added.  Oddly enough, the assertion was consumed without issue by AWS, even though I verified using Fiddler that the audience value was still populated with that garbage entry.  Either way, I would err on the side of caution and recommend populating the identifier with the entry referenced in the AWS metadata, as shown in the snippet below.
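For reference, this is the element in question within the assertion (a trimmed SAML 2.0 snippet; the timestamp values are omitted):

    <saml:Conditions NotBefore="..." NotOnOrAfter="...">
      <saml:AudienceRestriction>
        <!-- must match the identifier in the AWS federation metadata -->
        <saml:Audience>urn:amazon:webservices</saml:Audience>
      </saml:AudienceRestriction>
    </saml:Conditions>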

The last thing I want to point out is that the Microsoft tutorial states you are required to create the users in AWS prior to asserting their identity.  This is inaccurate; AWS does not require a user record to be pre-created.  This is different from the majority (if not all) of the SaaS integrations I’ve done in the past, so it surprised me as well.  Either way, it’s not required, which is a nice benefit if you’ve ever had to deal with the challenge of managing the identity lifecycle across cloud offerings.

Let’s wrap up this series by having Rick Sanchez log into the AWS Management Console and shut down an EC2 instance.  Here I have logged into the Windows 10 machine named CLIENT running in Azure.  We navigate to https://myapps.microsoft.com and log into Azure AD as Rick Sanchez.  We then hit the Amazon Web Services icon and are seamlessly logged into the AWS Management Console.


Examining the assertion in Fiddler shows the Role and RoleSessionName claims.


Navigating to the EC2 Dashboard displays the instance I prepared earlier using my primary account.  Rick has full rights over administration of the instance for activities such as starting and stopping it.  After successfully terminating the instance, I log into the AWS Management Console with my primary AWS account, go to CloudTrail, and see the log entries recording the activities of Rick Sanchez.


With that let’s cover some key pieces of information to draw from the series.

  1. The Azure AD and AWS integration differs from most SaaS integrations I’ve done when it comes to user provisioning.  Most of the time a user record must exist prior to the user authenticating.  There are a growing number of SaaS providers provisioning upon successful authentication as provisioning challenges become a barrier to further consumption of cloud services, but they are still few and far between.  AWS does a solid job of eliminating the pain of pre-provisioning users.
  2. The concept of associating roles with specific identity providers is really neat on Amazon’s part.  It allows the customer to manage permissions and associate those permissions with roles in AWS, but delegate the right to assert a specific set of roles on a per-identity-provider basis (see the trust policy sketch after this list).
  3. Microsoft’s definition of provisioning in this integration is pulling a listing of roles from AWS and making them configurable in the Azure Portal.
  4. The AWS API is solid and quite easy to leverage when using the AWS SDKs. I would like to see AWS switch from what seems to be a proprietary method of application access (its Signature Version 4 request signing) to OAuth to become more aligned with the rest of the industry.
  5. Don’t trust vendors to make everything point and click. Take the time to understand what’s going on in the background. In a SAML integration such as this, a quick review of the metadata can save you a lot of headaches when troubleshooting issues.
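To make the second takeaway concrete, here’s a minimal sketch of the trust policy on such a role; the account ID and provider name are placeholders:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": { "Federated": "arn:aws:iam::111111111111:saml-provider/AzureAD" },
          "Action": "sts:AssumeRoleWithSAML",
          "Condition": {
            "StringEquals": { "SAML:aud": "https://signin.aws.amazon.com/saml" }
          }
        }
      ]
    }

Only assertions from the federated provider named in Principal can be used to assume the role, which is what lets AWS delegate role assertion on a per-identity-provider basis.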

I learned a ton about AWS over these past few weeks and also got some good deep dive time into Azure AD which I haven’t had time for in a while.  Hopefully you found this series valuable and learned a thing or two yourself.

In my next series I plan on writing a simple application to consume the Cognito service offered by AWS.  For those of you more familiar with the Microsoft side of the fence, it’s similar to Azure AD B2C, but with some unique features Microsoft hasn’t put in place yet, making it a great option to solve those B2C identity woes.

Thanks and have a wonderful holiday!