Exploring Azure AD Privileged Identity Management (PIM) – Part 2 – Setup

Welcome back to part 2 of my series on Azure Active Directory Privileged Identity Management (AAD PIM).  In the first post I covered the basics of the service.  If you haven’t read it yet, take a few minutes to read through it because I’ll be jumping right into using the service going forward.  In this post I’m going to cover the setup process for AAD PIM.

Before you can begin using AAD PIM, you’ll need to purchase a license that includes the capability.  As we saw in my last post, at this time that means a standalone Azure AD Premium P2 or Enterprise Mobility + Security E5 license.  Once the license is registered to your tenant, you’ll be able to set up AAD PIM.

Your first step is to log into the Azure Portal.  After you’ve logged in you’ll want to click the Create a Resource button and search for Azure AD Privileged Identity Management.

1pim1.png

Select the search result and the AAD PIM application will be displayed along with a Create button.  Click the Create button to spin the service up for your tenant.

1pim2.png

It only takes a few seconds, after which you’ll be informed the service has been successfully spun up and given the option to add a link to your dashboard.

1pim3.png

The global admin who added AAD PIM to the tenant becomes the first member of the Privileged Role Administrator role.  This is a new role that was introduced with the service.  Members of this role are your administrators of AAD PIM and have full read and write access to it.  Be aware that other global admins, security administrators, and security readers only have read access to it.  As soon as you successfully spin up the service, you’ll want to add another Privileged Role Administrator as a backup.
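If you’d rather script that backup assignment, a minimal sketch using the AzureAD PowerShell module is below.  The UPN is a placeholder, and note this route creates a permanent assignment, which is what you want for a break-glass backup.

# Connect to the tenant (prompts for credentials)
Connect-AzureAD

# Find the Privileged Role Administrator directory role
$role = Get-AzureADDirectoryRole | Where-Object { $_.DisplayName -eq 'Privileged Role Administrator' }

# Look up the backup admin (placeholder UPN)
$user = Get-AzureADUser -ObjectId 'backup.admin@geekintheweeds.com'

# Add the user as a permanent member of the role
Add-AzureADDirectoryRoleMember -ObjectId $role.ObjectId -RefObjectId $user.ObjectId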

Upon opening AAD PIM for the first time, you’ll be presented with a consent page as seen below.  The consent process requires confirmation of the user’s identity using Azure MFA.  If the user isn’t already enabled for Azure MFA, it will be configured at this point.

1pim4.png

After successfully authenticating with Azure MFA, the screen updates to show the status check was completed, as seen below.  This is a great example of Microsoft exercising the concept of step-up authentication.  The user may have authenticated to the Azure Portal with a password or perhaps a still-valid session cookie.  By prompting for an Azure MFA challenge, the assurance that the user really is who they claim to be is that much higher, reducing the risk of the user accessing such sensitive configuration options.

1pim5

After clicking the Consent button the service becomes fully usable.  The primary menu options are displayed as seen in the picture below.  For the purpose of this post we’re going to click on the Azure AD directory roles option under the Manage section.

1pim6.png

The Manage section of the menu is refreshed and a number of new options are displayed.  Before I jump into the Wizard, I’ll navigate through each option in the section to explain its purpose.

1pim7.png

The Roles option gives us a view of all of the users who are members of privileged roles within Azure AD and Office 365.  The Activation column shows whether the user is a permanent or eligible admin.  The Expiration column shows any user that is eligible and has actively requested and been approved for temporary access to the privileged role.  As you can see from the screenshot of my test tenant, I have a number of users in the Global Administrator role, which is a big no-no.  We’ll remediate that in a bit using the Wizard.

1pim8.png

Selecting the Add user button brings up a new screen where new users can be configured for privileged roles.  Microsoft has done a good job of giving AAD PIM the capability of managing a multitude of Azure AD and Office 365 roles.  Adding users to roles through this tool automatically makes them eligible for the role rather than permanent members, as adding them through other means would.

1pim9.png

The Filter button allows for robust filtering options including the permission state (all, eligible, permanent), activation state (all, active, or inactive), and role.  The Refresh button’s function is obvious, and the Group option allows you to group the data either by user or by role.  The Review button allows you to kick off an access review, which we’ll cover in a later post.  Lastly we have the Export button, which exports the data to a CSV.

The Users option under the Manage section presents the same options as the Roles option except it takes a user-centric view.

The Alerts option under the Manage section displays the alerts referenced here.  You can see it is alerting me to the fact that I have too many permanent global admins configured for my tenant.  I also have the option to run a manual scan rather than waiting on the next automatic scan.

1pim10.png

The Access Reviews option under the Manage section is used to create new access reviews.  I’ll cover the capability in a future post.

Skipping over the Wizard option for a moment, we have the Settings option.  Here we can configure a variety of settings for roles, alerts, and access reviews.

Let’s dig into the settings for roles first.

1pim11.png

Here we can configure the default settings for all roles as well as settings specific to one role.  When a user successfully activates a privileged role, the membership in that role is time-bound with a default of one hour.  If after doing some baselining we find one hour is insufficient, we could bump it up to something higher.  We can also configure notifications to alert administrators when a role is activated.  There is also the option to require an incident or request reference that may map back to an internal incident management or request management system.  Azure MFA can be required when a user activates a role.  You’ll want to be aware that the MFA setting is automatically enforced for roles Microsoft views as critical, such as global administrator.

Finally we have the option to require an approval.  If you’ve played around with AAD PIM since preview, you may remember the approval workflow.  For some reason the product team removed it when AAD PIM originally went generally available.  This effectively meant users could elevate their access whenever they wanted.  Sure, they weren’t permanent members, but there were no checks and balances.  For organizations with a high security posture this made AAD PIM of little value and forced on-demand management of privileged roles to be done using complicated PowerShell scripts or third-party tools integrated with the Graph API.  It’s wonderful to see the product team responded to customer feedback and has added the feature back.
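For a flavor of what those Graph API integrations looked like, here’s a rough sketch of a self-activation call against the preview-era beta endpoint.  The endpoint path, property names, and role ID are assumptions from my memory of the beta API, so verify them against current documentation; $token is presumed to hold a bearer token for Microsoft Graph.

# Role object ID is a placeholder; duration is in hours
$body = @{
    reason   = 'INC0012345 - emergency change'
    duration = '1'
} | ConvertTo-Json

# POST to the beta PIM endpoint to activate the eligible role for the calling user
Invoke-RestMethod -Method Post `
    -Uri 'https://graph.microsoft.com/beta/privilegedRoles/<role-id>/selfActivate' `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType 'application/json' `
    -Body $body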

Toggling to Enable for the require approval option adds a section where you can select approvers for requests for the role.

1pim12.png

Moving on to the Alerts settings we have the ability to configure parameters for some of the alerting as can be seen from the examples below.

1pim13.png

The default values for the configurable thresholds around the “There are too many global administrators” alert should be a good wake-up call to organizations as to the risk Microsoft associates with global admin access.  The thresholds for the “Roles are being activated too frequently” alert should be left at the defaults until the behavior of your user base is better understood.  This will help you identify deviations from standard behavior indicating a possible threat, as well as opportunities to improve the user experience by bumping up the activation time span for privileged roles where the default one-hour activation window is insufficient.

Lastly we have the Access Review settings.  Here we can enable or disable mail notifications to reviewers at the beginning and end of an access review.  Reminders can also be sent to reviewers who have not completed a review they are a part of.  A very welcome feature is the option to require reviewers to provide reasons for their approvals.  This can be helpful to capture business requirements as to why a user needs continued access to a role.  Finally, the default access review duration can be adjusted.

1pim14

Now on to the Wizard.  The Wizard is a great tool to use when you first configure AAD PIM in order to get it up and running and begin capturing behavioral patterns.  The steps within the Wizard are outlined below.

1pim15.png

The Discover privileged roles step displays a simple summary of the privileged roles in use and the number of permanent and eligible users in each.  We can see from the below that my tenant has exceeded one of the default thresholds (3 global admins, or more than 10% of users) for the “There are too many global admins” alert.  Selecting any of the roles displays a listing of the users holding permanent or eligible membership in the role.

1pim16.png

Clicking the Next button brings us to the “Convert users to eligible” step where we can begin converting permanent members to eligible members.  From a best practices perspective, you should ensure you keep at least two permanent members in the Privileged Role Administrator role to avoid being locked out if one account becomes unavailable.  You can see that I’m making Ash Williams and Jason Voorhies eligible members of the Global Administrator role.

1pim17.png

After clicking the Next button I’m moved to the “Review the changes to your users in the privileged roles” step.  I commit the changes by hitting the OK button and my two users are now set up as eligible members of the roles.

1pim18.png

As you’ve seen throughout the post, AAD PIM is incredibly easy to configure.  I firmly believe that the only successful security solutions moving forward will be those that are simple to use and transparent to the users.  These two traits will allow security professionals to spend less of their time on convoluted solutions and more time working directly with the business to drive real value for the organization.

I’m going to start something new with a quick bulleted list of key learning points that I came across while performing the lab and doing the research for the post.

  • AAD PIM can be configured after the first Azure AD Premium P2 or EMS E5 license is associated with the tenant
  • Be aware that at this time Microsoft does not enforce a technical control to prevent all users from benefiting from PIM, but the licensing requirements require an individual license for each user benefiting from the feature.  Make sure you’re compliant with the licensing requirements and don’t build processes around what technical controls exist today.  They will change.
  • Once AAD PIM is activated by the first global admin, immediately assign a second user permanent membership in the Privileged Role Administrators role.

That’s it folks.  In the next post in my series I’ll take a look at what the user experience is like for a requestor and an approver.  I’ll also look at some Fiddler captures to see if I can capture any detail as to how/if the modified privileges are reflected in the logical security token.

Thanks!

 

Exploring Azure AD Privileged Identity Management (PIM) – Part 1

We’re going to take a break from Azure Information Protection and shift our focus to Azure Active Directory Privileged Identity Management (AAD PIM).

If you’ve ever had to manage an application, you’re familiar with the challenge of trying to keep a balance between security and usability when it comes to privileged access.  In many cases you’re stuck with users that have permanent membership in privileged roles because the impact to usability of the application is far too great to manage that access on an “as needed” basis, or as we refer to it in the industry, “just in time” (JIT).  If you do manage to remove that permanent membership requirement (often referred to as standing privileged access) you’re typically stuck with a complicated automation solution or a convoluted engineering solution that gives you security but at the cost of usability and increased operational complexity.

Not long ago the privileged roles within Azure Active Directory (AAD), Office 365 (O365), and Azure Role-Based Access Control had this same problem.  Either a user was a permanent member of the privileged role or you had to string together some type of request workflow that interacted with the Graph API or triggered a PowerShell script.  In my first entry into Azure AD, I had a convoluted manual process which involved requests, approvals, and a centralized password management system.  It worked, but it definitely impacted productivity.

Thankfully Microsoft (MS) has addressed this challenge with the introduction of Azure AD Privileged Identity Management (AAD PIM).  In simple terms AAD PIM introduces the concept of an “eligible” administrator which allows you to achieve that oh so wonderful JIT.  AAD PIM is capable of managing a wide variety of roles which is another area Microsoft has made major improvements on.  Just a few years ago close to everything required being in the Global Admin role which was a security nightmare.

In addition to JIT, AAD PIM also provides a solid level of logging and analytics, a centralized view into what users are members of privileged roles, alerting around the usage of privileged roles, approval workflow capabilities (love this feature), and even provides an access review capability to help with access certification campaigns.  You can interact with AAD PIM through the Azure Portal, Graph API, or PowerShell.
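As a quick taste of the PowerShell route, a sketch is below.  The module and cmdlet names are as I recall them from the preview-era PIM module, so treat them as assumptions and check the current documentation before relying on them.

# Install and connect to the PIM service (module name as I recall it; verify before use)
Install-Module -Name Microsoft.Azure.ActiveDirectory.PIM.PSModule
Connect-PimService

# List the privileged role assignments in the tenant
Get-PrivilegedRoleAssignment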

To get JIT you’ll need an Azure Active Directory Premium P2 or Enterprise Mobility + Security E5 license.  Microsoft states that every user that benefits from the feature requires a license.  While this is a licensing requirement, it’s not technically enforced, as we’ll see in my upcoming posts.

You’re probably saying, “Well this is all well and good Matt, but there is nothing here I couldn’t glean from Microsoft documentation.”  No worries my friends, we’ll be using this series to walk through and demonstrate the capabilities so you can see them in action.  I’ll also be breaking out my favorite tool Fiddler to take a look behind the scenes at how Microsoft manages to elevate access for the user after a privileged role has been activated.

 

The Evolution of AD RMS to Azure Information Protection – Part 7 – Deep Dive into cross Azure AD tenant consumption

Each time I think I’ve covered what I want to for Azure Information Protection (AIP), I think of another fun topic to explore.  In this post I’m going to look at how AIP can be used to share information with users that exist outside your tenant.  We’ll be looking at the scenario where an organization has a requirement to share protected content with another organization that has an Office 365 tenant.

Due to my requirements to test access from a second tenant, I’m going to supplement the lab I’ve been using.  I’m adding to the mix my second Azure AD tenant at journeyofthegeek.com.  Specific configuration items to note are as follows:

  • The tenant’s custom domain of journeyofthegeek.com is an Azure AD (AAD)-managed domain.
  • I’ve created two users for testing.  The first is named Homer Simpson (homer.simpson@journeyofthegeek.com) and the second is Bart Simpson (bart.simpson@journeyofthegeek.com).
  • Each user has been licensed with Office 365 E3 and Enterprise Mobility + Security E5 licenses.
  • Three mail-enabled security groups have been created.  The groups are named The Simpsons (thesimpsons@journeyofthegeek.com), JOG Accounting (jogaccounting@journeyofthegeek.com), and JOG IT (jogit@journeyofthegeek.com).
  • Homer Simpson is a member of The Simpsons and JOG Accounting while Bart Simpson is a member of The Simpsons and JOG IT.
  • Two additional AIP policies have been created in addition to the Global policy.  One policy is named JOG IT and one is named JOG Accounting.
  • The Global AIP policy has an additional label created named PII that enforces protection.  The label is configured to detect at least one occurrence of a US social security number.  The label’s protection policy grants members of The Simpsons group only the Viewer role.
  • The JOG Accounting and JOG IT AIP policies have both been configured with an additional label of either JOG Accounting or JOG IT.  A sublabel for each label has also been created which enforces protection and restricts members of the relevant departmental group to the role of viewer.
  • I’ve repurposed the GIWCLIENT2 machine and have created two local users named Bart Simpson and Homer Simpson.

Once I had my tenant configuration up and running, I initialized Homer Simpson on GIWCLIENT2.  I already had the AIP Client installed on the machine, so upon first opening Microsoft Word, the same bootstrapping process I described in my previous post occurred for the MSIPC client and the AIP client.  Notice that the Confidential \ All Employees label has been applied to the document automatically, as configured in the Global AIP policy.  Notice also the Custom Permissions option, which is presented to the user because I’ve enabled the appropriate setting in the relevant AIP policies.

7aip1.png

I’ll be restricting access to the document by allowing users in the geekintheweeds.com organization to hold the Viewer role.  The geekintheweeds.com domain is associated with my other Azure AD tenant that I have been using for the lab for this series of posts.  First thing I do is change the classification label from Confidential \ All Employees to General.  That label is a default label provided by Microsoft which has an RMS Template applied that restricts viewers to users within the tenant.

One interesting finding I discovered through my testing is that the user can go through the process of protecting with custom permissions using a label that has a pre-configured template and the AIP client won’t throw any errors, but the custom permissions won’t be applied.  This makes perfect sense from a security perspective, but it would be nice to inform the user with an error or warning.  I can see this creating unnecessary help desk calls with how it’s configured now.

When I attempt to change my classification label to General, I receive a prompt requiring me to justify the drop in classification.  This is yet another setting I’ve configured in my Global AIP policy.  This seems to be a standard feature in most data classification solutions from what I’ve observed in another major vendor.

7aip2.png

After successfully classifying the document with the General label, protection is removed from the document.  At this point I can apply my custom permissions as seen below.

7aip3.png

I repeated the process for another protected doc named jog_protected_for_Ash_Williams.docx with permissions restricted to ash.williams@geekintheweeds.com.  I packaged both files into an email and sent them to Ash Williams who is a user in the Geek In The Weeds tenant.  Keep in mind the users in the Geek In The Weeds tenant are synchronized from a Windows Active Directory domain and use federated authentication.

After opening Outlook, the email from Homer Simpson arrives in Ash Williams’ inbox.  At this point I copied the files to my desktop, closed Outlook, opened Microsoft Word, used the “Reset Settings” option of the AIP client, and signed out of my Office profile.

7aip4

At this point I started Fiddler and opened one of the Microsoft Word documents.  Microsoft Word pops up a login prompt where I type in my username of ash.williams@geekintheweeds.com and I’m authenticated to Office 365 through the standard federated authentication flow.  The document then pops open.

7aip5.png

Examining the Fiddler capture we see a lot of chatter. Let’s take a look at this in chunks, first addressing the initial calls to the AIP endpoint.

7aip6

If you have previous experience with the MSIPC client in the AD RMS world you’ll recall that it makes its calls in the following order:

  1. Searches HKLM registry hive
  2. Searches HKCU registry hive
  3. Web request to the RMS licensing pipeline for the RMS endpoint listed in the metadata attached to the protected document

In my previous deep dives into AD RMS we observed this behavior in action.  In the AIP world, it looks like the MSIPC client performs similarly.  The endpoint we see it first contacting is the Journey of the Geek tenant’s endpoint, which starts with 196d8e.
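As an aside, you can inspect those registry locations yourself.  The AD RMS-to-AIP migration guidance uses overrides under the MSIPC ServiceLocation key to redirect clients; a hedged sketch is below, with the tenant URL as a placeholder.

# Check for an existing service discovery override in the HKLM hive
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\MSIPC\ServiceLocation\EnterpriseCertification' -ErrorAction SilentlyContinue

# Seed an override pointing the client at an AIP certification endpoint (placeholder tenant URL)
$path = 'HKLM:\SOFTWARE\Microsoft\MSIPC\ServiceLocation\EnterpriseCertification'
New-Item -Path $path -Force | Out-Null
Set-ItemProperty -Path $path -Name '(Default)' -Value 'https://<tenant-id>.rms.na.aadrm.com/_wmcs/certification'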

The client first sends an unauthenticated HTTP GET to the Server endpoint in the licensing pipeline. The response the server gives is a list of available SOAP functions which include GetLicensorCertificate and GetServerInfo as seen below.

7aip8.png

The client follows up with the actions below:

  1. Now that the client knows the endpoint supports the GetServerInfo SOAP function, it sends an unauthenticated HTTP POST which includes the SOAP action of GetServerInfo.  The AIP endpoint returns a response which includes the capabilities of the AIP service and the relevant endpoints for certification and the like.
  2. It uses the information received from the previous request to send an unauthenticated HTTP POST which includes the SOAP action of ServiceDiscoveryForUser.  The service returns a 401.

At this point the client needs to obtain a bearer access token to proceed.  This process is actually pretty interesting and warrants a closer look.

7aip9

Let’s step through the conversation:

  1. We first see a connection opened to odc.officeapps.live.com and an unauthenticated HTTP GET to the /odc/emailhrd/getfederationprovider URI with a query string of geekintheweeds.com.  This is a home realm discovery process trying to determine the provider for the user’s email domain.

    My guess is this is MSAL in action, allowing support for multiple IdPs like Azure AD, Microsoft Live, Google, and the like.  I’ll be testing this theory in a later post where I test consumption by a Google user.

    The server responds with a number of headers containing information about the token endpoints for Azure AD (since this is a domain associated with an Azure AD tenant).

    7aip10.png

  2. A connection is then opened to odc.officeapps.live.com and an unauthenticated HTTP GET is made to /odc/emailhrd/getidp with the email address of my user, ash.williams@geekintheweeds.com.  The response is interesting in that I would have thought it would return the user’s tenant ID.  Instead it returns a JSON response of OrgId.

    7aip11.png

    Since I’m a nosey geek, I decided to unlock the session for editing.  First I put in an email address associated with a Microsoft Live account.  Instead of OrgId it returned MSA, which indicates it detected a Microsoft Live account.  I then plugged in a @gmail.com account to see if I would get back Google, but instead I received neither.  OrgId seems to indicate that it’s an account associated with an Azure AD tenant.  Maybe the client performs alternative steps depending on whether the account is MSA or Azure AD in future steps?  No clue.

  3. Next, a connection is made to the oauth2 endpoint for the journeyofthegeek.com tenant.  The machine makes an unauthenticated request for an access token for https://api.aadrm.com/ in order to impersonate Ash Williams.  Now if you know your OAuth, you know the user needs to authenticate and approve the access before the access token can be issued.  The response from the oauth2 endpoint is a redirect over to the AD FS server so the user can authenticate.

    7aip12.png

  4. After the user successfully authenticates, he is returned a security token and redirected back to login.microsoftonline.com where the assertion is posted and the user is successfully authenticated and is returned an authorization code.

    7aip13.png

  5. The machine then takes that authorization code and posts it to the oauth2 endpoint for my journeyofthegeek.com tenant.  It receives back an OpenID Connect ID token for ash.williams, a bearer access token, and a refresh token for the Azure RMS API.

    7aip14.png

    Decoding the bearer access token we come across some interesting information (a quick way to decode it yourself is sketched after this list).  We can see the audience for the token is the Azure RMS API, the issuer of the token is the tenant ID associated with journeyofthegeek.com (interesting, right?), and the identity provider for the user is the tenant ID for geekintheweeds.com.

    7aip15.png

  6. After the access token is obtained the machine closes out the session with login.microsoftonline.com and of course dumps a bunch of telemetry (can you see the trend here?).

    7aip16.png

  7. A connection is again made to odc.officeapps.live.com and the /odc/emailhrd/getfederationprovider URI with an unauthenticated request which includes a query string of geekintheweeds.com. The same process as before takes place.
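If you want to crack open a bearer token yourself, the payload is just base64url-encoded JSON.  A minimal sketch follows; it does no signature validation and is purely for inspection.

# $token holds the raw JWT in the form header.payload.signature
$payload = $token.Split('.')[1].Replace('-', '+').Replace('_', '/')

# Pad the base64 string out to a multiple of four characters
switch ($payload.Length % 4) {
    2 { $payload += '==' }
    3 { $payload += '=' }
}

# Decode and display the claims (aud, iss, idp, and so on)
[System.Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($payload)) | ConvertFrom-Json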

Exhausted yet?  Well it’s about to get even more interesting if you’re an RMS nerd like myself.

7aip17.png

Let’s talk through the sessions above.

  1. A connection is opened to the geekintheweeds.com /wmcs/certification/server.asmx AIP endpoint with an unauthenticated HTTP POST and a SOAP action of GetServerInfo.  The endpoint responds as we’ve observed previously with information about the AIP instance including features and endpoints for the various pipelines.
  2. A connection is opened to the geekintheweeds.com /wmcs/oauth2/servicediscovery/servicediscovery.asmx AIP endpoint with an unauthenticated HTTP POST and a SOAP action of ServiceDiscoveryForUser.  We know from the bootstrapping process I covered in my last post, that this action requires authentication, so we see the service return a 401.
  3. A connection is opened to the geekintheweeds.com /wmcs/oauth2/certification/server.asmx AIP endpoint with an unauthenticated HTTP POST and SOAP action of GetLicensorCertificate.  The SLC and its chain are returned to the machine in the response.
  4. A connection is opened to the geekintheweeds.com /wmcs/oauth2/certification/certification.asmx AIP endpoint with an unauthenticated HTTP POST and SOAP action of Certify.  Again, we remember from my last post that this requires authentication, so the service again responds with a 401.

What we learned from the above is that the bearer access token the client obtained earlier isn’t intended for the geekintheweeds.com AIP endpoint, because we never see it used.  So how will the machine complete its bootstrap process?  Well, let’s see.

  1. A connection is opened to the journeyofthegeek.com /wmcs/oauth2/servicediscovery/servicediscovery.asmx AIP endpoint with an unauthenticated HTTP POST and SOAP action of ServiceDiscoveryForUser.  The service returns a 401, after which the client makes the same connection and HTTP POST again, but this time including the bearer access token it retrieved earlier (a sketch of this call pattern follows the list).  The service provides a response with the relevant pipelines for the journeyofthegeek.com AIP instance.
  2. A connection is opened to the journeyofthegeek.com /wmcs/oauth2/certification/server.asmx AIP endpoint with an authenticated (bearer access token) HTTP POST and SOAP action of GetLicensorCertificate.  The service returns the SLC and its chain.
  3. A connection is opened to the journeyofthegeek.com /wmcs/oauth2/certification/certification.asmx AIP endpoint with an authenticated (bearer access token) HTTP POST and SOAP action of Certify.  The service returns a RAC for ash.williams@geekintheweeds.com along with the relevant SLC and chain.  Wait, what?  A RAC from the journeyofthegeek.com AIP instance for a user in geekintheweeds.com?  Well folks, this is supported through RMS’s support for federation.  Since all Azure AD tenants in a given offering (commercial, gov, etc.) come pre-federated, this use case is supported.
  4. A connection is opened to the journeyofthegeek.com /wmcs/licensing/server.asmx AIP endpoint with an unauthenticated HTTP POST and SOAP action of GetServerInfo.  We’ve covered this enough to know what’s returned.
  5. A connection is opened to the journeyofthegeek.com /wmcs/licensing/publish.asmx AIP endpoint with an authenticated (bearer access token) HTTP POST and SOAP action of GetClientLicensorandUserCertificates.  The server returns the CLC and EUL to the user.
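To make the pattern in steps 1 and 2 concrete: the only difference between the 401 and the success is the Authorization header.  A rough sketch is below; the host name and SOAPAction value are placeholders, and the SOAP envelope itself isn’t reproduced here.

# Placeholder endpoint host; $token is the bearer access token obtained earlier
$uri = 'https://<tenant-id>.rms.na.aadrm.com/_wmcs/oauth2/servicediscovery/servicediscovery.asmx'

# Placeholder body; the real request carries a ServiceDiscoveryForUser SOAP envelope
$soapEnvelope = '<!-- ServiceDiscoveryForUser SOAP envelope omitted -->'

# Without the Authorization header this returns a 401; with it, the pipeline URLs come back
Invoke-WebRequest -Method Post -Uri $uri `
    -Headers @{ 'SOAPAction' = '<soap-action-uri>'; 'Authorization' = "Bearer $token" } `
    -ContentType 'text/xml; charset=utf-8' `
    -Body $soapEnvelope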

After this our protected document opens in Microsoft Word.

7aip18.png

Pretty neat, right?  Smart move by Microsoft to take advantage of and build upon the federated capabilities built into AD RMS.  This is another example showing just how far ahead of the game the product team for AD RMS was.  Heck, there are SaaS vendors that still don’t support SAML, let alone on-premises products from 10 years ago.

In the next few posts (can you tell I find RMS fascinating yet?) of this series I’ll explore how Microsoft has integrated AIP into OneDrive, SharePoint Online, and Exchange Online.

Have a great week!

The Evolution of AD RMS to Azure Information Protection – Part 4 – Preparation and Server-Side Migration

The time has finally come to get our hands dirty.  Welcome to my fourth post on the evolution of Active Directory Rights Management Service (AD RMS) to Azure Information Protection (AIP).  So far in this series I’ve done an overview of the service, a comparison of the architectures, and covered the key planning decisions that need to take place when migrating from AD RMS to AIP.  In this post I’ll be performing the preparation and server-side migration steps for the migration from AD RMS to AIP.

Microsoft has done a wonderful job documenting the steps for the migration from AD RMS to AIP within the migration guide.  I’ll be referencing back to the guide as needed throughout the post.  Take a look at my first post for a refresher of my lab setup.  Take note I’ll be migrating LAB2 and will be leaving LAB1 on AD RMS.  Here are some key things to remember about the lab:

  • There is a forest trust between the JOG.LOCAL and GEEKINTHEWEEDS.COM Active Directory Forests
  • AD RMS Trusted User Domains (TUDs) have been configured on both JOG.LOCAL and GEEKINTHEWEEDS.COM

I’ve created the following users, groups, and AD RMS templates (I’ve been on an 80s/90s movies fix, so enjoy the names).

  • GEEKINTHEWEEDS.COM CONFIGURATION
    • (User) Jason Voorhies
      • User Principal Name Attribute: jason.voorhies@geekintheweeds.com
      • Mail Attribute: jason.voorhies@geekintheweeds.com
      • Group Memberships: Domain Users, GIW Employees, Information Technology
    • (User) Theodore Logan
      • User Principal Name Attribute: theodore.logan@geekintheweeds.com
      • Mail Attribute: theodore.logan@geekintheweeds.com
      • Group Memberships: Domain Users, GIW Employees, Information Technology
    • (User) Ash Williams
      • User Principal Name Attribute: ash.williams@geekintheweeds.com
      • Mail Attribute: ash.williams@geekintheweeds.com
      • Group Memberships: Domain Users, GIW Employees, Accounting
    • (User) Michael Myers
      • User Principal Name Attribute: michael.myers@geekintheweeds.com
      • Mail Attribute: michael.myers@geekintheweeds.com
      • Group Memberships: Domain Users, GIW Employees, Accounting
    • (Group) Accounting
      • Mail Attribute: giwaccounting@geekintheweeds.com
      • Group Type: Universal Distribution
    • (Group) GIW Employees
      • Mail Attribute: giwemployees@geekintheweeds.com
      • Group Type: Universal Distribution
    • (Group) Information Technology
      • Mail Attribute: giwit@geekintheweeds.com
      • Group Type: Universal Distribution
    • (Group) GIW AIP Users
      • Group Type: Global Security
    • (AD RMS Template) GIW Accounting
      • Users: giwaccounting@geekintheweeds.com
      • Rights: Full Control
    • (AD RMS Template) GIW Employees
      • Users: giwemployees@geekintheweeds.com
      • Rights: View, View Rights
    • (AD RMS Template) GIW IT
      • Users: giwit@geekintheweeds.com
      • Rights: Full Control
  • JOG.LOCAL CONFIGURATION

    • (User) Luke Skywalker
      • User Principal Name Attribute: luke.skywalker@jog.local
      • Mail Attribute: luke.skywalker@jog.local
      • Group Memberships: Domain Users, jogemployees
    • (User) Han Solo
      • User Principal Name Attribute: han.solo@jog.local
      • Mail Attribute: han.solo@jog.local
      • Group Memberships: Domain Users, jogemployees
    • (Group) jogemployees
      • Mail Attribute: jogemployees@jog.local
      • Group Type: Universal Distribution

After my lab was built I performed the following tests:

  • Protected Microsoft Word document named GIW_LS_ADRMS with GEEKINTHEWEEDS.COM AD RMS Cluster and successfully opened with Luke Skywalker user from JOG.LOCAL client machine.
  • Protected Microsoft Word document named GIW_GIWALL_ADRMS with GEEKINTHEWEEDS.COM AD RMS Cluster and GIW Employees template and unsuccessfully opened with Luke Skywalker user from JOG.LOCAL client machine.
  • Protected Microsoft Word document named GIW_JV_ADRMS with GEEKINTHEWEEDS.COM AD RMS Cluster using Theodore Logan user and opened successfully with Jason Voorhies user from GEEKINTHEWEEDS.COM client machine.
  • Protected Microsoft Word document named JOG_MM_ADRMS with JOG.LOCAL AD RMS Cluster using Luke Skywalker user and opened successfully with Michael Myers user from GEEKINTHEWEEDS.COM client machine.
  • Protected Microsoft Word document named GIW_ACCT_ADRMS with GEEKINTHEWEEDS.COM AD RMS Cluster and GIW Accounting template and was unsuccessful in opening with Jason Voorhies user from GEEKINTHEWEEDS.COM client machine.

These tests verified both AD RMS clusters were working successfully and the TUD was functioning as expected.  The lab is up and running, so now it’s time to migrate to AIP!

Our first step is to download the AADRM PowerShell module from Microsoft.  I went the easy route and used the Install-Module cmdlet.

AIP4PIC1.png

Back in March Microsoft announced that AIP would be enabled by default on any eligible tenants with O365 E3 or above that were added after February.  Microsoft’s migration guide specifically instructs you to ensure protection capabilities are disabled if you’re planning a migration from AD RMS to AIP.  This means we need to verify whether AIP is enabled.  To do that, we’re going to use the newly downloaded AADRM module to check the status (a sketch follows the screenshot below).

AIP4PIC2.png
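For reference, a minimal sketch of the install and status check follows (the AADRM module has since been superseded by the AIPService module, so check which one applies to your tenant).

# Install the AADRM module from the PowerShell Gallery
Install-Module -Name AADRM

# Connect to the Azure RMS service (prompts for tenant admin credentials)
Connect-AadrmService

# Display the protection service state (Enabled or Disabled)
Get-Aadrm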

As expected, the service is enabled.  We’ll want to disable the service before beginning the migration process by running the Disable-Aadrm cmdlet.  After running the command, we see that the functional state is now reporting as disabled.

AIP4PIC3.png

While we have the configuration data up, we’re going to grab the value (minus _wmcs/licensing) of the LicensingIntranetDistributionPointUrl property.  We’ll be using this later on in the migration process.
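A quick way to grab and trim that value from the same module:

# Pull the tenant's RMS configuration
$config = Get-AadrmConfiguration

# Strip the licensing pipeline suffix; we'll need the base URL later in the migration
$config.LicensingIntranetDistributionPointUrl -replace '_wmcs/licensing$', ''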

In most enterprise scenarios you’d want to perform a staged migration process of your users from AD RMS to AIP.  Microsoft provides for this with the concept of onboarding controls.  Onboarding controls allow you to manage who has the ability to protect content using AIP even when the service has been enabled at the tenant level.  Your common use case would be creating an Azure AD-managed or Windows AD-synced group which is used as your control group.  Users who are members of the group and are licensed appropriately would be able to protect content using AIP.  Other users within the tenant could consume the content but not protect it.

In my lab I’ll be using the GIW AIP Users group that is being synchronized to Azure AD from my Windows AD as the control group.  To use the group I’ll need to get its ObjectID which is the object’s unique identifier in Azure AD.  For that I used the Get-AzureADGroup cmdlet within Azure AD PowerShell module.

AIP4PIC6
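Putting it together, a sketch of scoping protection to the control group looks roughly like this (Set-AadrmOnboardingControlPolicy is the documented cmdlet; double-check the parameters against the migration guide):

# Get the ObjectID of the synced control group from Azure AD
$group = Get-AzureADGroup -SearchString 'GIW AIP Users'

# Restrict the ability to protect content to members of the control group
Set-AadrmOnboardingControlPolicy -UseRmsUserLicense $false -SecurityGroupObjectId $group.ObjectId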

Microsoft’s migration guide next suggests some configuration modifications to Windows computers within the forest.  I’m going to hold off on this for now and instead begin the server-side migration steps.

First up, we’re going to export the trusted publishing domains (TPDs) from the AD RMS cluster.  We do this to ensure that users who have migrated over to AIP are still able to open content that was previously protected by the AD RMS cluster.  The TPD includes the Server Licensor Certificate (SLC) keys, so when exporting them we protect them with a password and create an XML file that includes the SLC keys and any custom AD RMS rights policy templates.

AIP4PIC7.png

Next we import the exported TPD to AIP using the relevant process based upon how we chose to protect the cluster keys.  For this lab I used a software key (stored in the AD RMS database), so I’ll be using the software key to software key migration path.  Thankfully this path is quite simple and consists of running another cmdlet from the AADRM PowerShell module.  First we store the password used to protect the TPD file as a secure string, then use the Import-AadrmTpd cmdlet to pull it into AIP (a sketch follows below).  Notice the resulting data provides the cluster friendly name, indicates the Cryptographic Mode was set to 1, the key was Microsoft-managed (aka a software key), and there were three rights policy templates attached to the TPD.
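A sketch of those two commands, with the file path as a placeholder:

# Prompt for the password that protected the exported TPD file
$tpdPassword = Read-Host -AsSecureString -Prompt 'TPD protection password'

# Import the TPD into AIP and mark it as the active TPD
Import-AadrmTpd -TpdFile 'C:\Temp\GIW-TPD.xml' -ProtectionPassword $tpdPassword -Active $true -Verbose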

Keep in mind that if you have multiple TPDs for some reason (let’s say you migrated from Cryptographic Mode 1 to Cryptographic Mode 2) you’ll need to upload each TPD separately and set the active TPD using the Set-AadrmKeyProperties cmdlet.

AIP4PIC8.png

Running Get-AadrmTemplate shows the default templates Microsoft provides as well as the three templates I had configured in AD RMS.

AIP4PIC9

The last step of the server side of the process is to activate AIP.  For that we use the Enable-Aadrm cmdlet from the AADRM PowerShell module.

AIP4PIC10

At this point the server-side configuration steps have been completed and AIP is ready to go.  However, we still have some very important client-side configuration steps to perform.  We’ll cover those steps in my next post.

Have a great week!

AWS and Microsoft’s Cloud App Security

It seems like it’s become a weekly occurrence to have sensitive data exposed due to poorly managed cloud services.  Due to Amazon’s large market share with Amazon Web Services (AWS) many of these instances involve publicly-accessible Simple Storage Service (S3) buckets.  In the last six months alone there were highly publicized incidents with FedEx and Verizon.  While the cloud can be empowering, it can also be very dangerous when there is a lack of governance, visibility, and acceptance of the different security mindset cloud requires.

Organizations that have been in operation for many years have grown to be very reliant on the network boundary acting as the primary security boundary.  As these organizations begin to move to a software defined data center model this traditional boundary quickly becomes less and less effective.  Unfortunately for these organizations this, in combination with a lack of sufficient understanding of cloud, gives rise to mistakes like sensitive data being exposed.

One way in which an organization can protect itself is to leverage technologies such as cloud access security brokers (cloud security gateways if you’re a Forrester reader) to help monitor and control data as it travels between on-premises and the cloud.  If you’re unfamiliar with the concept of a CASB, I covered it in a previous entry and included a link to an article which provides a great overview.

Microsoft has its own CASB offering called Microsoft Cloud App Security (CAS).  It’s offered as part of Microsoft’s Enterprise Mobility and Security (EMS) E5/A5 subscription.  Over the past several months multiple connectors to third party software-as-a-service (SaaS) providers have been introduced, including one for AWS.  The capabilities with AWS are limited at this point to pulling administrative logs and user information but it shows promise.

As per usual, Microsoft provides an integration guide which is detailed in button pushing, but not so much in concepts and technical details as to what is happening behind the scenes.  Since the Azure AD and AWS blog series has attracted so many views, I thought it would be fun and informative to do an entry for how Cloud App Security can be used with AWS.

I’m not in the habit of re-creating documentation so I’ll be referencing the Microsoft integration guide throughout the post.

The first thing that needs doing is the creation of a security principal in AWS Identity and Access Management (AWS IAM) that will be used by your tenant’s instance of CAS to connect to resources in your AWS account.  The first four steps are straightforward, but step 5 could use a bit of an explanation.

awscas1.png

Here we’re creating a custom IAM policy for the security principal granting it a number of permissions within the AWS account.  IAM policies are sets of permissions which are attached to a human or non-human identity or an AWS resource and are evaluated when a call to the resource is made.  In the traditional on-premises world, you can think of it as something somewhat similar to a set of NTFS file permissions.  When the policy pictured above is created, the security principal is granted a set of permissions across all instances of CloudTrail, CloudWatch, and IAM within the account.

If you’re unfamiliar with AWS services, CloudTrail is a service which audits the API calls made to AWS resources.  Some of the information included in the events is the action taken, the resource the action was taken upon, the security principal that made the action, the date and time, and the source IP address of the security principal who performed the action.  The CloudWatch service allows for monitoring of metrics and optionally triggering events based upon metrics reaching specific thresholds.  The IAM service is AWS’s identity store for the cloud management layer.

Now that we have a basic understanding of the services, let’s look at the permissions Microsoft is requiring for CAS to do its thing.  The CloudTrail permissions of DescribeTrails, LookupEvents, and GetTrailStatus allow CAS to query for all trails enabled on an AWS account (CloudTrail is enabled by default on all AWS resources), look up events in a trail, and get information about a trail such as start and stop logging times.  The CloudWatch permissions of Describe* and Get* are fancy ways of asking for READ permissions on CloudWatch resources.  These permissions include describe-alarm-history, describe-alarms, describe-alarms-for-metric, get-dashboard, and get-metric-statistics.  The IAM permissions are similar to what’s being asked for in CloudWatch, basically asking for full read.
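Pulling that together, the policy document looks something like the sketch below, shown here with the New-IAMPolicy cmdlet from the AWS Tools for PowerShell.  The exact action list is my reconstruction from the permissions discussed above, so treat it as an approximation of what Microsoft’s guide specifies.

# Reconstruction of the policy from the permissions discussed above
$policyDocument = @'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudtrail:DescribeTrails",
        "cloudtrail:LookupEvents",
        "cloudtrail:GetTrailStatus",
        "cloudwatch:Describe*",
        "cloudwatch:Get*",
        "iam:List*",
        "iam:Get*"
      ],
      "Resource": "*"
    }
  ]
}
'@

# Create the customer-managed policy that will be attached to the CAS security principal
New-IAMPolicy -PolicyName 'MicrosoftCloudAppSecurityRead' -PolicyDocument $policyDocument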

Step number 11 instructs us to create a new CloudTrail trail.  AWS by default audits all events across all resources and stores them for 90 days.  Trails enable you to direct events captured by CloudTrail to an S3 bucket for archiving, analysis, and responding to events.

awscas2.png

The trail created is consumed by CAS to read the information captured via CloudTrail.  The permissions requested above become a bit clearer now that we see CAS is requesting read access for all trails across an account for monitoring goodness.  I’m unclear as to why CAS is asking for read on CloudWatch alarms, unless it has some integration in which it monitors and reports on alarms configured for an AWS account.  The IAM read permissions are required so it can pull user information to use for the User Groups capability.

After the security principal is created and a sample trail is set up, it’s time to configure the connector for CAS.  Steps 12 – 15 walk through the process.  When it is complete, AWS shows as a connected app.

awscas3.png

After a few hours data will start to trickle in from AWS.  Navigating to the Users and Accounts section shows all of the accounts found in the IAM instance for my AWS account.  Envision this as your one-stop shop for identifying all of the user accounts across your many cloud services.  A single pane of glass for identity across SaaS.

awscas4.png

On the Activity Log I see all of the API activities captured by CloudTrail.  If I wanted to capture more audit information, I could enable CloudTrail for the relevant resource and point it to the trail I configured for CAS.  I haven’t tested what CAS does with multiple trails, but based upon the permissions we configured when we set up the security principal, it should technically be able to pull from any trail we create.

awscas7.png

Since the CAS and AWS integration is limited to pulling logging information, let’s walk through an example of how we could use the data.  Take an example where an organization has a policy that the AWS root user should not be used for administrative activities due to the level of access the account gets by default.  The organization creates AWS IAM user accounts for each of its administrators who administer the cloud management layer.  In this scenario we can create a new policy in CAS to detect and alert on instances where the AWS root user is used.

First we navigate to the Policies page under the Control section of CAS.

awscas6.png

On the Policies page we’re going to create a new policy with the settings in the image below.  We’ll designate this as a high severity privileged account alert.  We’re interested in knowing anytime the account is used, so we choose the Single Activity option.

awscas7

We’ll pretend we were smart users of CAS and let it collect data for a few weeks to get a sampling of the types of events which are captured and to give us some data to analyze.  We also went the extra mile and leveraged the ability of CAS to pull in user information from AWS IAM such that we can choose the appropriate users from the drop-down menus.

Since this is a demo and my AWS lab has a lot of activity by the root account we’re going to limit our alerts to the creation of new AWS IAM users.  To do that we set our filter to look for an Activity type equal to Create user.  Our main goal is to capture usage of the root account so we add another filter rule that searches for a User with the name equal to aws root user where it is the actor in an event.

awscas8.png

Finally we configure the alert to send an email to the administrator when the event occurs.  The governance capabilities don’t come into play in this use case.

awscas9

Next we jump back to AWS and create a new AWS IAM user named testuser1.  A few minutes after the user is created we see the event appearing in CloudTrail.

awscas10

After a few minutes, CAS generates an alert and I receive the email seen in the image below.  I’m given information as to the activity, the app, the date and time it was performed, and the client’s IP address.

awscas11.png

If I bounce back to CAS I see one new Alert.  Navigating to the alert I’m able to dismiss it, adjust the policy that generated it, or resolve it and add some notes to the resolution.

awscas12.png

I also have the option to dig deeper to see some of the patterns of the user’s behavior or the pattern of the behaviors from a specific IP address as seen below.

awscas13.png

awscas14.png

All this information is great, but what can we do with it?  In this example, it delivers visibility into the administrative activities occurring at the AWS cloud management layer by centralizing the data into a single repository to which I can then send other data such as O365 activity, Box, Salesforce, etc.  By centralizing the information I can begin doing some behavioral analytics to develop standard patterns of behavior for my user base.  Understanding standard behavior patterns is key to staying ahead of the bad guys, whether they be insiders or outsiders.  I can search for deviations from standard patterns to detect a threat before it becomes too widespread.  I can also be proactive about putting alerts and enforcement in place (available for other app connectors in CAS but not AWS at this time) to stop the behavior before the threat is realized.  If I supplemented this data with log information from my on-premises proxy via Cloud App Discovery, I’d get an even larger sampling, improving the quality of the data as well as giving me insight into shadow IT.  Pulling those “shadow” cloud solutions into the light allows me to ensure the usage of the services complies with organizational policies and opens up the opportunity of reducing costs by eliminating redundant services.

Microsoft categorizes the capabilities that help realize these benefits as the Discover and Investigate capabilities of CAS. The solution also offers a growing number of enforcement mechanisms (Microsoft categorized these enforcement mechanisms as Control) which add a whole other layer of power behind the solution.  Due to the limited integration with AWS I can’t demo those capabilities with this post.  I’ll cover those in a future post.

I hope this post helped you better understand the value that CASBs/CSGs like Microsoft’s Cloud App Security can bring to the table.  While the product is still very new and a bit sparse on support for 3rd party applications, the list is growing every day.  I predict the capabilities provided by technology such as Microsoft’s Cloud App Security will be as standard to IT as a firewall in the years to come.  If you’re already in Office 365, you should be ensuring you integrate these capabilities into your arsenal to understand the value they can bring to your business.

Thanks and have a wonderful week!

Deep Dive into Azure AD Domain Services – Part 2

Welcome back to part 2 of my series on Microsoft’s managed services offering, Azure Active Directory Domain Services (AAD DS).  In my first post I covered some of the basic configuration settings of a default service instance.  In this post I’m going to dig a bit deeper and look at network flows, what type of secure tunnels are available for LDAPS, and which authentication protocols and supporting cipher suites are configured for the service.

To perform these tests I leveraged a few different tools.  For a port scanner I used Zenmap.  To examine the protocols and cipher suites supported by the LDAPS service I used a custom openssl binary running on an Ubuntu VM in Azure.  For examination of the authentication protocol support I used Samba’s smbclient running on the Ubuntu VM in combination with WinSCP for file transfer, tcpdump for packet capture, and WireShark for packet analysis.

Let’s start off with examining the open ports since it takes the least amount of effort.  To do that I start up Zenmap, set the target to one of the domain controllers’ (DCs) IP addresses, choose the intense profile (why not?), and hit scan.  Once the scan is complete the results are displayed.

2aad1

Navigating to the Ports / Hosts tab displays the open ports.  All but one of them are straight out of the standard required ports you’d see open on a Windows Server functioning as an Active Directory DC.  An open port 443 deserves more investigation.

2aad2.png

Let’s start with the obvious and attempt to hit the IP over an HTTPS connection but no luck there.

2aad3.png

Let’s break out Fiddler and hit it again.  If we look at the first session where we build the secure tunnel to the website we see some of the details for the certificate being used to secure the session.  Opening the TextView tab of the response shows a Subject of CN=DCaaS Fleet Dc Identity Cert – 0593c62a-e713-4e56-a1be-0ef78f1a2793.  Domain Controller-as-a-Service, I like it Microsoft.  Additionally, Fiddler identifies the web platform as the Microsoft HTTP Server API (HTTP.SYS).  Microsoft has been doing a lot more with that API since it’s much more lightweight than IIS.  I wanted to take a closer look at the certificate so I opened the website in Dev mode in Chrome and exported it.  The EKUs are normal for a standard use certificate and it’s self-signed and untrusted on my system.  The fact that the certificate is untrusted and Microsoft isn’t rolling it out to domain-joined members tells me whatever service is running on the port isn’t for my consumption.

So what’s running on that port?  I have no idea.  The use of the HTTP Server API and a self-signed certificate with a subject specific to the managed domain service tells me it’s providing access to some type of internal management service Microsoft is using to orchestrate the managed domain controllers.  If anyone has more info on this, I’d love to hear it.

2aad4.png

Let’s now take a look at how Microsoft did at securing LDAPS connectivity to the managed domain.  LDAPS is not enabled by default in the managed domain and needs to be configured through the Azure AD Domain Services blade per these instructions.  Oddly enough Microsoft provides an option to expose LDAPS over the Internet.  Why any sane human being would ever do this, I don’t know but we’ll cover that in a later post.

I wanted to test SSLv3 and up and I didn’t want to spend time manipulating registry entries on a Windows client so I decided to spin up an Ubuntu Server 17.10 VM in Azure.  While the Ubuntu VM was spinning up, I created a certificate to be used for LDAPS using the PowerShell command referenced in the Microsoft article and enabled LDAPS through the Azure AD Domain Services resource in the Azure Portal.  I did not enable LDAPS for the Internet for these initial tests.

After adding the certificate used by LDAPS to the trusted certificate store on the Windows Server, I opened LDP.EXE and tried establishing an LDAPS connection over port 636; we get a successful connection.

2aad5.png

Once I verified the managed domain was now supporting LDAPS connections, I switched over to the Ubuntu box via an SSH session.  Ubuntu removed SSLv3 support from the OpenSSL binary that comes pre-packaged with Ubuntu, so to test it I needed to build another OpenSSL binary.  Thankfully some kind soul out there on the Interwebz documented how to do exactly that without overwriting the existing version.  Before I could build a new binary I had to re-install the Make package and add the GNU Compiler Collection (GCC) package using the two commands below.

  • sudo apt-get install --reinstall make
  • sudo apt-get install gcc

After the two packages were installed I built the new binary using the instructions in the link, tested the command, and validated the binary now includes SSLv3.

2aad6.png

After Poodle hit the news back in 2014, Microsoft along with the rest of the tech industry advised SSLv3 be disabled.  Thankfully this basic well known vulnerability has been covered and SSLv3 is disabled.

2aad7.png
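For what it’s worth, a similar probe can be scripted from Windows with .NET’s SslStream; the caveat is that SCHANNEL on the client must still allow SSLv3 for the handshake attempt to actually go on the wire, which is exactly why I avoided the Windows route.  The address below is a placeholder.

# Attempt an SSLv3-only handshake against the LDAPS endpoint (placeholder IP)
$client = New-Object System.Net.Sockets.TcpClient('10.0.0.4', 636)

# The validation callback accepts any certificate; we only care whether the handshake completes
$ssl = New-Object System.Net.Security.SslStream($client.GetStream(), $false, { $true })

try {
    $ssl.AuthenticateAsClient('ldaps-probe', $null, [System.Security.Authentication.SslProtocols]::Ssl3, $false)
    "Handshake succeeded - SSLv3 is enabled ($($ssl.SslProtocol))"
}
catch {
    'Handshake failed - SSLv3 appears to be disabled'
}
finally {
    $client.Close()
}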

SSLv3 is disabled, but what about TLS 1.0, 1.1, and 1.2?  How about the cipher suites?  Are they aligned with NIST guidance?  To test that I used a tool named TestSSLServer by Thomas Pornin.  It’s a simple command line tool which makes cycling through the available cipher suites quick and easy.

2aad8.png

The options I chose perform the following actions:

  • -all -> Perform an “exhaustive” search across cipher suites
  • -t 1 -> Space out the connections by one second
  • -min tlsv1 -> Start with TLSv1

The command produces the output below.

TLSv1.0:
server selection: enforce server preferences
3f- (key: RSA) ECDHE_RSA_WITH_AES_256_CBC_SHA
3f- (key: RSA) ECDHE_RSA_WITH_AES_128_CBC_SHA
3f- (key: RSA) DHE_RSA_WITH_AES_256_CBC_SHA
3f- (key: RSA) DHE_RSA_WITH_AES_128_CBC_SHA
3-- (key: RSA) RSA_WITH_AES_256_CBC_SHA
3-- (key: RSA) RSA_WITH_AES_128_CBC_SHA
3-- (key: RSA) RSA_WITH_3DES_EDE_CBC_SHA
3-- (key: RSA) RSA_WITH_RC4_128_SHA
3-- (key: RSA) RSA_WITH_RC4_128_MD5
TLSv1.1: idem
TLSv1.2:
server selection: enforce server preferences
3f- (key: RSA) ECDHE_RSA_WITH_AES_256_CBC_SHA384
3f- (key: RSA) ECDHE_RSA_WITH_AES_128_CBC_SHA256
3f- (key: RSA) ECDHE_RSA_WITH_AES_256_CBC_SHA
3f- (key: RSA) ECDHE_RSA_WITH_AES_128_CBC_SHA
3f- (key: RSA) DHE_RSA_WITH_AES_256_GCM_SHA384
3f- (key: RSA) DHE_RSA_WITH_AES_128_GCM_SHA256
3f- (key: RSA) DHE_RSA_WITH_AES_256_CBC_SHA
3f- (key: RSA) DHE_RSA_WITH_AES_128_CBC_SHA
3-- (key: RSA) RSA_WITH_AES_256_GCM_SHA384
3-- (key: RSA) RSA_WITH_AES_128_GCM_SHA256
3-- (key: RSA) RSA_WITH_AES_256_CBC_SHA256
3-- (key: RSA) RSA_WITH_AES_128_CBC_SHA256
3-- (key: RSA) RSA_WITH_AES_256_CBC_SHA
3-- (key: RSA) RSA_WITH_AES_128_CBC_SHA
3-- (key: RSA) RSA_WITH_3DES_EDE_CBC_SHA
3-- (key: RSA) RSA_WITH_RC4_128_SHA
3-- (key: RSA) RSA_WITH_RC4_128_MD5

As can be seen from the output above, Microsoft is still supporting the RC4 cipher suites in the managed domain.  RC4 has been known to be a vulnerable algorithm for years now and it’s disappointing to see it still supported, especially since I haven’t seen any options available to disable it within the managed domain.  While 3DES still has a fair amount of usage, there have been documented vulnerabilities, and NIST plans to disallow it for TLS in the near future.  While commercial customers may be more willing to deal with the continued use of these algorithms, government entities will not.

Let's now jump over to Kerberos and check out what encryption types are supported by the managed DC.  For that we pull up ADUC and check the msDS-SupportedEncryptionTypes attribute of the DC's computer object.  The attribute is set to a value of 28 (0x1C), which is the default for Windows Server 2012 R2 DCs and is the sum of the flags 0x4, 0x8, and 0x10.  In ADUC we can see that this translates to support for the following encryption types (a raw LDAP query for the same attribute is sketched after the list):

  • RC4_HMAC_MD5 (0x4)
  • AES128_CTS_HMAC_SHA1_96 (0x8)
  • AES256_CTS_HMAC_SHA1_96 (0x10)
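
For the curious, you can pull the same attribute from the Ubuntu box over LDAPS.  This is a hedged sketch assuming the ldap-utils package is installed; the domain name, bind account, and base DN are placeholders, and the primaryGroupID=516 filter matches domain controller computer objects:

ldapsearch -H ldaps://mymanageddomain.com -D "admin@mymanageddomain.com" -W \
  -b "DC=mymanageddomain,DC=com" "(primaryGroupID=516)" msDS-SupportedEncryptionTypes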

Again we see RC4 support, which should be a big no-no in the year 2018.  This is a risk that orgs using AAD DS will need to live with unless Microsoft adds some options to harden the managed DCs.

Last but not least I was curious if Microsoft had support for NTLMv1. By default Windows Server 2012 R2 supports NTLMv1 due to requirements for backwards compatibility. Microsoft has long recommended disabling NTLMv1 due to the documented issues with the security of the protocol. So has Microsoft followed their own advice in the AAD DS environment?

To check this I'm going to use Samba's smbclient package on the Ubuntu VM.  I'll use smbclient to connect to the DC's share from the Ubuntu box using the NTLM protocol.  Samba enforces the use of NTLMv2 in smbclient by default, so I needed to modify the global section of the smb.conf file by adding client ntlmv2 auth = no.  This option disables NTLMv2 in smbclient and forces it to use NTLMv1; the resulting stanza is shown below.
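
The edit amounts to the following (only the added line matters; any other global settings stay as they were):

[global]
    # force smbclient to offer NTLMv1 instead of NTLMv2 (testing only)
    client ntlmv2 auth = no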

2aad9.png

After saving the changes to smb.conf I exited back to the terminal and tried opening a connection with smbclient.  The options I used do the following (the assembled command follows the list):

  • -L -> List the shares on my DC’s IP address
  • -U -> My domain user name
  • -m -> Use the SMB2 protocol
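
Assembled, the command looks something like the line below; the IP address, domain, and user name are placeholders:

smbclient -L 10.0.0.4 -U 'MYDOMAIN\myuser' -m SMB2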

2aad10.png

While I ran the command I also did a packet capture using tcpdump (sketched below), which I moved over to my Windows box using WinSCP.  I then opened the capture with Wireshark and navigated to the packet containing the Session Setup Request.  In the parsed capture we don't see an NTLMv2 Response, which means NTLMv1 was used to authenticate to the domain controller, indicating NTLMv1 is supported by the managed domain controllers.
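
The capture itself was nothing fancy; something along these lines, where the interface name, DC IP, and file name are assumptions:

sudo tcpdump -i eth0 -w ntlmv1-test.pcap host 10.0.0.4 and port 445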

2aad11.png

Based upon what I've observed from poking around and running these tests, I'm fairly confident Microsoft is using a very out-of-the-box configuration for the managed Windows Active Directory domain.  There doesn't seem to be much of an attempt to harden the domain against some of the older and well-known risks.  I don't anticipate this offering being very appealing to organizations with strong security requirements.  Microsoft should look to offer some hardening options configurable through the Azure Portal.  Those hardening options are going to need to include some type of access to the logs, like I mentioned in my last post.  Anyone who has tried to rid their network of insecure cipher suites or older authentication protocols knows how important access to the domain controller logs is to the success of that type of effort.

My next post will be the final post in this series.  I’ll cover the option Microsoft provides to expose LDAPS to the Internet (WHY OH WHY WOULD YOU DO THAT?), summarize my findings, and mention a few other interesting things I came across during the study for this series.

Thanks!

Integrating Azure AD and G-Suite – Automated Provisioning

Integrating Azure AD and G-Suite – Automated Provisioning

Today I'll wrap up my series on Azure Active Directory's (Azure AD) integration with Google's G-Suite.  In my first entry I covered the single sign-on (SSO) integration, and in my second and third posts I gave an overview of Google's Cloud Platform (GCP) and demonstrated how to access a G-Suite domain's resources through Google's APIs.  In this post I'm going to cover how Microsoft provides automated provisioning of users, groups, and contacts.  If you haven't read through my posts on Google's API (part 1, part 2), give them a read so you're more familiar with the concepts I'll be covering throughout this post.

SSO using SAML or OpenID Connect is a common capability of almost every cloud solution these days.  While that solves the authentication problem, the provisioning of users, groups, and other identity-related objects remains a challenge, largely due to the lack of widely accepted standards (SCIM has a ways to go folks).  Vendors have a variety of workarounds, including making LDAP calls back to a traditional on-premises directory (YUCK), supporting uploads of CSV files, or creating and updating identities in their local databases based upon the information contained in a SAML assertion or OpenID Connect ID token.  A growing number of vendors are exposing these capabilities via a web-based API.  Google falls into this category and provides a robust selection of APIs to interact with its services, from Gmail to resources within Google Cloud Platform, and yes, even Google G-Suite.

If you’re a frequent user of Azure AD, you’ll have run into the automatic provisioning capabilities it brings to the table across a wide range of cloud services.  In a previous series I covered its provisioning capabilities with Amazon Web Services.  This is another use case where Microsoft leverages a third party’s robust API to simplify the identity management lifecycle.

In the SSO Quickstart Guide Microsoft provides for G-Suite it erroneously states:

“Google Apps supports auto provisioning, which is by default enabled. There is no action for you in this section. If a user doesn’t already exist in Google Apps Software, a new one is created when you attempt to access Google Apps Software.”

This simply isn't true.  While auto provisioning via the API can be done, it's a feature you need to code against and isn't enabled by default.  When you enable SSO to G-Suite and attempt to access it using an assertion containing a claim for a user that does not exist within the G-Suite domain, you receive the error below.

google4int1

This establishes what we already knew: identities representing our users attempting SSO to G-Suite need to be created before the users can authenticate.  Microsoft provides a Quickstart for auto provisioning into G-Suite.  The document does a good job telling you where to click and gives some basic advice, but is light on the detail of what's happening in the background and how it all works.

Let’s take a deeper look shall we?

If you haven’t already, add the Google Apps application from the Azure AD Application Gallery.  Once the application is added navigate to the blade for the application and select the Provisioning page.  Switch the provisioning mode from manual to automatic.

google4int2.png

Right off the bat we see a big blue Authorize button which tells us that Microsoft is not using the service accounts pattern for accessing the Google API.  Google’s recommendation is to use the service account pattern when accessing project-based data rather than user specific data.  The argument can be made that G-Suite data doesn’t fall under project-based data and the service account credential doesn’t make sense.  Additionally using a service account would require granting the account domain-wide delegation for the G-Suite domain allowing the account to impersonate any user in the G-Suite domain.  Not really ideal, especially from an auditing perspective.

By using the Server-side Web Apps pattern, a new user can be created in G-Suite and designated as the "Azure AD account".  The downside of this is you're stuck paying Google $10.00 a month for a non-human account.  The price of good security practices I guess.

google4int3.png

Microsoft documentation states that the account must be granted the Super Admin role.  I found this surprising since you're effectively giving the account god rights to your G-Suite domain.  It got me wondering: what authorization scopes is Microsoft asking for?  Let's break out Fiddler and walk through the process that kicks off after clicking the Authorize button.

A new window pops up from Google requesting that I authenticate.  Here Azure AD, acting as the OAuth client, has made an authorization request and sent me along with it over to Google, which is acting as the authorization server, to authenticate, consent to the access, and take the next step in the authorization flow.

google4int4

When I switch over to Fiddler I see a number of sessions have been captured.  Opening the WebForms window of the first session to accounts.google.com reveals a number of parameters that were passed to Google.

google4int5

The first parameter gives us the three authorization scopes Azure AD is looking for.  The admin.directory.group and admin.directory.user scopes are both related to the Google Directory API, which makes sense if it wants to manage users and groups.  The /m8/feeds scope grants it access to manage contacts via the Google Contacts API.  This is an older API that uses XML instead of JSON to exchange information and looks like it has been/is being replaced by the Google People API.

Management of contacts via this API is where the requirement for an account in the Super Admin role originates.  Google documentation states that management of domain shared contacts via the /m8/feeds API requires an administrator username and password for Google Apps.  I couldn't find any privilege in G-Suite which could be added to a custom admin role that mentioned contacts.  Given Google's own documentation along with the lack of an obvious privilege option, this may be a hard limitation of G-Suite.  Too bad, too, because there are privilege options for both Users and Groups.  Either way, the request for this authorization scope drives the requirement for Super Admin for the account Azure AD will be using for delegated access.

The redirect_uri is where Google sends the user after the authorization request is complete.  The response_type tells us Azure AD and Google are using the OAuth authorization code grant flow.  The client_id is the unique identifier Google has assigned to Azure AD in whatever project Microsoft has it built in.  The approval_prompt setting of force tells Google to display the consent window and the data Azure AD wants to access.  Lastly, the access_type setting of offline allows Azure AD to access the APIs when the user isn't available to authenticate, via a refresh token that will be issued along with the access token.  Let's pay attention to that one once the consent screen pops up.
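
Putting those parameters together, the authorization request looks roughly like this.  It's a reconstruction from the Fiddler capture rather than a verbatim copy; the client_id and redirect_uri values are elided, and the line breaks are added for readability:

https://accounts.google.com/o/oauth2/auth
  ?client_id=<microsoft-client-id>
  &redirect_uri=<azure-ad-reply-url>
  &response_type=code
  &scope=https://www.googleapis.com/auth/admin.directory.user
         https://www.googleapis.com/auth/admin.directory.group
         https://www.google.com/m8/feeds
  &approval_prompt=force
  &access_type=offline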

I plug in valid Super Admin credentials for my G-Suite domain, authenticate, and receive the warning below.  This indicates that Microsoft has been naughty and hasn't had their application reviewed by Google.  This was made a requirement back in July of 2017… so yeah… Microsoft maybe get on that?

google4int6.png

To progress to the consent screen I hit the Advanced link in the lower left and opt to continue.  The consent window then pops up.

google4int7.png

Here I see that Microsoft has registered their application with a friendly name of azure.com.  I'm also shown the scopes that the application wants to access, which jive with the authorization scopes we saw in Fiddler.  Remember that offline access Microsoft asked for?  See it mentioned anywhere in this consent page that I'm delegating this access to Microsoft perpetually for as long as they keep requesting refresh tokens?  This is one of my problems with OAuth and consent windows like this.  It's entirely too vague as to how long I'm granting the application access to my data or to do things as me.  Expect to see these OAuth consent attacks continue to grow in use moving forward.  Why worry about compromising the user's credentials when I can display a vague consent window and have them grant me access directly to their data?  Totally safe.

Hopping back to the window, I click the Allow button and the consent window closes.  Looking back at Fiddler I see that an authorization code came back and was posted to the redirect_uri designated in the original authorization request.

google4int8.png
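
From here Azure AD redeems that code for tokens server-side, so the exchange doesn't show up in my Fiddler trace.  Based on Google's OAuth 2.0 documentation it looks roughly like the sketch below; all of the values are placeholders:

curl -s https://accounts.google.com/o/oauth2/token \
  -d grant_type=authorization_code \
  -d code=<authorization-code> \
  -d client_id=<microsoft-client-id> \
  -d client_secret=<microsoft-client-secret> \
  -d redirect_uri=<azure-ad-reply-url>
# because the request asked for access_type=offline, the JSON response
# includes a refresh_token alongside the short-lived access_token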

Switching back to the browser window for the Azure Portal, the screen updates and the Test Connection button becomes available.  Clicking the button initiates a quick check where Azure AD obtains an access token for the scopes it requires, unseen to the user.  After the successful test I hit the Save button.

google4int9.png

Switching to the browser window for the Google Admin Portal, let's take a look at the data that's been updated for the user I used to authorize Microsoft's access.  For that I select the user, go to the Security section, and see that the Azure Active Directory service is authorized for the contacts, user, and group management scopes.

google4int10.png

Switching back to the browser window for the Azure Portal I see some additional options are now available.

google4int11.png

The mappings are really interesting and will look familiar to you if you’ve ever done anything with an identity management tool like Microsoft Identity Manager (MIM) or even Azure AD Sync.  The user mappings for example show which attributes in Azure AD are used to populate the attributes in G-Suite.

google4int12.png

The attributes that have the Delete button grayed out are required by Google in order to provision new user accounts in a G-Suite domain.  The options available for deletion are additional data beyond what is required that Microsoft can populate on user accounts it provisions into G-Suite.  Selecting the Show advanced options button allows you to play with the schema Microsoft is using for G-Suite.  What I found interesting about this schema is that it doesn't match the resource representation Google provides for the API.  It would have been nice to match the two to make it more consumable, but they're probably working off values used in the old Google Provisioning API, or they don't envision many people being nerdy enough to poke around the schema.

Next up I toggle the provisioning status from Off to On and leave the Scope option set to sync only the assigned users and groups.

google4int13.png

I then hit the Save button to save the new settings, and after a minute my initial synchronization is successful.  Nothing was actually synchronized, but it shows the credentials correctly allowed Azure AD to hit my G-Suite domain over the appropriate APIs with the appropriate access.

google4int14.png

So an empty synchronization works, but how about one with a user?  I created a new user named dutch.schaefer@geekintheweeds.com with only the required attributes of display name and user principal name populated, assigned the new user to the Google Apps application, and gave Azure AD a night to run another sync.  Earlier tonight I checked the provisioning summary and verified the sync grabbed the new user.

google4int15.png

Review of the audit logs for the Google Apps application shows that the new user was exported around 11PM EST last night.  If you're curious, the sync between Azure AD and G-Suite occurs about every 20 minutes.

google4int16.png

Notice that the FamilyName and GivenName attributes are set to a period.  I never set the first or last name attributes of the user in Azure AD, so both attributes were blank.  If we bounce back to the attribute mapping and look at the attributes for Google Apps, we see that FamilyName and GivenName are both required, meaning Azure AD had to populate them with something.  Different schemas, different requirements.

google4int17
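
Under the covers this export amounts to a users.insert call against Google's Directory API.  A hedged sketch of what that request likely looks like for this user is below; the endpoint and required fields come from the Directory API documentation, while the token and password values are placeholders:

curl -X POST "https://www.googleapis.com/admin/directory/v1/users" \
  -H "Authorization: Bearer <access-token>" \
  -H "Content-Type: application/json" \
  -d '{
        "primaryEmail": "dutch.schaefer@geekintheweeds.com",
        "name": { "givenName": ".", "familyName": "." },
        "password": "<generated-initial-password>"
      }'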

Switching over to the Google Admin Console I see that the new user was successfully provisioned into G-Suite.

google4int18.png

Pretty neat overall.  Let’s take a look at what we learned:

  • Azure AD supports single sign-on to G-Suite via SAML using a service provider-initiated flow where Azure AD acts as the identity provider and G-Suite acts as the service provider.
  • A user object with a login id matching the user’s login id in Azure Active Directory must be created in G-Suite before single sign-on will work.
  • Google provides a number of libraries for its API and the Google API Explorer should be used for experimentation with Google’s APIs.
  • Google’s Directory API is used by Azure AD to provision users and groups into a G-Suite domain.
  • Google’s Contacts API is used by Azure AD to provision contacts into a G-Suite domain.
  • A user holding the Super Admin role in the G-Suite domain must be used to authorize Azure AD to perform provisioning activities.  The Super Admin role is required due to the usage of the Google Contact API.
  • Azure AD's authorization request asks for offline access; the refresh token it receives is used to obtain new access tokens so the sync process can run on a regular basis without requiring re-authorization.
  • Best practice is to dedicate a user account in your G-Suite domain to Azure AD.
  • Azure AD uses the Server-side Web Apps pattern for accessing Google's APIs.
  • The provisioning process will populate a period for any attribute that is required in G-Suite but does not have a value in the corresponding attribute in Azure AD.
  • The provisioning process runs a sync every 20 minutes.

Even though my coding is horrendous, I absolutely loved experimenting with the Google API.  It’s easy to realize why APIs are becoming so critical to a good solution.  With the increased usage of a wide variety of products in a business, being able to plug and play applications is a must.  The provisioning aspect Azure AD demonstrates here is a great example of the opportunities provided when critical functionality is exposed for programmatic access.

I hope you enjoyed the series, learned a bit more about both solutions, and got some insight into what’s going on behind the scenes.