Integrating Azure AD and G-Suite – Automated Provisioning

Today I'll wrap up my series on Azure Active Directory's (Azure AD) integration with Google's G-Suite.  In my first entry I covered the single sign-on (SSO) integration, and in my second and third posts I gave an overview of Google's Cloud Platform (GCP) and demonstrated how to access a G-Suite domain's resources through Google's APIs.  In this post I'm going to cover how Microsoft provides automated provisioning of users, groups, and contacts.  If you haven't read through my posts on Google's API (part 1, part 2), take a read through so you're more familiar with the concepts I'll be covering throughout this post.

SSO using SAML or OpenID Connect is a common capability of almost every cloud solution these days.  While that solves the authentication problem, the provisioning of users, groups, and other identity-related objects remains a challenge, largely due to the lack of widely accepted standards (SCIM has a ways to go, folks).  Vendors have a variety of workarounds, including making LDAP calls back to a traditional on-premises directory (YUCK), supporting uploads of CSV files, or creating and updating identities in their local databases based upon the information contained in a SAML assertion or OpenID Connect ID token.  A growing number of vendors are exposing these capabilities via a web-based API.  Google falls into this category and provides a robust selection of APIs to interact with its services, from Gmail to resources within Google Cloud Platform, and yes, even Google G-Suite.

If you’re a frequent user of Azure AD, you’ll have run into the automatic provisioning capabilities it brings to the table across a wide range of cloud services.  In a previous series I covered its provisioning capabilities with Amazon Web Services.  This is another use case where Microsoft leverages a third party’s robust API to simplify the identity management lifecycle.

In the SSO Quickstart Guide Microsoft provides for G-Suite it erroneously states:

“Google Apps supports auto provisioning, which is by default enabled. There is no action for you in this section. If a user doesn’t already exist in Google Apps Software, a new one is created when you attempt to access Google Apps Software.”

This simply isn't true.  While auto provisioning via the API can be done, it's a feature you have to write code against and it isn't enabled by default.  When you enable SSO to G-Suite and attempt to access it using an assertion containing a claim for a user that does not exist within the G-Suite domain, you receive the error below.

[Screenshot: G-Suite error displayed when the SSO user doesn't exist in the domain]

This confirms what we already knew: identities representing our users must be created in G-Suite before those users can authenticate via SSO.  Microsoft provides a Quickstart for auto provisioning into G-Suite.  The document does a good job of telling you where to click and giving some basic advice, but it lacks detail on what's happening in the background and how it all works.

Let's take a deeper look, shall we?

If you haven't already, add the Google Apps application from the Azure AD Application Gallery.  Once the application is added, navigate to the blade for the application and select the Provisioning page.  Switch the provisioning mode from manual to automatic.

[Screenshot: the application's Provisioning page with the mode set to Automatic]

Right off the bat we see a big blue Authorize button, which tells us that Microsoft is not using the service account pattern for accessing the Google API.  Google's recommendation is to use the service account pattern when accessing project-based data rather than user-specific data.  The argument can be made that G-Suite data doesn't fall under project-based data, so a service account credential doesn't make sense.  Additionally, using a service account would require granting the account domain-wide delegation for the G-Suite domain, allowing the account to impersonate any user in the G-Suite domain.  Not really ideal, especially from an auditing perspective.

By using the server-side web apps pattern, a dedicated user can be created in G-Suite and assigned as the "Azure AD account".  The downside of this is you're stuck paying Google $10.00 a month for a non-human account.  The price of good security practices, I guess.

[Screenshot]

Microsoft documentation states that the account must be granted the Super Admin role.  I found this surprising since you're effectively giving the account god rights to your G-Suite domain.  It got me wondering what authorization scopes Microsoft is asking for.  Let's break out Fiddler and walk through the process that kicks off after clicking the Authorize button.

A new window pops up from Google requesting that I authenticate.  Here Azure AD, acting as the OAuth client, has made an authorization request and sent me along with the request over to Google, which is acting as the authorization server, to authenticate, consent to the access, and take the next step in the authorization flow.

[Screenshot: Google sign-in prompt]

When I switch over to Fiddler I see a number of sessions have been captured.  Opening the WebForms window of the first session to accounts.google.com shows a number of parameters that were passed to Google.

[Screenshot: Fiddler capture showing the authorization request parameters]

The first parameter gives us the three authorization scopes Azure AD is looking for.  The admin.directory.group and admin.directory.user scopes are both related to the Google Directory API, which makes sense given that Azure AD wants to manage users and groups.  The /m8/feeds scope grants access to manage contacts via the Google Contacts API.  This is an older API that uses XML instead of JSON to exchange information and looks like it has been (or is being) replaced by the Google People API.

Management of contacts via this API is where the requirement for an account holding the Super Admin role originates.  Google documentation states that management of domain shared contacts via the /m8/feeds API requires an administrator username and password for Google Apps.  I couldn't find any privilege in G-Suite that could be added to a custom admin role that mentioned contacts.  Given Google's own documentation, along with the lack of an obvious privilege option, this may be a hard limitation of G-Suite.  Too bad, because privilege options do exist for both Users and Groups.  Either way, the request for this authorization scope drives the requirement for Super Admin for the account Azure AD will be using for delegated access.

The redirect_uri is where Google sends the user after the authorization request is complete.  The response_type tells us Azure AD and Google are using the OAuth authorization code grant flow.  The client_id is the unique identifier Google has assigned to Azure AD in whatever project Microsoft has built it in.  The approval_prompt setting of force tells Google to display the consent window along with the data Azure AD wants to access.  Lastly, the access_type setting of offline asks Google to issue a refresh token along with the access token, which allows Azure AD to access the APIs later without the user being present to authenticate.  Let's pay attention to that one once the consent screen pops up.
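
Putting those parameters together, the authorization request looks roughly like the following (shown split across lines for readability; the client_id and redirect_uri are placeholders standing in for Microsoft's real registration values, and the space-delimited scopes are URL-encoded on the wire):

https://accounts.google.com/o/oauth2/auth
  ?client_id=<microsoft-client-id>.apps.googleusercontent.com
  &redirect_uri=<azure-ad-reply-url>
  &response_type=code
  &scope=https://www.googleapis.com/auth/admin.directory.user
         https://www.googleapis.com/auth/admin.directory.group
         https://www.google.com/m8/feeds
  &access_type=offline
  &approval_prompt=force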

I plug in valid Super Admin credentials for my G-Suite domain, authenticate, and receive the warning below.  This indicates that Microsoft has been naughty and hasn't had their application reviewed by Google.  This was made a requirement back in July of 2017… so yeah… Microsoft, maybe get on that?

[Screenshot: Google warning that the application has not been verified]

To progress to the consent screen I hit the Advanced link in the lower left and opt to continue.  The consent window then pops up.

[Screenshot: Google consent window listing the requested scopes]

Here I see that Microsoft has registered its application with a friendly name of azure.com.  I'm also shown the scopes the application wants to access, which jive with the authorization scopes we saw in Fiddler.  Remember that offline access Microsoft asked for?  Do you see it mentioned anywhere in this consent page that I'm delegating this access to Microsoft perpetually, for as long as they hold a refresh token?  This is one of my problems with OAuth and consent windows like this: it's entirely too vague as to how long I'm granting the application access to my data or to do things as me.  Expect to see OAuth consent attacks continue to grow in use moving forward.  Why worry about compromising the user's credentials when I can display a vague consent window and have them grant me access directly to their data?  Totally safe.

Hopping back to the window, I click the Allow button and the consent window closes.  Looking back at Fiddler, I see that an authorization code came back and was posted to the redirect_uri designated in the original authorization request.

[Screenshot: Fiddler capture of the authorization code being posted back]
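
For illustration, here's a rough C# sketch of the exchange happening behind the scenes, using the same placeholder values as the authorization URL above (Azure AD performs this server-side; Fiddler only shows us the result):

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class GoogleTokenExchangeSketch
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Redeem the authorization code at Google's token endpoint.
            // Every value below is a placeholder for Microsoft's real registration.
            var body = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "code", "<authorization-code-from-redirect>" },
                { "client_id", "<microsoft-client-id>.apps.googleusercontent.com" },
                { "client_secret", "<client-secret>" },
                { "redirect_uri", "<azure-ad-reply-url>" },
                { "grant_type", "authorization_code" }
            });

            HttpResponseMessage response = await client.PostAsync(
                "https://accounts.google.com/o/oauth2/token", body);

            // Because access_type=offline was requested, the JSON response carries
            // a refresh_token alongside the short-lived access_token.
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}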

Switching back to the browser window for the Azure Portal, the screen updates and the Test Connection button becomes available.  Clicking the button initiates a quick check in which Azure AD, unseen by the user, obtains an access token for the scopes it requires.  After the successful test I hit the Save button.

[Screenshot: successful connection test in the Azure Portal]
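
That behind-the-scenes token acquisition is presumably the refresh token grant at work; it's the same mechanism that lets the sync engine run on a schedule with no admin present.  A minimal sketch, again with placeholder values:

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

static class GoogleTokenRefreshSketch
{
    // Trades the long-lived refresh token from the initial authorization
    // for a fresh short-lived access token.
    public static async Task<string> RefreshAccessTokenAsync(HttpClient client)
    {
        var body = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            { "refresh_token", "<refresh-token-from-initial-authorization>" },
            { "client_id", "<microsoft-client-id>.apps.googleusercontent.com" },
            { "client_secret", "<client-secret>" },
            { "grant_type", "refresh_token" }
        });

        HttpResponseMessage response = await client.PostAsync(
            "https://accounts.google.com/o/oauth2/token", body);
        return await response.Content.ReadAsStringAsync();
    }
}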

Switching to the browser window for the Google Admin Portal, let's take a look at the data that's been updated for the user I used to authorize Microsoft's access.  For that I select the user, go to the Security section, and can now see that the Azure Active Directory service is authorized to the contacts, user, and group management scopes.

[Screenshot: authorized API scopes in the user's Security section of the Google Admin Console]

Switching back to the browser window for the Azure Portal I see some additional options are now available.

[Screenshot: additional provisioning options in the Azure Portal]

The mappings are really interesting and will look familiar to you if you’ve ever done anything with an identity management tool like Microsoft Identity Manager (MIM) or even Azure AD Sync.  The user mappings for example show which attributes in Azure AD are used to populate the attributes in G-Suite.

[Screenshot: user attribute mappings between Azure AD and Google Apps]

The attributes that have the Delete button grayed out are required by Google in order to provision new user accounts in a G-Suite domain.  The options available for deletion are additional data, beyond what is required, that Microsoft can populate on user accounts it provisions into G-Suite.  Selecting the Show advanced options button allows you to play with the schema Microsoft is using for G-Suite.   What I found interesting about this schema is that it doesn't match the resource representation Google provides for the API.  It would have been nice to match the two to make it more consumable, but they're probably working off values used in the old Google Provisioning API, or they don't envision many people being nerdy enough to poke around the schema.
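
For comparison, a user resource in Google's Directory API looks roughly like the JSON below (trimmed to the fields relevant here).  Note the nested name object versus the flat GivenName and FamilyName attributes in Microsoft's schema:

{
  "primaryEmail": "first.last@geekintheweeds.com",
  "name": {
    "givenName": "First",
    "familyName": "Last"
  },
  "password": "<initial-password>",
  "suspended": false,
  "orgUnitPath": "/"
}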

Next up I toggle the provisioning status from Off to On and leave the Scope option set to sync only the assigned users and groups.

[Screenshot: provisioning status toggled to On with the sync scope setting]

I then hit the Save button to save the new settings, and after a minute my initial synchronization is successful.  Nothing was actually synchronized, but it shows the credentials correctly allowed Azure AD to hit my G-Suite domain over the appropriate APIs with the appropriate access.

[Screenshot: summary of the initial synchronization]

So an empty synchronization works; how about one with a user?  I created a new user named dutch.schaefer@geekintheweeds.com with only the required attributes of display name and user principal name populated, assigned the new user to the Google Apps application, and gave Azure AD a night to run another sync.  Earlier tonight I checked the provisioning summary and verified the sync grabbed the new user.

[Screenshot: provisioning summary showing the new user was picked up]

Review of the audit logs for the Google Apps application shows that the new user was exported around 11PM EST last night.  If you're curious, the sync between Azure AD and G-Suite occurs about every 20 minutes.

[Screenshot: audit log entry for the exported user]

Notice that the FamilyName and GivenName attributes are set to a period.  I never set the first or last name attributes of the user in Azure AD, so both attributes are blank.  If we bounce back to the attribute mapping and look at the attributes for Google Apps, we see that FamilyName and GivenName are both required, meaning Azure AD had to populate them with something.  Different schemas, different requirements.

[Screenshot: Google Apps attribute list showing FamilyName and GivenName as required]

Switching over to the Google Admin Console I see that the new user was successfully provisioned into G-Suite.

[Screenshot: the new user in the Google Admin Console]

Pretty neat overall.  Let’s take a look at what we learned:

  • Azure AD supports single sign-on to G-Suite via SAML using a service provider-initiated flow where Azure AD acts as the identity provider and G-Suite acts as the service provider.
  • A user object with a login id matching the user’s login id in Azure Active Directory must be created in G-Suite before single sign-on will work.
  • Google provides a number of libraries for its API and the Google API Explorer should be used for experimentation with Google’s APIs.
  • Google’s Directory API is used by Azure AD to provision users and groups into a G-Suite domain.
  • Google’s Contacts API is used by Azure AD to provision contacts into a G-Suite domain.
  • A user holding the Super Admin role in the G-Suite domain must be used to authorize Azure AD to perform provisioning activities.  The Super Admin role is required due to the usage of the Google Contact API.
  • Azure AD’s authorization request includes offline access using refresh tokens to request additional access tokens to ensure the sync process can be run on a regular basis without requiring re-authorization.
  • Best practice is to dedicate a user account in your G-Suite domain to Azure AD.
  • Azure AD uses the server-side web apps pattern for accessing Google’s APIs.
  • The provisioning process will populate a period for any attribute that is required in G-Suite but does not have a value in the corresponding attribute in Azure AD.
  • The provisioning process runs a sync every 20 minutes.

Even though my coding is horrendous, I absolutely loved experimenting with the Google API.  It’s easy to realize why APIs are becoming so critical to a good solution.  With the increased usage of a wide variety of products in a business, being able to plug and play applications is a must.  The provisioning aspect Azure AD demonstrates here is a great example of the opportunities provided when critical functionality is exposed for programmatic access.

I hope you enjoyed the series, learned a bit more about both solutions, and got some insight into what’s going on behind the scenes.

 

Integrating Azure AD and AWS – Part 2

Today I will continue the journey into the integration between Azure AD and Amazon Web Services.  In my first entry I covered the reasons why you’d want to integrate Azure AD with AWS and provided a high-level overview of how the solution works.  The remaining entries in this series will cover the steps involved in completing the integration including deep dives into the inner workings of the solution.

Let me start out by talking about the testing environment I’ll be using for this series.

[Diagram: lab environment]

The environment includes three virtual machines (VMs) running on Windows Server 2016 Hyper-V on a server at my house.  The virtual machines consist of three servers running Windows Server 2016: one acting as a domain controller for the journeyofthegeek.local Active Directory (AD) forest, another running Active Directory Federation Services (AD FS) and Azure AD Connect (AADC), and a third running MS SQL Server and IIS.  The IIS instance hosts a .NET sample federated application published by Microsoft.

In Microsoft Azure I have a single Vnet configured for connectivity to my on-premises lab through a site-to-site IPSec virtual private network (VPN) I've set up with pfSense.  Within the Vnet exists a single VM running Windows 10 that is domain-joined to the journeyofthegeek.local AD domain.  The Azure AD tenant providing the identity backend for the Microsoft Azure subscription is synchronized with the journeyofthegeek.local AD domain using Azure AD Connect and is associated with the domain journeyofthegeek.com.  Authentication to the Azure AD tenant is federated using my instance of AD FS.  I'm not synchronizing passwords and am using an alternate login ID, with the user principal name synchronized to Azure AD stored in the AD attribute msDS-CloudExtensionAttribute1.  The reason I'm still configured to use an alternate login ID was some testing I needed to do for a previous project.

I created a single test user in the journeyofthegeek.local Active Directory domain named Rick Sanchez with a user principal name (UPN) of rick.sanchez@journeyofthegeek.local and an msDS-CloudExtensionAttribute1 of rick.sanchez@journeyofthegeek.com.  The only other attribute of note the user has populated is the mail attribute, which has the value rick.sanchez@journeyofthegeek.com.  The user is being synchronized to Azure AD via the Azure AD Connect instance.

In AWS I have a single elastic compute cloud (EC2) instance running Windows Server 2016 within a virtual private cloud (VPC).  I'll be configuring Azure AD as an identity provider associated with the AWS account and associating it with an AWS IAM role named AzureADEC2Admins.  The role will grant full admin rights over the management of the EC2 instances associated to the account via the AmazonEC2FullAccess permissions policy.
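
I'll walk through the AWS-side configuration later in the series, but for context, the trust policy on a role like AzureADEC2Admins ends up looking something like the following (the account ID and provider name are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<account-id>:saml-provider/<provider-name>"
      },
      "Action": "sts:AssumeRoleWithSAML",
      "Condition": {
        "StringEquals": {
          "SAML:aud": "https://signin.aws.amazon.com/saml"
        }
      }
    }
  ]
}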

Let's begin, shall we?

The first step I'll be taking is to log into the Azure Portal as an account that is a member of the global admins and navigate to the Azure Active Directory blade.  From there I select the Enterprise Applications blade and hit the New Application link.

[Screenshot: Enterprise Applications blade in the Azure Portal]

I then search the application gallery for AWS, select the AWS application, accept the default name, and hit the Add button.  Azure AD will proceed to add the application and will then jump to a quick start page.  So what exactly does it mean to add an application to Azure AD?  Good question; to answer it we'll want to use the Azure AD cmdlets.  You can reference this link.

Before we jump into running cmdlets, let's talk very briefly about the concept of application identities in AAD.  If you've managed Active Directory Domain Services (AD DS), you're very familiar with the concept of service accounts.  When you needed an application (let's call it a non-human to be more in line with industry terminology) to access AD-integrated resources directly or on behalf of a user, you would create a security principal to represent the non-human.  That security principal could be a user object, managed service account object, or group managed service account object.  You would then grant that security principal rights and permissions over the resource, or grant it the right to impersonate a user and access the resource on the user's behalf.  The part we want to focus in on is the impersonation, or delegation, to access a resource on behalf of a user.  In AD DS that delegation is accomplished through the Kerberos protocol.

When we shift over to AAD, the same basic concepts still exist: we create a security principal to represent the application and grant that application direct or delegated access to a resource.  The difference is that the protocol handling the access shifts from Kerberos to OAuth 2.0.  One thing many people become confused about is thinking that OAuth handles authentication.  It doesn't.  It has nothing to do with authentication and everything to do with authorization, or more clearly delegation.  When we add an application to AAD, a service principal object and sometimes an application object (in the AWS case, both are created) are created in the AAD tenant to represent the application.  I'm going to speak to the service principal object for the AWS integration, but you can read through this link for a good walkthrough on application and service principal objects and how they differ.

Now back to AWS.  We added the application, so we now have a service principal object in our tenant representing AWS.  Here are a few of the attributes for the object, pulled via PowerShell.

[Screenshot: attributes of the AWS service principal object pulled via PowerShell]

Review of the attributes for the object provides a few pieces of interesting information.   We'll get to experiment more with what these mean when we start doing Fiddler captures, but let's talk a bit about them now.  The AppRoles attribute provides a single default role of msiam_access.  Later on we'll be adding additional roles that will map back to our AWS IAM roles, as sketched below.
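
A sketch of what one of those additional role entries looks like in the AppRoles collection, with placeholder IDs and ARNs; the value carries both the IAM role ARN and the identity provider ARN that end up in the SAML Role claim:

{
  "allowedMemberTypes": [ "User" ],
  "displayName": "AzureADEC2Admins",
  "description": "Grants EC2 administration via the AzureADEC2Admins IAM role",
  "id": "<unique-guid>",
  "isEnabled": true,
  "value": "arn:aws:iam::<account-id>:role/AzureADEC2Admins,arn:aws:iam::<account-id>:saml-provider/<provider-name>"
}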

Next up we have the KeyCredentials attribute, which contains two entries.  This attribute took me a while to work out.  In short, based upon the startDate, I think these two entries reference the self-signed certificate included in the IdP metadata that is created after the application is added to the directory.  I'll cover the IdP metadata in the next entry.

The Oauth2Permissions are a bit funky for this use case, since we're not really allowing the application to access AWS on our behalf but rather asking it to produce a SAML assertion asserting our identity.  Maybe the delegation can be thought of as delegating to Azure AD the right to create logical security tokens representing our users that can be used to assert an identity to AWS.

The PasswordCredentials attribute contains a single entry, which shares the same KeyID as the KeyCredential.  As best I can figure from reading the documentation, this would normally contain entries for client keys when certificate authentication isn't being used.  Given that it contains a single entry with the same KeyID as the KeyCredential used for signing, I can only guess it will contain an entry even when a certificate is used to authenticate the application.

The last attributes of interest are the PreferredTokenSigningKeyThumbprint, which references the certificate within the IdP metadata, and the replyURLs, which is the assertion consumer URI for AWS.

So yeah, that's what happens in those two or three seconds after the AWS application is registered with Azure AD.  I found it interesting how the service principal object is used to represent the trust between Azure AD and AWS, and how much configuration information gets attached to the object simply by adding the application.  It's nice to have some of the configuration work done for us out of the box, but there is much more to do.

In the next entry I’ll walk through the Quick Start for the AWS application configuration and explore the metadata Azure AD creates.

The journey continues in my third entry.

Experimenting with the Azure B2B API

Hello all!

On October 31st Microsoft announced the public preview of the Azure B2B Invitation API. Prior to the introduction of the API, the only way to leverage the B2B feature was through the GUI. Given that B2B is Microsoft’s solution to collaboration with trusted partners in Office 365, the lack of the capability to programmatically interact with the feature was very limiting. The feature is now exposed through Azure’s Graph API as documented here.

A friend challenged me to write a script or small application that would leverage the API (Yes we are that nerdy.) Over the past year I’ve written a number of PowerShell scripts that query for information from OneDrive and Azure AD so I felt I would challenge myself by writing a small .NET Forms application. Now I have not done anything significant in the programming realm since freshman year of college so I thought this would be painful. Thankfully I have a copy of Visual Studio that comes with my MSDN subscription. All I can say is wow, development solutions have evolved since the 90s.

I won’t bore you with the amount of Googling and reading on MSDN I did over the weekend. Suffice to say it was a lot. Very tedious but an awesome learning experience. I can see a ton of re-use of the code I came up with for future experiments (even though I’m sure any real developer would point and laugh).

Before I jump into the detail, I want to mention how incredibly helpful Fiddler was in getting this all working and getting a deep understanding of how it all works. If you're interested in learning the magic of how all this "modern" stuff works, a tool like Fiddler is a must.

First off we need to register the application in Azure AD as a Native App to obtain a Client ID per this article. Next we need to grant the application the delegated right to read and write directory data as described in the MS article introducing the API. Once the application is registered and the rights have been granted, we can hammer out the code.

Before I jump into the code and the Fiddler traces, one thing that caught my eye in the returned JSON object was an attribute named invitedToGroups. If you're familiar with the Azure B2B functionality, you'll recall that as part of the B2B provisioning process, you can add the user object to a group. This is very useful in saving you from having to do this after the user accepts the invitation and their user object populates in Azure AD. What's odd is that this attribute isn't documented in the Microsoft blog I linked above. Digging into the GitHub documentation, it looks like the information about it was removed. Either way, I decided to play around with it, but regardless of whether or not I followed the schema mentioned in the removed GitHub documentation, I couldn't get it to take. I'm fairly sure I got the schema of the attribute/value pair correct, so we'll have to chalk this up to MS playing with the feature while the functionality is in public preview.

So let’s take a look at the code of this simple Windows form application, shall we?

In this first section of code, I'm declaring the namespaces the snippets rely on and pulling some information from the input fields within the forms application.

// Required namespaces for the snippets below (ADAL and Json.NET come from NuGet)
using System;
using System.Net.Http;
using System.Text;
using Microsoft.IdentityModel.Clients.ActiveDirectory;
// Pull data from user input fields
string tenantid = tenant.Text;
string emailaddress = email.Text;
string redirectsite = redirect.Text;
string group = GroupID.Text;

In this next section of code, I'm leveraging the ADAL library to redirect the user to authenticate against Azure AD, obtain an authorization code, and exchange it for a Graph API access token.

// Authenticate user and obtain access token
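// clientid and redirectURI are defined elsewhere in the application and come
// from the Native App registration performed earlier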
AuthenticationContext authcontext = new AuthenticationContext("https://login.microsoftonline.com/" + tenantid);
AuthenticationResult token = authcontext.AcquireToken("https://graph.microsoft.com", clientid, redirectURI, PromptBehavior.Always);

In this section of code I build an instance of the HttpClient class that will be used to submit my web request. You'll notice I'm attaching the bearer access token I obtained in the earlier step.

// Deliver the access token to the B2B endpoint along with the JSON object containing the invite information
string URL = "https://graph.microsoft.com/beta/";
HttpClient client = new HttpClient();
client.BaseAddress = new Uri(URL);
client.DefaultRequestHeaders.Authorization = new
System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", token.AccessToken);
client.DefaultRequestHeaders.Accept.Add(new
System.Net.Http.Headers.MediaTypeWithQualityHeaderValue("application/json"));

Here I issue my HTTP request to post to the invitation endpoint and include a JSON object with the necessary information to create an invitation.

HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Post,
"invitations");
// Build the JSON invite body from the form inputs
request.Content = new StringContent("{\"invitedUserEmailAddress\":\"" + emailaddress + "\",\"inviteRedirectUrl\":\"" + redirectsite + "\",\"sendInvitationMessage\":true,\"invitedToGroups\":[{\"group\":\"" + group + "\"}]}", Encoding.UTF8, "application/json");
HttpResponseMessage response = await client.SendAsync(request);

Finally, I parse the JSON object returned by the endpoint (using the Newtonsoft library) to check whether the operation completed.

string content = await response.Content.ReadAsStringAsync();
Newtonsoft.Json.Linq.JToken json = Newtonsoft.Json.Linq.JObject.Parse(content);
string myresult = json.Value<string>("status");
MessageBox.Show(myresult);
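
If you're curious, the JSON object the invitation endpoint hands back looks something like this (trimmed, with illustrative values):

{
  "id": "<invitation-id>",
  "invitedUserEmailAddress": "partner@example.com",
  "inviteRedirectUrl": "https://myapps.microsoft.com",
  "sendInvitationMessage": true,
  "inviteRedeemUrl": "https://invitations.microsoft.com/redeem/<token>",
  "status": "PendingAcceptance"
}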

Quite simple, right? It had to be, for someone as terrible at developing as I am. Imagine the opportunities for an API like this in the hands of a good developer. The API could be leveraged to automate this whole process in an enterprise identity management tool, allowing for self-service for company users who need to collaborate with trusted partners. Crazy cool.

This is the stuff I applaud Microsoft for. They've taken a cumbersome process, leveraged modern authentication and authorization protocols, provided a very solid collection of libraries, and set up a simple-to-use API that leverages industry-standard methodologies and data formats. Given the scale of cloud and the requirement for automation, simple and robust APIs based upon industry standards are hugely important to the success of a public cloud provider.

This whole process was an amazing learning experience where I had an opportunity to do something I’ve never done before and mess with technologies that are very much cutting edge. Opportunities like this to challenge myself and problem solve out of my comfort zone are exactly why I love IT.

I hope the above post has been helpful and I can’t wait to see how this feature pans out when it goes GA!

Feel free to reach out if you’re interested in a copy of my terrible app.

Thanks!