Integrating Azure AD and G-Suite – Google API Integration Part 2


Welcome back.

We’ve had an interesting journey so far in this series exploring the Azure Active Directory (Azure AD) and Google G-Suite (G-Suite) integration.  In the first entry I covered how the single sign-on integration works.  The second entry explored the process of registering an application with the Google Cloud Platform (GCP) for API access and GCP’s relationship to G-Suite.  Today I demonstrate interaction with Google’s API through some very basic .NET console applications I’ve put together.

As you’ll recall from the second entry, I created a new project in GCP, created a service account identity for my sample application, and enabled the Google Admin API for the project.  The application has been granted domain-wide delegation rights to my G-Suite domain at geekintheweeds.com.  Finally, the application’s client ID was added to my G-Suite domain and granted access to the https://www.googleapis.com/auth/admin.directory.user.readonly scope.  The application has an identity, credentials to authenticate, and has been authorized with the appropriate access.  Let’s get to some coding (or the mess of garbled logic that accounts for my coding ability 🙂 ).

Google provides API client libraries for a number of languages.  For the purposes of this demonstration, I’ll be using the .NET client library.  Play it smart and leverage the libraries vendors provide for ease of integration and to avoid critical mistakes that could impact the security of your API calls.  Now, using libraries doesn’t mean you get to ignore what goes on behind the curtains.  It’s critical to understand what the libraries are doing to ensure they’re doing things securely and for the purposes of working out bugs.  APIs in the cloud change on a daily basis; don’t count on the libraries keeping up with the vendor’s pace.  The last thing you want to run into is a situation where your application kicks the bucket due to an API change that isn’t reflected in the library and you’re stuck with a broken production application.

With that lecture out of the way, let’s get into the code for the demo application.

My first step is to open a new project in Visual Studio creating a Visual C# Console App using the .NET Framework.  Before I jump into using Google’s libraries I’m going to do the exact opposite of what I advised you to do above.  Yes folks, I’m going to make a call to the API without using Google’s libraries for the purposes of demonstrating the steps involved in acquiring an access token and sending that access token to the API to retrieve data.  I’ll then demonstrate how much simpler it is to use the library so you’ll appreciate its value.

As I explained in my last post, Google APIs use OAuth 2.0 for delegated access to data.  Google does a good job explaining the basic steps in obtaining an access token in this article so I won’t go too deeply into detail.  Instead I’m going to walk through putting these steps into code.  Fair warning, this code is meant for demo purposes only.  This means I’m not going to do any data validation, exception catching, or exercise secure coding practices.  Please please please don’t use my terrible coding practices in any real applications you develop.

Ready?  Let’s begin.

The article above first instructs you to obtain OAuth 2.0 credentials, which I did in my last entry.  The second step is to obtain an access token from the Google Authorization Server.  I’ll be following the instructions for OAuth 2.0 for Service Accounts since I’m building a simple console application.

The first thing we need to do is collect a few pieces of information.  In the service account scenario we need to know the unique identifier of the service account and the scopes we want to access.  For that I pop open a browser and go to my project in the GCP console.  From there I navigate to the APIs & Services section and select the Credentials option.


On the Credentials page I click on my service account to bring up the relevant information.  The service account field contains the unique identifier (or email address, as Google refers to it) of the service account I set up.

This identifier is one of the pieces of information I’ll need to include in the JSON Web Token (JWT) I’m going to send to Google’s API to obtain an access token.  In addition to the service account identifier, I also need the scopes I want to access, the Google Authorization Server endpoint, the G-Suite user I’m impersonating, the expiration time I want for the access token (maximum of one hour), and the time the assertion was issued.
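To make the claim set concrete, here’s a minimal sketch in C# of how those pieces could be assembled.  The values are placeholders for my environment; the claim names and the token endpoint come from Google’s OAuth 2.0 for Service Accounts documentation.

    // Requires: using System; using System.Collections.Generic;
    // Minimal sketch of the claim set for the service account flow.
    var issuedAt = DateTimeOffset.UtcNow;
    var claims = new Dictionary<string, object>
    {
        { "iss", "my-svc-account@my-project.iam.gserviceaccount.com" },               // service account identifier (placeholder)
        { "scope", "https://www.googleapis.com/auth/admin.directory.user.readonly" }, // scope being requested
        { "aud", "https://www.googleapis.com/oauth2/v4/token" },                      // Google Authorization Server endpoint
        { "sub", "user@geekintheweeds.com" },                                         // G-Suite user being impersonated (placeholder)
        { "iat", issuedAt.ToUnixTimeSeconds() },                                      // time the assertion was issued
        { "exp", issuedAt.AddMinutes(60).ToUnixTimeSeconds() }                        // expiration (maximum of one hour)
    };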


Now that I’ve collected the information I need for my claim set, I need to grab the private key from the PKCS#12 (.p12) certificate I was issued when I set up my service account.
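Assuming the key was downloaded with Google’s default export password of notasecret, loading it with the System.Security.Cryptography.X509Certificates namespace looks something like the sketch below (the file path is a placeholder).

    // Requires: using System.Security.Cryptography.X509Certificates;
    // Load the PKCS#12 file issued for the service account; "notasecret" is
    // Google's default password for downloaded .p12 keys.
    var certificate = new X509Certificate2(
        @"C:\demo\jog-directory-access.p12",   // placeholder path to the downloaded key
        "notasecret",
        X509KeyStorageFlags.Exportable);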

At this point I have all the components I need to assemble my JWT.  For that I’m going to add the jose-jwt library to my application and leverage it to simplify the creation of the JWT.


Once the library is added I create the JWT.  Google requires it to include a base64-encoded header, a base64-encoded claim set, and a signature that provides my application’s identity and ensures the integrity of the JWT.  The signature is created using the RSA-SHA256 signing algorithm, which is the only algorithm Google supports at this time.
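Here’s a rough sketch of what that looks like with jose-jwt, reusing the claims and certificate from the earlier snippets; depending on the version of the library you may need to hand it an RSACryptoServiceProvider instead of the RSA object shown here.

    // Requires: using Jose; using System.Security.Cryptography.X509Certificates;
    // jose-jwt builds the base64url-encoded header and claim set and signs them
    // with RSA-SHA256 (RS256).
    var rsaPrivateKey = certificate.GetRSAPrivateKey();
    string token = JWT.Encode(claims, rsaPrivateKey, JwsAlgorithm.RS256);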

If I quickly debug the app, dump the token variable to a string, copy it into Fiddler’s TextWizard, and Base64 decode it, we see the JWT we’ll be passing to Google.

I now have my JWT assembled and am ready to deliver it to Google as part of an access token request.  The next step is to submit it to Google’s authorization server.  For that I need to do a few things.  The first thing I need to do is create a new class that will act as the data model for the JSON response Google will return to me after I submit my authorization request.  I’ll deserialize the response I receive back using this data model.
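The class is a simple sketch along these lines; the class name is just a placeholder, the property names map to the JSON keys Google documents for the token response, and I’m using Newtonsoft.Json (Json.NET) for the attributes, though any JSON serializer would do.

    // Requires: using Newtonsoft.Json;
    // Data model for the JSON returned by Google's token endpoint.
    public class GoogleTokenResponse
    {
        [JsonProperty("access_token")]
        public string AccessToken { get; set; }

        [JsonProperty("token_type")]
        public string TokenType { get; set; }

        [JsonProperty("expires_in")]
        public int ExpiresIn { get; set; }
    }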


Next I create a new task that will accept a base web address, a URI, and a JWT.  The task then uses an instance of the HttpClient class to post the request to Google’s authorization server and capture the JSON response.
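A sketch of that task is below; the method name and parameters are stand-ins for the ones in my app, and the grant_type value is the JWT bearer grant Google documents for this flow.

    // Requires: using System; using System.Collections.Generic;
    //           using System.Net.Http; using System.Threading.Tasks;
    static async Task<string> RequestAccessTokenAsync(string baseAddress, string uri, string jwt)
    {
        using (var client = new HttpClient { BaseAddress = new Uri(baseAddress) })
        {
            // Google expects the signed JWT in the "assertion" parameter with the
            // urn:ietf:params:oauth:grant-type:jwt-bearer grant type.
            var body = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "grant_type", "urn:ietf:params:oauth:grant-type:jwt-bearer" },
                { "assertion", jwt }
            });

            HttpResponseMessage response = await client.PostAsync(uri, body);
            return await response.Content.ReadAsStringAsync();   // JSON response from Google
        }
    }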

I then call the new task, pass the appropriate parameters, deserialize the JSON response using the data model I created earlier, and dump the bearer access token into a string variable.
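Tying the pieces together looks roughly like this, using the hypothetical names from the sketches above.

    // Requires: using Newtonsoft.Json;
    string json = RequestAccessTokenAsync(
        "https://www.googleapis.com", "/oauth2/v4/token", token).GetAwaiter().GetResult();

    var tokenResponse = JsonConvert.DeserializeObject<GoogleTokenResponse>(json);
    string bearerToken = tokenResponse.AccessToken;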

So I have my bearer access token that allows me to impersonate the user within my G-Suite domain for the purposes of hitting the Google Directory API.  I whip up another task that will deliver the bearer access token to the Google Directory API and capture the JSON response, which will include details about the user.
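A sketch of that task is below; the URL is the documented Directory API users endpoint and the method name is again a stand-in.

    // Requires: using System.Net.Http; using System.Net.Http.Headers;
    //           using System.Threading.Tasks;
    static async Task<string> GetUserAsync(string bearerToken, string userKey)
    {
        using (var client = new HttpClient())
        {
            // Present the bearer access token obtained from the authorization server.
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", bearerToken);

            string url = "https://www.googleapis.com/admin/directory/v1/users/" + userKey;
            HttpResponseMessage response = await client.GetAsync(url);
            return await response.Content.ReadAsStringAsync();   // JSON describing the user
        }
    }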

Finally I call the task and dump the JSON response to the console.

The results look like the below.


Victory!  Whew, a lot of work there to pull a small amount of data from an API.  I had to create models for the data (you’ll notice I got lazy at the end and didn’t create a user model), properly secure the JWT for the access token request, manage my own HTTP clients, and handle the JSON responses.  It ended up being a fair amount of code for a relatively simple process.

How much easier is it using Google’s .NET library?  Let’s take a look at it.

In the code below I create a service account credential which handles the process of creating the appropriate JWT and exchanging it for an access token.
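The gist of it is below, assuming the Google.Apis.Auth.OAuth2 and Google.Apis.Admin.Directory NuGet packages and placeholder values for my environment.

    // Requires: using Google.Apis.Auth.OAuth2;
    //           using Google.Apis.Admin.Directory.directory_v1;
    //           using System.Security.Cryptography.X509Certificates;
    var certificate = new X509Certificate2(
        @"C:\demo\jog-directory-access.p12", "notasecret", X509KeyStorageFlags.Exportable);

    // The credential builds and signs the JWT and exchanges it for an access token on my behalf.
    var credential = new ServiceAccountCredential(
        new ServiceAccountCredential.Initializer("my-svc-account@my-project.iam.gserviceaccount.com")
        {
            User = "marge.simpson@geekintheweeds.com",                            // G-Suite user to impersonate
            Scopes = new[] { DirectoryService.Scope.AdminDirectoryUserReadonly }
        }.FromCertificate(certificate));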


Next up I create a new instance of the DirectoryService class which will represent the connection to the Google Directory API.  This class handles the assembly of the eventual calls to the API using the bearer access token obtained by the credential.
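A sketch of that, reusing the credential created above:

    // Requires: using Google.Apis.Admin.Directory.directory_v1;
    //           using Google.Apis.Services;
    var service = new DirectoryService(new BaseClientService.Initializer
    {
        HttpClientInitializer = credential,      // the service account credential created earlier
        ApplicationName = "JOG-NET-CONSOLE"
    });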


Last but not least I submit a request for user information for marge.simpson@geekintheweeds.com using the instance of the DirectoryService class I created, placing the JSON response into an instance of the User class included in the Directory API library, which serves as the data model for the response.
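That request boils down to a couple of lines:

    // Requires: using System;
    // The library deserializes the JSON response into its own User data model.
    var user = service.Users.Get("marge.simpson@geekintheweeds.com").Execute();
    Console.WriteLine(user.PrimaryEmail);
    Console.WriteLine(user.Name.FullName);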

Just a tad bit easier using the library, right?  If we break out Fiddler we see that four sessions have been created.

Sessions 1 and 2 display the submission of the JWT and acquisition of the bearer token.


Sessions 3 and 4 display the delivery of the bearer token to the API and the JSON response received back.


What did we learn from these past two posts (besides the fact I’m a terrible developer)?  We saw programmatic access to G-Suite is very dependent on foundational components of GCP.  Application identities are provisioned in the Google Cloud IAM instance associated with the GCP project.  The APIs the application needs to use must be enabled for the GCP project.  The application must then be granted access to the appropriate authorization scopes within the G-Suite domain, and Google provides an option for the application to impersonate users within the G-Suite domain rather than relying upon the user’s consent.

My hope is the biggest lesson you took from these two posts is how valuable it is to understand the inner workings of how a vendor leverages a technology.  Vendor-provided libraries are wonderful and make our lives far easier and our code potentially simpler and more secure, but nothing is ever more powerful than understanding the inner workings of the foundational technology.  The knowledge you gain at the technology level will translate across all the vendors you encounter, making it that much easier to grasp each new product.  Take your time, dig deep into the weeds, and enjoy the journey into the technology.

See you in the next post where I’ll wrap up this series and break down the Microsoft Azure AD identity provisioning integration with G-Suite.  Have a great week!


Integrating Azure AD and G-Suite – Google API Integration Part 1


Hi everyone,

Welcome to the second post in my series on the integration between Azure Active Directory (Azure AD) and Google’s G-Suite (formerly named Google Apps).  In my first entry I covered the single sign-on (SSO) integration between the two solutions.  This included a brief walkthrough of the configuration and an explanation of how the SAML protocol is used by both solutions to accomplish the SSO user experience.  I encourage you to read through that post before you jump into this one.

So we have single sign-on between Azure AD and G-Suite, but do we still need to provision the users and groups into G-Suite?  Thanks to Google’s Directory Application Programming Interface (API) and Azure AD’s integration with it, we can get automatic provisioning into G-Suite.  Before I cover how that integration works, let’s take a deeper look at Google’s Cloud Platform (GCP) and its API.

Like many of the modern APIs out there today, Google’s API is web-based and robust. It was built on Google’s JavaScript Object Notation (JSON)-based API infrastructure and uses Open Authorization 2.0 (OAuth 2.0) to allow for delegated access to an entity’s resources stored in Google. It’s nice to see vendors like Microsoft and Google leveraging standard protocols for interaction with their APIs, unlike some vendors… *cough* Amazon *cough*. Google provides software development kits (SDKs) and shared libraries for a variety of languages.

Let’s take a look at the API Explorer.  The API Explorer is a great way to play around with the API without the need to write any code and to get an idea of the inputs and outputs of specific API calls.  I’m first going to do something very basic and retrieve a listing of users in my G-Suite directory.  Once I access the API explorer I hit the All Versions menu item and select the Admin Directory API.


On the next screen I navigate down to the directory.users.list method and select it.  On the screen that follows I’m provided with a variety of input fields.  The data I input into these fields will affect what data is returned to me from the API.  I put in the domain name associated with my G-Suite subscription and hit the Authorize and Execute button.  A new window pops up which allows me to configure which scope of access I want to grant the API Explorer.  I’m going to give it just the https://www.googleapis.com/auth/admin.directory.user.readonly scope.


I then hit the Authorize and Execute button and I’m prompted to authenticate to Google and delegate API Explorer to access data I have permission to access in my G-Suite subscription.  Here I plug in the username and password for a standard user who isn’t assigned to any G-Suite admin roles.


After successfully authenticating, I’m then prompted for consent to delegate API Explorer to view the users configured in the user’s G-Suite directory.


I hit the Allow button, the request for delegated access is complete, and a listing of users within my G-Suite directory is returned in JSON format.


Easy, right?  How about we step it up a notch and create a new user?  For that operation I’ll be delegating access to API Explorer using an account which has been granted the G-Suite User Management Admin role.  I navigate back to the main list of methods and choose the directory.users.insert method.  I then plug in the required values and hit the Authorize and Execute button.  The scopes menu pops up and I choose the https://www.googleapis.com/auth/admin.directory.user scope to allow for provisioning of the user and then hit the Authorize and Execute button.  The request is made and a successful response is returned.
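For reference, the same insert can be made in code.  Here’s a hedged sketch using Google’s .NET client library for the Directory API, assuming an already-authenticated DirectoryService instance named service (building one is something I’ll walk through in the next entry).

    // Requires: using Google.Apis.Admin.Directory.directory_v1.Data;
    var newUser = new User
    {
        PrimaryEmail = "marge.simpson@geekintheweeds.com",
        Name = new UserName { GivenName = "Marge", FamilyName = "Simpson" },
        Password = "a-temporary-password"                     // placeholder value
    };

    User created = service.Users.Insert(newUser).Execute();   // equivalent of directory.users.insert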


Navigating back to G-Suite and looking at the listing of users shows the new user Marge Simpson has been created.


Now that we’ve seen some simple samples using API Explorer let’s talk a bit about how you go about registering an application to interact with Google’s API as well as covering some basic Google Cloud Platform (GCP) concepts.

The first thing I’m going to do is navigate to Google’s Getting Started page and create a new project.  So what is a project?  This took a bit of reading on my part because my prior experience with GCP was non-existent.  Think of a Google project like an Amazon Web Services (AWS) account or a Microsoft Azure subscription.  It acts as a container for billing, reporting, and organization of GCP resources.  Projects can be associated with a Google Cloud Organization (similar to how multiple Azure subscriptions can be associated with a single Microsoft Azure Active Directory (Azure AD) tenant), which is a resource available with a G-Suite subscription or Google Cloud Identity.  The picture below shows the organization associated with my G-Suite subscription.


Now that we have the concepts out of the way, let’s get back to the demo. Back at the Getting Started page, I click the Create a new project button and authenticate as the super admin for my G-Suite subscription. I’ll explain why I’m using a super admin later. On the next screen I name the project JOG-NET-CONSOLE and hit the next button.


The next screen prompts me to provide a name which will be displayed to the user when the user is prompted for consent, in the event I decide to use an OAuth flow which requires user consent.


Next up I’m prompted to specify what type of application I’m integrating with Google.  For this demonstration I’ll be creating a simple console app, so I’m going to choose Web Browser simply to move forward.  I plug in a random unique value and click Create.


After creation is successful, I’m prompted to download the client configuration and provided with my Client ID and Client Secret.  The configuration file is in JSON format and provides information about the client’s registration and the authorization server’s (Google’s) OAuth endpoints.  This file can be consumed directly by the Google API libraries when obtaining credentials if you’re going that route.


For the demo application I’m building I’ll be using the service account scenario often used for server-to-server interactions.  This scenario leverages the OAuth 2.0 client credentials authorization grant flow.  No user consent is required for this scenario because the intention of the service account scenario is for it to access its own data.  Google also provides the capability for the service account to be delegated the right to impersonate users within a G-Suite subscription.  I’ll be using that capability for this demonstration.

Back to the demo…

Now that my application is registered, I need to generate credentials I can use for the service account scenario. For that I navigate to the Google API Console. After successfully authenticating, I’ll be brought to the dashboard for the application project I created in the previous steps. On this page I’ll click the Credentials menu item.


The credentials screen displays the client IDs associated with the JOG-NET-CONSOLE project.  Here we see the client ID I received in the JSON file as well as a default one Google generated when I created the project.


Next up I click the Create Credentials button and select the Service Account key option.  On the Create service account key page I provide a unique name for the service account of Jog Directory Access.

The Role drop down box relates to the new roles that were introduced with Google’s Cloud IAM.

You can think of Google Cloud IAM as Google’s version of Amazon Web Services (AWS) IAM or Microsoft’s Azure Active Directory in that an instance of it backs the project and is used to manage access to the GCP resources.  When a new service account is created, a new security principal representing the non-human identity is created in the Google Cloud IAM instance backing the project.

Since my application won’t be interacting with GCP resources, I’ll choose the random role of Logs Viewer.  When I filled in the service account name, the service account ID field was automatically populated for me with a value.  The service account ID is unique to the project and represents the security principal for the application.  I choose the option to download the private key as a PKCS12 file because I’ll be using the System.Security.Cryptography.X509Certificates namespace within my application later on.  Finally I click the Create button and download the PKCS12 file.


The new service account now shows in the credential page.



Navigating to the IAM & Admin dashboard now shows the application as a security principal within the project.


I now need to enable the APIs in my project that I want my applications to access.  For this I navigate to the APIs & Services dashboard and click the Enable APIs and Services link.


On the next page I use “admin” as my search term, select the Admin SDK and click the Enable button.  The API is now enabled for applications within the project.

From here I navigate down to the Service accounts page and edit the newly created service account.


At this point I’ve created a new project in GCP, created a service account that will represent the demo application, and given that application the right to impersonate users in my G-Suite directory.  I now need to authorize the application to access G-Suite data via Google’s API.  For that I switch over to the G-Suite Admin Console, authenticate as a super admin, and access the Security dashboard.  From there I hit the Advanced Settings option and click the Manage API client access link.


On the Manage API client access page I add a new entry using the client ID I pulled previously and grant the application access to the https://www.googleapis.com/auth/admin.directory.user.readonly scope.  This allows the application to impersonate a user to pull a listing of users from the G-Suite directory.


Whew, a lot of new concepts to digest in this entry, so I’ll save the review of the application for the next entry.  Here’s a consumable diagram I put together showing the relationship between GCP projects, G-Suite, and a GCP organization.  The G-Suite domain acts as a link to the GCP projects.  The G-Suite users can set up GCP projects and have a stub identity (see my first entry LINK) provisioned in the project.  When a service account is created in a project and granted G-Suite domain-wide delegation, we use the client ID associated with the service account to establish an identity for the app in the G-Suite domain which is associated with a scope of authorized access.


In this post I covered some basic GCP concepts and saw that the concepts are very similar to those of both Microsoft and AWS.  I also covered the process to create a service account in GCP and how all the pieces come together to provide programmatic access to G-Suite resources.  In my next entry I’ll demo some simple .NET applications and walk through the code.

Have a great weekend and go Pats!

Integrating Azure AD and G-Suite – Single Sign-On


Hi everyone,

After working through the Azure Active Directory (AD) and Amazon Web Services (AWS) integration I thought it’d be fun to do the same thing with Google Apps.  Google provides a generic tutorial for single sign-on that is severely lacking in details.  Microsoft again provides a reasonable tutorial for integrating Azure AD and Google Apps for single sign-on.  Neither gives much detail about what goes on behind the scenes or provides the geeky details us technology folk love.  Where there is a lack of detail there is a blogging opportunity for Journey Of The Geek.

In my previous post I covered the benefits of introducing Azure AD as an Identity-as-a-Service (IDaaS) component to Software-as-a-Service (SaaS) integrations.  Read the post for full details but the short of it is the integration gives you value-added features such as multifactor authentication with Azure Multifactor Authentication (MFA), adaptive authentication with Azure AD Identity Protection, contextual authorization with Azure AD Conditional Access, and cloud access security broker (CASB) functionality through Cloud App Security.  Supplementing Google Apps with these additional capabilities improves visibility, security, and user experience.  Wins across the board, right?

I’m going to break the integration into a series of posts with the first focusing on single sign-on (SSO).  I’ll follow up with a post exploring the provisioning capabilities Azure AD introduces as well as playing around with Google’s API.  In a future post I’ll demonstrate what Cloud App Security can bring to the picture.

Let’s move ahead with the post, shall we?

The first thing I did was to add the Google Apps application to Azure AD through the Azure AD blade in the Azure Portal. Once the application was added successfully I navigated to the Single sign-on section of the configuration. Navigate to the SAML Signing Certificate section and click the link to download the certificate. This is the certificate Azure AD will be using to sign the SAML assertions it generates for the SAML trust. Save this file because we’ll need it for the next step.

I next signed up for a trial subscription of Google’s G Suite Business. This plan comes with an identity store, email, cloud storage, the Google productivity suite, and a variety of other tools and features. Sign up is straightforward so I won’t be covering it. After logging into the Google Admin Console as my newly minted administrator the main menu is displayed. From here I select the Security option.

Once the Security page loads, I select the Set up single sign-on (SSO) menu to expand the option.  Google will be playing the role of the service provider, so I’ll be configuring the second section.  Check the box to set up SSO with a third-party identity provider.  Next up you’ll need to identify what your specific SAML2 endpoint is for your tenant.  The Microsoft article still references the endpoint used with the old login experience that was recently replaced.  You’ll instead want to use the endpoint https://login.microsoftonline.com/<tenantID>/saml2.  You’ll populate that endpoint for both the Sign-In and Sign-Out URLs.  I opted to choose the domain-specific issuer option, which sets the identifier Google uses for itself in the SAML authentication request to include the domain name associated with the Google Apps account.  You would typically use this if you had multiple subscriptions of Google Apps using the same identity provider.  The final step is to upload the certificate you downloaded from Azure AD.  At this point Google is configured to redirect users accessing Google Apps (exempting the Admin Console) to Azure AD to authenticate.


Now that Google is configured, we need to finish the configuration on Azure AD’s end.  If you follow the Microsoft tutorial at this point you’re going to run into some issues.  In the previous step I opted to use a domain specific issuer, so I’ll need to set the identifier to google.com/a/geekintheweeds.com.  For the user identifier I’ll leave the default as the user’s user principal name since it will match the user’s identifier in Google.  I also remove the additional attributes Azure AD sends by default since Google will discard them anyway.  Once the settings are configured hit the Save button.


Now that both the IdP and SP have been configured, it’s time to create a user in Google Apps to represent my user that will be coming from Azure AD.  I refer to this as a “stub user” as it is a record that represents my user who lives authoritatively in Azure Active Directory.  For that I switch back to the Google Admin Console, click the Users button, and click the button to create a new user.


Earlier I created a new user in Azure AD named Michael Walsh that has a login ID of michael.walsh@geekintheweeds.com. Since I’ll be passing the user’s user principal name (UPN) from Azure AD, I’ll need to set the user’s Google login name to match the user’s UPN.


I then hit the Create button and my new user is created.  You’ll note that Google assigns the user a temporary password.  Like many SaaS solutions, Google maintains a credential associated with the user even when the user is configured to use SSO via SAML.  Our SP and IdP are configured and the stub user is created in Google, so we’re good to test it out.


I open up Edge and navigate to the Google Apps login page, type in my username, and click the Next button.


I’m then redirected to the Microsoft login page where I authenticate using my Azure AD credentials and hit the sign in button.


After successfully authenticating to Azure AD, I’m redirected back to Google and logged in to my newly created account.


So what happened in the background to make the magic happen?  Let’s take a look at a diagram and break down the Fiddler conversation.


The diagram above outlines the simple steps used to achieve the user experience.  First the user navigates to the Google login page (remember, this is SP-initiated SSO), enters his or her username, and is sent back an authentication request, seen below extracted from Fiddler, with instructions to deliver it to the Azure AD endpoint for our tenant.



The user then authenticates to Azure AD and receives back a SAML response with instructions to deliver it back to Google. The user’s browser posts the SAML assertion to the Google endpoint and the user is successfully authenticated to Google.



Simple, right?  In comparison to the AWS integration, from an SSO perspective this was much more straightforward.  Unlike the AWS integration, a stub user must exist for the user in Google Apps prior to using SSO.  This means there is some provisioning work to perform… or does it?  Azure AD’s integration again offers some degree of “provisioning”.  In my next post I’ll explore those capabilities and perform some simple actions inside Google’s API.

See you next post!

Office 365 Groups Naming Policies – Part 1


Groups…  It’s a term every business user consuming technology has heard at some point in time.  Most users only experience groups when they’re unable to access a specific application or file and the coworker sitting next to them informs them they need to call IT and get added to the department group.  Those of us who work on the technology side of the fence are very familiar with the benefits groups bring to the table when controlling access to data.  We are also quite familiar with the challenges they can bring when managing them at scale.

Something as simple as a lack of an enforced naming convention can create serious pain for an organization if it relies heavily upon the naming convention to determine the function and owner of a group.  The pain bleeds through IT and into the business as workers struggle with long wait times for on-boarding new employees due to IT trying to determine which groups the users need to be in.  When it comes time to perform an access review, business owners may waste valuable time trying to determine if removing an employee from a specific group will impact that employee’s ability to fulfill their job responsibilities.

In the on-premises world organizations deal with the challenge of naming conventions in different ways.  Most rely upon first or second level help desk staff to create groups according to the organization’s naming standard.  This method introduces the risk of human error and presents challenges when the group information for a particular application is sourced from a variety of different identity backends, which forces the staff to learn multiple tools.  Others make use of identity management (IDM) systems that automate the creation of groups and enforce the naming convention.  This method is very effective but also very costly due to the high costs of implementing and operating an IDM.  A very small minority of organizations have evolved to the point where naming conventions are no longer important due to robust reporting systems and entitlement databases.

Very few organizations are able to successfully execute the third method, which leaves them with the first or second.  The introduction of software-as-a-service (SaaS) has made the first and second methods of enforcing a naming convention much more complicated.  Using the first method of leveraging help desk staff to create the groups manually is no longer scalable, and the second method of using a centralized IDM system is often limited by the vendor’s ability to write connectors to the wide variety of APIs in use across the thousands of SaaS vendors.  All is not lost, as it seems some vendors have begun to recognize the challenge this can introduce for their customers.

If your organization is a consumer of Office 365, you’ve more than likely begun to use Office 365 Groups.  Office 365 Groups offer a variety of features not found in the traditional security/distribution group or shared mailbox.  Take a look at this link for a comparison chart that documents the features.  One important thing to note is Office 365 Groups can only be created in Azure Active Directory (AAD).  You cannot synchronize an on-premises Active Directory Domain Services security or distribution group to AAD and convert it to an Office 365 Group.  This means you can’t leverage an existing solution for enforcing naming conventions unless that solution has a connector into Azure AD.  Given the features Office 365 Groups provide and that they are the construct used by Microsoft Teams, you may make the decision to allow your users to create Office 365 Groups on the fly in order to allow them to take full advantage of the collaboration tools available in Office 365.  To quote Peter Venkman, “Human sacrifice, dogs and cats living together… mass hysteria!”.

Calm down, my friend.  Microsoft has a solution coming in the pipeline that will solve your Office 365 Groups naming convention woes.  In my next post I’ll demonstrate the feature and walk through how to test it out while it is in preview.

Deep dive into AD FS and MS WAP – User Certificate Authentication through a WAP

Hi everyone,

Today I continue my series of posts that cover a behind the scenes look at how Active Directory Federation Services (AD FS) and the Microsoft Web Application Proxy (WAP) interact.  In my first post I explained the business cases that would call for the usage of a WAP.  In my second post I did a deep dive into the WAP registration process (MS refers to this as the trust establishment between AD FS and the WAP).  In this post I decided to cover how user certificate authentication is achieved when the AD FS server is placed behind the WAP.

AD FS offers a few different options to authenticate users to the service including Integrated Windows Authentication (IWA), forms-based authentication, and certificate authentication.  Readers who work in environments with sensitive data where assurance of a user’s identity is important should be familiar with certificate authentication in the Microsoft world.  If you’re unfamiliar with it I recommend you take a read through this Microsoft article.

With the recent release of the National Institute of Standards and Technology (NIST) Digital Identity Guidelines 800-63, which reworks the authenticator assurance levels (AAL) and relegates passwords to AAL1 only, organizations will be looking for other authenticator options.  Given the maturity of authenticators that make use of certificates, such as the traditional smart card, it’s likely many organizations will look at opportunities for how the existing equipment and infrastructure can be further utilized.  So it’s all the more important we understand how AD FS certificate authentication works.

I’ll be using the lab I described in my first post.  I made the following modifications/additions to the lab:

  • Configured the Active Directory Certificate Services (AD CS) certificate authority (CA) to include a certificate revocation list (CRL) distribution point (CDP).  The CRLs will be served up via an IIS instance with the address crl.journeyofthegeek.com.  This is the only CDP listed in the certificates.  Certificates created during my original lab setup that are installed within the infrastructure do not include a CDP.
  • Added a non-domain-joined Windows 10 computer which will be used as the endpoint the test user accesses the federation service from.

Tool-wise I used ProcMon, Fiddler, API Monitor, and Wireshark.

So what did I discover?

Prior to doing any type of user interaction, I set up the tools I would be using moving forward.  On the WAP I started ProcMon as an administrator and configured my filters to capture only TCP Send and TCP Receive operations.  I also set up Wireshark using a filter of ip.addr==192.168.100.10 && tcp.port==80.  The IP address is the IP of the web server hosting my CRLs.  This would ensure I’d see the name of the process making the connection to the CDP as well as the conversation between the two nodes.


** Note that the machine will cache the CRLs after they are successfully downloaded from the CDP.  It will not make any further calls until the CRLs expire.  To get around this behavior while I was testing I ran the command certutil -setreg chain\ChainCacheResyncFiletime @now as outlined in this article.   This forces the machine to pull the CRLs again from the CDP regardless of whether or not they are expired.  I ran the command as the LOCAL SYSTEM security principal using psexec.

The final step was to start Fiddler as the NETWORK SERVICE security principal using the command psexec -i -u “NT AUTHORITY\Network Service” “C:\Program Files (x86)\Fiddler2\Fiddler.exe”.  Remember that Fiddler needs the public key certificate in the appropriate file location as I outlined in my last post.  Recall that the Web Application Proxy Service and the Active Directory Federation Service running on the WAP both run as that security principal.

Once all the tools were in place I logged into the non-domain joined Windows 10 box and opened up Microsoft Edge and popped the username of my test user into the username field.


After home realm discovery occurred within Azure AD, I received the forms-based login page of my AD FS instance.



Let’s take a look at what’s happened on the WAP so far.

In the initial HTTP CONNECT session the WAP makes to the AD FS farm, we see the ClientHello handshake occur, where the WAP authenticates itself to the AD FS server as described in my last post.


Once the secure session is established the WAP passes the HTTP GET request to the AD FS server.  It adds a number of headers to the request which AD FS consumes to identify that the client is coming from the WAP.  This information is used for a number of AD FS features such as enforcing additional authentication policies for Extranet access.


The WAP also passes a number of query strings.  There are a few interesting query strings here.  The first is the client-request-id, which is a unique identifier for the session that AD FS uses to correlate event log errors with the session.  The username is obvious and shows the user principal name that was entered in the username field on the O365 login page.  The wa query string shows a value of wsignin1.0 indicating the usage of WS-Federation.  The wtrealm indicates the relying party identifier of the application, in this case Azure AD.


The wctx query string is quite interesting and needs to be parsed a bit on its own.  Breaking down the value in the parameter we come across three unique parameters.

LoginOptions=3 indicates that the user has not selected the “Keep me signed in” option.  If the user had selected that checkbox a value of 1 would have been passed and AD FS would create a persistent cookie which would exist even after the browser closes.  This option is sometimes preferable for customers when opening documents from SharePoint Online so the user does not have to authenticate over and over.

The estsredirect contains the encoded and signed authentication request from O365.  I stared at API monitor for a few hours going API call by API call trying to identify what this looks like once it’s decoded, but was unsuccessful.  If you know how to decode it, I’d love to know.  I’m very curious as to its contents.

The WAP next makes another HTTP GET to the AD FS server, this time including the additional query string of pullStatus, which is set equal to 0.  I’m clueless as to the function of this; I couldn’t find anything on it.  The only other thing that changes is the referer.

My best guess on the above two sessions is the first session is where AD FS performs home realm discovery and maybe some processing to determine if there are any special configurations for the WAP such as limited or expanded authentication options (device authN, certAuthN only).  The second session is simply the AD FS server presenting the authentication methods configured for Extranet users.

The user then chooses the “Sign in with an X.509 certificate” option (I’m not using SNI to host both forms and cert authN on the same port) and the WAP then performs another HTTP CONNECT to port 49443, which is the certificate authentication endpoint on the AD FS server.  It again authenticates to the AD FS server with its client certificate prior to establishing the secure tunnel.

In the third session we see an HTTP POST to the AD FS server with the same query parameters as our previous request, this time providing a JSON object with the key-value combination of AuthMethod=CertificateAuthentication in the body.


The next session is another HTTP POST with the same JSON object content and the key-value pairs of AuthMethod=CertificateAuthentication and RetrieveCertificate=1 in the body.  The AD FS server sends a 307 Temporary Redirect to the /adfs/backendproxytls/ endpoint on the AD FS server.

Prior to the redirect completing successfully we see the calls to the CDP endpoint for the full and delta CRLs.



I was curious as to which process was pulling the CRLs and identified it was LSASS.EXE from the ProcMon capture.


At the /adfs/backendproxytls/ endpoint the WAP performs another HTTP POST, this time posting a JSON object with a number of key-value combinations.


The interesting keys included in the JSON object are the nested Headers object, which contains all the WAP headers I covered earlier; the query string object, which contains all the query strings I covered earlier; and the SerializedClientCertificate, which contains the certificate the user provided after selecting certificate authentication.  The AD FS server then sends back a cookie to the WAP.  This is the cookie representing the user’s authentication to the AD FS server as detailed in this link.


The WAP then performs a final HTTP GET back at the /adfs/ls/ endpoint, including the previously described headers and query strings as well as the cookie it just received.  The AD FS server responds by providing the assertion requested by Microsoft along with the MSISAuthenticated, MSISSignOut, and MSISLoopDetectionCookie cookies which are described in the link above.

What did we learn?

  1. The certificate is checked at both the WAP and the AD FS server to ensure it is valid and issued from a trusted certificate authority.  Remember to verify you trust the certificate chain of any user certificates on both the AD FS servers and WAPs.
  2. CRL revocation checking is enabled by default and is performed on both the AD FS server and the WAP.  Remember to verify the locations in your CDP are reachable by both devices.
  3. The AD FS servers use the LSALogonUser function in the secur32.dll library to perform standard certificate authentication to Active Directory Domain Services.  I didn’t include this, but I captured this by running API monitor on the AD FS server.

In short, if you’re going to use device authentication or user certificate authentication make sure you have your PKI components in order.

See you next post!

Deep dive into AD FS and MS WAP – WAP Registration

Hi everyone,

In today’s blog entry I’ll be doing a deep dive into how the Microsoft Web Application Proxy (WAP) establishes a trust with Active Directory Federation Services (AD FS) (I’ll be referring to this as registration) in order to act as a reverse proxy for AD FS.  In my first entry in this series I covered the business use cases that would call for such an integration as well as providing an overview of the lab environment I’ll be using for the series.  So what does registration mean?  Well, the best way to describe it is to see it in action.

Figuring out how to capture the conversation took some trial and error.  This is where Sysinternals Process Explorer comes into play.  I went through the process of registering the WAP with AD FS using the Remote Access Management Console configuration utility and monitored the running processes with Process Explorer.  Upon reviewing the TCP/IP activity of the Remote Access Management Console process (RAMgmtUI.exe) I observed TCP connectivity to the AD FS farm.


The process is running as the logged in user, in my case the administrator account I’ve configured.  This meant I would need to run Fiddler using the logged in user’s context rather than having to do something funky like running it as SYSTEM or another security principal using PSEXEC.

I started up Fiddler and configured it to intercept HTTPS traffic as per the configuration below.  Ensure that you’ve trusted the Fiddler root certificate so Fiddler can establish a man-in-the-middle (MITM) scenario.


I next ran the Remote Access Management Console and initiated the Web Application Proxy Configuration wizard.  Here I ran the wizard a few different times specifying invalid credentials on the AD FS server to generate some web requests.  The web conversation below popped up in Fiddler.


Digging into the third session shows an HTTP POST to sts.journeyofthegeek.com/adfs/Proxy/EstablishTrust with a return code of 401 Unauthorized which we would expect given our application doesn’t know if authentication is required yet and didn’t specify an Authorization header.


Session four shows another HTTP POST to the same URL this time with an Authorization header specifying Basic authentication with our credentials Base64 encoded.  We receive another 401 because we have invalid credentials which again is expected.


What’s interesting is the JSON object being posted to the URL.  It includes a key named SerializedTrustCertificate with a Base64 encoded public-key certificate as the value.


Copying and pasting the encoded value into Notepad and saving the file with a CER extension yields the certificate below, for which the WAP holds both the public and private keys.  The certificate is a self-signed certificate with a 2048-bit key length.


At this point the WAP will attempt numerous connections to the /adfs/Proxy/GetConfiguration URL with a query string of api-version=2 as seen in the screenshot below.  It will receive a 401 back because Fiddler needs a copy of the client certificate to provide to the AD FS server.  I let it time out and eventually the setup finished.


So what does the configuration information look like when it’s successfully retrieved from AD FS?  To see that we now have to pay attention to the Microsoft.IdentityServer.ProxyService.exe process, which runs as the Active Directory Federation Services service (adfssrv).


Since the process runs as Network Service I needed to get a bit creative in how I captured the conversation with Fiddler.  The first step is to export the public-key certificate for the self-signed certificate generated by the WAP, name it ClientCertificate.cer, and store it in the Network Service profile folder at C:\Windows\ServiceProfiles\NetworkService\Documents\Fiddler2.  By doing this Fiddler will use that certificate for any website requiring client certificate authentication.

The next step was to start Fiddler as the Network Service security principal.  To do this I used PSEXEC with the following options:

psexec -i -u “NT AUTHORITY\Network Service” “C:\Program Files (x86)\Fiddler2\Fiddler.exe”

I then restarted the Active Directory Federation Services service on the WAP and boom, there is our successful GET from the AD FS server at the /adfs/Proxy/GetConfiguration URL.


The WAP receives back a JSON object with all the configuration information for the AD FS server as seen below.  Much of this is information about the endpoints the AD FS server is supporting.  Beyond that we get information about the AD FS service configuration.  The WAP uses this configuration to set up its bindings with the HTTP.SYS kernel mode driver.  Yes, the WAP uses HTTP.SYS in the same way AD FS uses it.



So what did we learn?  When establishing the trust with the AD FS server (I’m branding this registration 🙂 ) the WAP does the following:

  1. Generates a 2048-bit self-signed certificate
  2. Opens an HTTPS connection with an AD FS server
  3. Performs a POST on /adfs/Proxy/EstablishTrust providing a JSON object containing the public key certificate and authenticating to the AD FS server with the credentials provided in the wizard using Basic authentication.  If the authentication is successful, the AD FS server establishes the trust.  (I’ll dig into this piece in the next post)
  4. Performs a GET on /adfs/Proxy/GetConfiguration using the self-signed certificate to authenticate itself to the AD FS server.
  5. Consumes the configuration information and configures the appropriate endpoints with calls to HTTP.SYS.

So that’s the WAP side of the fence for establishing the trust.  In my next post I’ll briefly cover what goes on with the AD FS server as well as examining the LDAP calls (if any) to AD DS during the registration process.

See you next time!

A taste of the cloud with Symantec.cloud Email Security

Recently, I had a chance to demo three of Symantec’s cloud services: Symantec.cloud Email Security, Email Encryption, and Endpoint Security. Since there do not seem to be too many detailed reviews of Symantec’s cloud services floating around on the Net, I figured I’d relay my experiences with the service.

Today I will be talking about Symantec.cloud Email Security.

Symantec.cloud Email Security

Host-based anti-malware is all well and good, but stopping malware from ever entering the network is even better. That is where the Email Security portion of Symantec’s cloud services comes in. The service aims to provide anti-spam, anti-virus, image control, and content control of email through a single cloud-based service managed using a web-based client portal.

Setup was fairly easy; it involved filling out some paperwork with information about the network and submitting it to Symantec to aid in the configuration of the client portal. After about two weeks, we were provided with an email with further instructions on how to configure the mail servers. The service requires directing the mail server to send all mail to Symantec through a TLS-encrypted connection, adjusting MX records to point to Symantec’s servers, and optionally locking down SMTP ports to allow traffic to and from only Symantec’s servers. Performing all three of these tasks has the added benefit of decreasing the attack surface of the network by locking down SMTP and providing additional confidentiality for email data as it flows between the company’s network and Symantec’s mail servers (I’ll talk more about this when I review the Encryption portion of the service). Symantec support replied quickly to any problems we ran into during the setup.

Once everything was in working order, we were provided access to the web-based portal.

The summary window provides graphical representations and statistics showing total incoming and outgoing email, emails containing viruses, spam, blocked images, and blocked content. It is pretty typical of the summary windows you encounter in any type of enterprise-level security software, nothing too special. Although it is neat to watch the amount of spam rise and fall depending on the day of the week (it seems even spammers enjoy taking the weekend off).

The anti-virus piece of the service utilizes Symantec’s vast database of virus signatures to detect and remove viruses from email. The added benefit of this is better detection of zero-day threats since you are not sitting waiting for the DATs to be released and pushed to your local gateway anti-malware solution. There is not much configuration to this portion of the service.

The anti-spam portion of the service uses typical anti-spam lists, as well as heuristics and a signature system. From our testing, the lists catch close to 50% of the spam, with the heuristics and signature system each catching about 25%. We were really impressed with the effectiveness of the filter. Almost no spam made it through and we had a false positive in only about 1 out of every 10,000 emails.

The configuration options are pretty typical of any anti-spam solution. Email can be sent to a quarantine hosted by Symantec, sent along with an appended header, blocked, or sent to a bulk email address. The quarantine feature was pretty nice, as each user can be set up with access to his or her quarantined emails to restore any false positives. Another available option is to give a single user control over the quarantines of multiple users. The only downfall to this option is that control of the quarantines cannot be shared among users. Hopefully that is a feature Symantec will add in the future.

The content control feature of the service is really intense, as you can see from the screenshot below. I won’t go too in depth into it because I didn’t spend too much time playing with it. Suffice to say it is pretty awesome. Want to know if your users are sending personally identifiable information out over email without encrypting it? That can be done, even so far as scanning Microsoft Office documents and OCR’d PDFs. Suspect a certain user of sending company info out of the network without permission? Set up a rule to copy his or her email with specific attachment types to another email address for further review.

The system uses templates to detect patterns, and the templates can be created by the user or with help from Symantec. We had Symantec help us create a template to detect bank account numbers and tested it by sending some Excel documents through the system with fictitious account numbers. The system caught the emails with the pattern and notified us of the email address used to send the email. These caught emails can be let through, tagged, logged, deleted, redirected to the administrator, or copied to the administrator. I can see this coming in handy when trying to prevent a data breach of confidential information or during an internal investigation of an employee’s email activities.

I didn’t play with the image control feature of the service. It looks to be as intense as the content control from the little that I looked at.

If you are in need of a customized report from the system, you can request Symantec to create one for you. I didn’t end up utilizing this feature, so I can’t say what the response time from support is.

Overall, I was really impressed with the service. The spam filter was amazing and the anti-virus seemed solid. I can think of a thousand uses for the content control and can’t wait to play with it further. Cost isn’t too bad, about $4,000/year for 50 users. If you are looking to free up some server resources, consolidate software packages, and possibly increase your network security, Symantec.cloud Email Security is something to look at.