Office 365 Group Naming Policies – Part 2

Welcome back.

In my first post I covered some of the methods organizations use to enforce a naming standard for groups, such as Active Directory groups, that are used to authorize access to data.  I also covered the challenges that are introduced when a mechanism for enforcing the naming standard doesn’t exist or isn’t effective and how this problem is becoming more prevalent with the increase in consumption of software-as-a-service applications.

Office 365 Groups are a core foundational component of Office 365 helping to enable simple, fast, and efficient collaboration within an organization.  For an organization to take full advantage of this, its end users need to be empowered to spin up Office 365 groups for mail-based collaboration or create new Teams for real-time collaboration. To make this work, IT can’t get in the way of the business and needs to let the business spin up and down Office 365 Groups as it needs them.  Microsoft has introduced a number of solutions to help with this including group expiration, integration with Office 365 retention policies, and the feature I’ll cover today, naming policies.

The naming policy feature is still in private preview but today I’m going to show you how to test the feature in your own tenant.  As a point of reference, I’m using a set of trial O365 E5 and Azure AD Premium P2 licenses within the commercial offering of Office 365 for my testing.  I can’t speak to whether or not the instructions below will work for Office 365 GCC or Office 365 Government.

The first thing I needed to do was install the Azure AD Preview module.  To do this I had to first remove the existing Azure AD module I had installed on my system and then install the Azure AD Preview module as seen below.

o365-1
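
If you want to reproduce the module swap, it boils down to two commands (a minimal sketch; assumes PowerShellGet is available):

# Remove the GA module and install the preview module
Uninstall-Module -Name AzureAD
Install-Module -Name AzureADPreview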

Comparing the modules using Get-Command shows that the preview module adds the new cmdlets below.
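
One way to produce that diff (a sketch; it assumes you capture the GA module's command list before removing it):

# Diff the cmdlet lists; '=>' marks cmdlets unique to the preview module
Compare-Object (Get-Command -Module AzureAD) (Get-Command -Module AzureADPreview) -Property Name |
    Where-Object { $_.SideIndicator -eq '=>' } | Select-Object Name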

Add-AzureADAdministrativeUnitMember
Add-AzureADApplicationPolicy
Add-AzureADMSLifecyclePolicyGroup
Add-AzureADScopedRoleMembership
Add-AzureADServicePrincipalPolicy
Get-AzureADAdministrativeUnit
Get-AzureADAdministrativeUnitMember
Get-AzureADApplicationPolicy
Get-AzureADDirectorySetting
Get-AzureADDirectorySettingTemplate
Get-AzureADMSDeletedDirectoryObject
Get-AzureADMSDeletedGroup
Get-AzureADMSGroup
Get-AzureADMSGroupLifecyclePolicy
Get-AzureADMSLifecyclePolicyGroup
Get-AzureADObjectSetting
Get-AzureADPolicy
Get-AzureADPolicyAppliedObject
Get-AzureADScopedRoleMembership
Get-AzureADServicePrincipalPolicy
New-AzureADAdministrativeUnit
New-AzureADDirectorySetting
New-AzureADMSGroup
New-AzureADMSGroupLifecyclePolicy
New-AzureADObjectSetting
New-AzureADPolicy
Remove-AzureADAdministrativeUnit
Remove-AzureADAdministrativeUnitMember
Remove-AzureADApplicationPolicy
Remove-AzureADDirectorySetting
Remove-AzureADMSDeletedDirectoryObject
Remove-AzureADMSGroup
Remove-AzureADMSGroupLifecyclePolicy
Remove-AzureADMSLifecyclePolicyGroup
Remove-AzureADObjectSetting
Remove-AzureADPolicy
Remove-AzureADScopedRoleMembership
Remove-AzureADServicePrincipalPolicy
Reset-AzureADMSLifeCycleGroup
Restore-AzureADMSDeletedDirectoryObject
Set-AzureADAdministrativeUnit
Set-AzureADDirectorySetting
Set-AzureADMSGroup
Set-AzureADMSGroupLifecyclePolicy
Set-AzureADObjectSetting
Set-AzureADPolicy

The cmdlets we’re interested in for this demonstration are those used to create and manage a new Graph API resource type called a directorySetting.  The resource type is used to configure settings within Azure Active Directory.  directorySetting resources are created from a template of configuration settings called a directorySettingTemplate resource type.  Running the Get-AzureADDirectorySettingTemplate cmdlet displays the available templates to build a custom directorySetting from.

o365-2.png

After connecting to Azure AD using the Connect-AzureAD cmdlet, I can take a look at the templates available.  The template I’m interested in for this blog is the Group.Unified template because it contains the settings for the naming policy, as seen below.

o365-3.png

Now that I’ve identified the template I want to draw from for a new directorySetting, I’m going to create a variable named $template and assign the Group.Unified template to it.  Running a quick Get-Member on the newly assigned variable displays a method named CreateDirectorySetting.  I’ll use this method to create a new instance of a directorySetting resource type based off the template and assign it to a variable named $setting.
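
Strung together, the steps so far look roughly like this (a sketch using the AzureADPreview cmdlets):

Connect-AzureAD
$template = Get-AzureADDirectorySettingTemplate | Where-Object { $_.DisplayName -eq 'Group.Unified' }
$setting = $template.CreateDirectorySetting()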

o365-4.png

If I run a Get-Member on $setting I can see that I’ve created a new instance of the directorySetting resource type which has the settings inherited from the Group.Unified template with some of those settings being configured with default values.

o365-5.png

You’ll want to pay attention to these default values because once the settings become active for the tenant, they seem to override settings configured within the GUI.  For example, if you are denying users the ability to create new Office 365 groups via the configuration setting in the Azure Active Directory blade in the Azure Portal, leaving the EnableGroupCreation setting as true will override that.  I’m not sure that is the intended behavior, but hey, this is still preview, right?

The next step is to configure the PrefixSuffixNamingRequirement setting with the naming convention I want enforced across my tenant.  This Microsoft article does a good job explaining your options and the syntax.  I went with a simple naming convention of including the fixed string “JOG” along with the value from the user’s department attribute in Azure Active Directory followed by the string value the user chooses for the group name.
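
The assignment looks something like the following; the exact string is my hypothetical rendering of the convention described above (fixed string “JOG”, the Department attribute, then the name the user chooses):

# Square-bracketed tokens are replaced with user attributes at creation time
$setting['PrefixSuffixNamingRequirement'] = 'JOG-[Department]-[GroupName]'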

o365-6.png

Checking the Values property of $setting shows that PrefixSuffixNamingRequirement is now populated with the value I entered above.

o365-7.png

Now that the setting has been configured, I make it active by using the New-AzureADDirectorySetting cmdlet and including the $setting object as input.
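
The call itself is a one-liner (a sketch, continuing from the variables above):

# Make the directorySetting active for the tenant
New-AzureADDirectorySetting -DirectorySetting $setting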

o365-8.png

I then log into the Office 365 portal as a standard user and navigate to Outlook Web App and attempt to create a new Office 365 group. All new groups are now created using the naming convention I defined and it’s displayed clearly to the end users.

o365-9.png

Hopefully Microsoft will refine the documentation as the feature moves out of preview and into general availability.  I also think this is a simple and static setting that would make sense to be configurable from the GUI.  I’d also like to see the settings configurable with the directorySetting resource type kept in sync with any corresponding settings in the GUI to avoid confusion.

That’s all there is to it.  Overall it’s a very simple yet elegant solution that solves naming convention woes while giving the business freedom to collaborate without having to go through IT.  You can’t beat that.

Thanks!

Office 365 Groups Naming Policies – Part 1

Groups…  It’s a term every business user consuming technology has heard at some point in time.  Most users only experience groups when they’re unable to access a specific application or file and the coworker sitting next to them informs them they need to call IT and get added to the department group.  Those of us who work on the technology side of the fence are very familiar with the benefits groups bring to the table when controlling access to data.  We are also quite familiar with the challenges they can bring when managing them at scale.

Something as simple as the lack of an enforced naming convention can create serious pain for an organization if it relies heavily upon the naming convention to determine the function and owner of a group.  The pain bleeds through IT and into the business as new employees endure long on-boarding waits while IT tries to determine which groups they need to be in.  When it comes time to perform an access review, business owners may waste valuable time trying to determine if removing an employee from a specific group will impact that employee’s ability to fulfill their job responsibilities.

In the on-premises world organizations deal with the challenge of naming conventions in different ways.  Most rely upon first or second level help desk staff to create groups according to the organization’s naming standard.  This method introduces the risk of human error and presents challenges when the group information for a particular application is sourced from a variety of different identity backends, forcing the staff to learn multiple tools.  Others make use of identity management (IDM) systems that automate the creation of groups and enforce the naming convention.  This method is very effective but also very costly to implement and operate.  A very small minority of organizations have evolved to the point where naming conventions are no longer important due to robust reporting systems and entitlement databases.

Very few organizations are able to successfully execute the third method, which leaves them with the first or second.  The introduction of software-as-a-service (SaaS) applications has made the first and second methods of enforcing a naming convention much more complicated.  The first method of leveraging help desk staff to create the groups manually is no longer scalable, and the second method of using a centralized IDM system is often limited by the vendor’s ability to write connectors to the wide variety of APIs in use across the thousands of SaaS vendors.  All is not lost, as it seems some vendors have begun to recognize the challenge this can introduce for their customers.

If your organization is a consumer of Office 365, you’ve more than likely begun to use Office 365 Groups.  Office 365 Groups offer a variety of features not found in the traditional security/distribution group or shared mailbox.  Take a look at this link for a comparison chart that documents the features.  One important thing to note is Office 365 Groups can only be created in Azure Active Directory (AAD).  You cannot synchronize an on-premises Active Directory Domain Services security or distribution group to AAD and convert it to an Office 365 Group.  This means you can’t leverage an existing solution for enforcing naming conventions unless that solution has a connector into Azure AD.  Given the features Office 365 Groups provide and that they are the construct used by Microsoft Teams, you may make the decision to allow your users to create Office 365 Groups on the fly in order to let them take full advantage of the collaboration tools available in Office 365.  To quote Peter Venkman, “Human sacrifice, dogs and cats living together… mass hysteria!”.

Calm down, my friend.  Microsoft has a solution in the pipeline that will solve your Office 365 Groups naming convention woes.  In my next post I’ll demonstrate the feature and walk through how to test it while it is in preview.

pfsense + squid + Kerberos

Hi everyone,

I hope all of you had an enjoyable holiday. I spent my week off from work spending time with the family and catching up on some reading. One area I decided to spend some time reading up on is Microsoft’s Cloud App Security. For those unfamiliar with the solution, it’s Microsoft’s entry into the cloud access security broker (CASB) (or Cloud Security Gateway (CSG) if you’re a Forrester reader) market. If you haven’t heard of “CASB” or “CSG”, don’t worry too much. While the terminology is new, many of the collection of technologies encompassing a typical CASB or CSG are not new, simply used together in new and creative ways. For a quick intro, take a read through this article and follow up with some Forrester and Gartner research for a deeper dive.

Since I haven’t had much experience with a product specifically marketing itself as a CASB, I thought it would be a great opportunity to play around with Microsoft’s solution. A good first step for any organization to grasp the value of a CASB is to explore what’s happening within the organization outside the view of IT, or as the marketers love to call it, shadow IT. The ease of consuming cloud technologies such as software as a service (SaaS) applications has been both a blessing and a curse. The new technology has been wonderful in cutting IT costs, bringing the technology closer to the business, providing for shorter time to market for new features, and providing simpler integration paths for different applications and services. On the negative side, the ease of use of these solutions means an average employee is using far more of them than is officially sanctioned by IT. This can lead to issues like loss of critical data, non-compliance with policy, or multiple business groups within an organization subscribing to the same service resulting in redundant licensing costs.

Wouldn’t it be great to get visibility into that shadow IT? Since a majority of cloud solutions work over standard HTTP(S), the services are readily accessible to the user without the user having to request additional ports be opened on the firewall. This means it’s much more challenging to track who is using what and what they’re doing with those services. Many organizations attempt to control these types of solutions with a traditional forward web proxy. However, too much focus is put on blocking the “bad” sites instead of analyzing the overall patterns of usage of services. Microsoft’s Azure AD Cloud Discovery is a feature of Azure Active Directory that can be used in conjunction with Cloud App Security’s catalog of apps to provide visibility into what’s being accessed as well as information about the risks those services present to the organization.

To simulate a typical medium to large organization and get some good testing done with Cloud App Discovery, I’m going to add a forward web proxy to my home lab. As I’ve mentioned in previous blog entries I have a small form factor computer running pfsense which I use as my lab networking security appliance. Out of the box, it supports a base install of Squid which can be added and configured to act as a forward web proxy with minimal effort. It gets a bit more challenging when you want to add authentication to the proxy because the built-in options for the pfsense implementation are limited to local, LDAP, and RADIUS authentication. I want authentication so I can identify users connecting to the proxy and associate the web connections with specific users but I want to use Kerberos so I get that seamless single sign on experience.

Like many open source products, the documentation on how to setup Squid running on pfsense and using Kerberos authentication is pretty terrible. Searching the all-powerful Google presents lots of forum posts with people asking how to do it, pieces of answers that don’t make much sense, and some Wikis on how to configure Squid to use Kerberos on a standard server. Given the lack of good documentation, I thought it would be fun to work my way through it and compile a walkthrough. I’m issuing the standard disclaimer that this is intended for lab purposes only. If you’re trying to deploy pfsense and Squid in a production environment, do more reading and spend time doing it safely and securely.

I won’t be covering the basic setup of pfsense as there are plenty of guides out there and the process is simple for anyone with any experience in the network appliance realm. For this demonstration I’ll be running a box with pfsense 2.4.2 installed.

On to the walkthrough!

The first step in the process is to add the Squid package through the pfsense package manager UI.

squid1

On the Package Manager screen, select the Available Packages section and install the Squid package.  After the installation is complete, you’ll see Squid shown in the Installed Packages section.

squid2

Notice the package installed is a branch of the 3.5 release while the latest release available directly via Squid is 4.0.  It’s always fun to have the latest and greatest, but pfsense is an all-in-one solution so it comes with some sacrifices.  Let’s get some of the basic configuration settings out of the way.  Go to the Services menu, select the Squid Proxy Server menu item, and go to the General section.  First up, choose the interface you want Squid to be available on and specify a port for it to listen on.

squid3.png

Now check off the Allow Users on Interface option unless you have a reason to limit the proxy to certain subnets attached to the interface.  Additionally, I’d recommend checking the Resolve DNS IPv4 First option.  I banged my head against the wall with a ton of issues with Squid when I turned on authentication and this option wasn’t set.  You can thank me for saving you hours of Googling and trying other options.

squid4

Setup basic logging with the settings below.

squid5.png

Basic settings are complete and it’s a good time to test the proxy from a client machine to verify its base functionality.  You can do this by directing one of your client machines to use the proxy and attempting to access a website.

After you have verified functionality you’ll need to add support for SSH to the pfsense box since we’ll need to make some changes via the command shell.  For that you’ll want to navigate to the System menu, select the Advanced menu item, and go to the Admin Access section.  Scroll down from there to the Secure Shell section, click the checkbox for Enable Secure Shell, and set the SSH port to the port of your choice.  I chose 50,000.

squid6

The Secure Shell Server is active, but the firewall blocks access to it across all interfaces by default.  You now need to create the appropriate firewall rule to allow access from devices behind the interface you wish to use to SSH to the box.  For me this is the interface that my lab devices connect to.  For this you’ll select Firewall from the top menu, select the Rules menu item, and select the appropriate interface from the menu items.  Once there, click the Add button to create a firewall rule allowing devices within the subnet to hit the router interface over the port you configured earlier as seen below.  The SSH listener will now be running and will be accessible from the designated interface.

squid7.png

Now you must configure DNS such that the pfsense box can resolve the Active Directory DNS namespace to perform Kerberos related activities.  You can go the easy route and make the Active Directory domain controller the primary DNS server for pfsense via the GUI.  However, I use pfsense as the primary DNS resolver for the lab environment and forward queries to Google’s DNS servers at 8.8.8.8.

In order to continue using my preferred configuration, I needed to take a few additional steps.  First I needed to add a Domain Override to the DNS Resolver service on pfsense to ensure it doesn’t pass the query along to the external DNS server.  I did this by selecting Services from the main menu, selecting the DNS Resolver menu item, and going to the General Settings section.  I then scrolled down to the Domain Overrides section and added the appropriate override for my Active Directory DNS namespace as seen below.  Take note that you can’t go modifying the resolv.conf as you would in a normal Linux distro since pfsense will scrub any changes you make to the file each time it restarts its services.  Get used to this behavior; we’re going to see it a number of times through this blog entry and we’ll have to learn to work around that limitation (feature?).

squid8.png

Next up you’ll want to verify name resolution is working as intended, which can be tested by running a query from the pfsense box.  Go to Diagnostics on the main menu, select the DNS Lookup item, and type in the hostname representing the Active Directory DNS namespace.  It should resolve to the entries representing domain controllers in your Active Directory domain.  Successful testing makes the DNS configuration complete.

squid9.png

On to the guts of the configuration.  Pfsense comes with the krb5 package installed so all you need to do is configure it.  For that you are going to need to access the command shell.  Open up your favorite SSH client and connect to the pfsense box as an administrative user.  Upon successful login you’ll see the menu below.

squid10.png

You want to hit the command shell, so choose option 8 and you will be dropped into the shell.

squid11.png

The first step is to configure the krb5 package to integrate with the Active Directory domain.  For that you’ll need to create a krb5.conf file.  Create a new krb5.conf file in the /etc/ directory and populate it with the appropriate information.  I’ve included the content of my krb5.conf file as an example.

[libdefaults]
default_realm = JOURNEYOFTHEGEEK.LOCAL
dns_lookup_realm = false
dns_lookup_kdc = true
default_tgs_enctypes = aes128-cts-hmac-sha1-96
default_tkt_enctypes = aes128-cts-hmac-sha1-96
permitted_enctypes = aes128-cts-hmac-sha1-96

[realms]
JOURNEYOFTHEGEEK.LOCAL = {
kdc = jog-dc.journeyofthegeek.local
}

[domain_realm]
.journeyofthegeek.local = JOURNEYOFTHEGEEK.LOCAL
journeyofthegeek.local = JOURNEYOFTHEGEEK.LOCAL

[logging]
kdc = FILE:/var/log/kdc.log
default = FILE:/var/log/krb5lib.log

Check out the MIT documentation on the options available to you in the krb5.conf. I made the choice to limit the encryption algorithms to AES128 for simplicity purposes, feel free to use something else if you wish. Once the settings are populated the file can be saved.

It’s time to test the Kerberos configuration.  You do that by running kinit and authenticating as a valid user in the Active Directory domain.  If the configuration is correct, klist will display a valid Kerberos ticket granting ticket (TGT) for the user.
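
The test amounts to two commands from the pfsense shell (the username here is hypothetical):

kinit testuser@JOURNEYOFTHEGEEK.LOCAL
klist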

squid12.png

The system is now configured to interact with the Active Directory domain using Kerberos.  You now need to create a security principal in Active Directory to represent the Squid service.  Create a new user in Active Directory and name it whatever you wish; I used svc_squid for this lab.  Since I chose to use AES128, I had to select the account control option on the user account in Active Directory Users and Computers (ADUC) indicating the account supports AES128.  You can skip that step if you chose not to force an encryption level.  Now a service principal name (SPN) is needed to identify the service when a user attempts to authenticate to it.  For that you’ll need to open an elevated command prompt and use the setspn command.
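
The SPN registration looks something like this (the proxy’s FQDN is an assumption for illustration):

setspn -S HTTP/pfsense.journeyofthegeek.local svc_squid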

squid13.png

Wonderful, you have a security principal created and it includes the appropriate identifier.  You now need to create a keytab that the service can use to authenticate to Active Directory.  In comes ktpass.  From the same elevated command prompt run the command as seen below.
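
My command looked roughly like the following (an approximation of the screenshot; the principal name and output path are assumptions):

ktpass /princ HTTP/pfsense.journeyofthegeek.local@JOURNEYOFTHEGEEK.LOCAL /mapuser svc_squid@JOURNEYOFTHEGEEK.LOCAL /crypto AES128-SHA1 /ptype KRB5_NT_PRINCIPAL /pass * /out squid.keytab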

squid14.png

Pay attention to case sensitivity because it matters when we’re talking MIT Kerberos, which is the Kerberos implementation pfsense uses.  The link I included above will explain the options.  I set the crypto option to AES128 to ensure the keytab aligns with the other options I’ve configured around encryption.

Next up you need to transfer the keytab to the pfsense box.  I used WinSCP to copy the keytab to the /usr/local/etc/squid/ directory on the pfsense box.   The keytab is on the pfsense box, but you need to tell Squid where it is.  In a typical Squid implementation you’d define a variable in the Squid startup script which would be consumed by the authentication helper.  However, this is another case where pfsense will overwrite any changes you make to the startup script.

In addition to being unable to modify the startup script, pfsense also overwrites any changes you make directly to the squid.conf file.  To get around this you’ll need to add the configuration options to the config file through the pfsense GUI.  From within the GUI go to the Services section of the main menu, select the Squid Proxy Server menu item, go to the General section, scroll down and hit the Advanced Options button, and scroll to the Advanced Features section.  In the Custom Options (Before Auth) field, you’ll want to add the lines below.

squid15.png
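
For reference, the custom options described below look roughly like this (a sketch; the helper path and keytab name are assumptions):

auth_param negotiate program /usr/local/libexec/squid/negotiate_kerberos_auth -k /usr/local/etc/squid/squid.keytab -d -t none
auth_param negotiate children 1000
auth_param negotiate keep_alive on
acl auth proxy_auth REQUIRED
http_access deny !auth
http_access allow auth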

The first four lines I’ve added here are called directives in Squid.  The first directive instructs Squid to use the negotiate_kerberos_auth authentication helper.  The options I’ve added set a few different configuration options for the helper.  The -k option allows me to direct Squid to the keytab file I added to the server, which I couldn’t do with a variable in the startup script.  The -d option writes debug information for the helper to the Squid cache.log, and the -t option shuts off the replay cache for MIT Kerberos.   The second directive sets the number of child authentication processes to 1,000.  You’ll want to do some research on this directive if you’re moving this into a production environment; I simply chose 1,000 so I wouldn’t run any risk of getting my authentication requests queued for the purposes of this lab.  The third directive is set to on by default and should only be set to off if you run into issues with PUT/POST requests.

The fourth directive starts enforcing access controls within Squid.  Access controls within Squid are a bit weird.  The Squid wiki does a decent job of explaining how they work.  The short of what I’ve done in the fourth directive is create an access list called auth which will contain all users who successfully authenticate against Squid.  The next line denies http_access if the user doesn’t belong to the auth access list (blocking non-authenticated users).  The final line allows http_access for users who are in the auth list (allowing authenticated users).

With that last bit of configuration, you’ve gotten pfsense and Squid configured for Kerberos authentication.  I’ll quickly demonstrate what a successful implementation looks like.  For that I’m going to bounce over to a Windows 10 domain-joined machine with Chrome installed and configured to use the proxy server.  Navigating to Amazon displays the webpage with no authentication prompts, and running a klist from a command prompt shows I have a Kerberos ticket for the proxy.

squid16.png

Going back into the pfsense GUI, going to the Services menu, selecting the Squid Proxy Server menu item, and navigating to the Real Time section shows the access log displaying Rick Sanchez accessing Amazon and the successful consumption of the Kerberos ticket in the Cache Log section.

squid17.png

In a future post I’ll dig a bit deeper into Azure AD Cloud Discovery and setup automatic forwarding of logs using the Microsoft collector.

Have a happy New Year!

Integrating Azure AD and AWS – Part 4

We’ve reached the end of the road for my series on integrating Azure Active Directory (Azure AD) and Amazon Web Services (AWS) for single sign-on and role management. In part 1 I walked through the many reasons the integration is worth looking at if your organization is consuming both clouds. In part 2 I described the lab I used for this series, described the different ways application identities (service accounts for those of you in the Microsoft space) are handled in Active Directory Domain Services versus Azure AD, and walked through what a typical application identity looks like in Azure AD. In part 3 I walked through a portion of the configuration steps, did a deep dive into the Azure AD and AWS federation metadata, examined a SAML assertion, and configured the AWS end of the federated trust through the AWS Management Console. This included creation of an identity provider representing the Azure AD tenant and creation of a new IAM role for users within the Azure AD tenant to assert.

In this final post I’ll cover the remainder of the configuration, describe the “provisioning” capabilities of Azure AD in this integration, and point out some of the issues with the recommended steps in the Microsoft tutorial.

Before I continue with the configuration, let me cover what I’ve done so far.

  • Part 2
    • Added the AWS application from the Azure AD Application Gallery through the Azure Portal.
  • Part 3
    • Assigned an Azure Active Directory user to the application through the Azure Portal.
    • Configured the Azure AD to pass the Role and RoleSessionName claims through the Azure Portal.
    • Created the SAML identity provider representing Azure AD in the AWS Management Console.
    • Created an AWS IAM Role and associated it with the identity provider representing Azure AD in the AWS Management Console.

At this point JoG users can assert their identity to their heart’s content, but we don’t have a listing of the AWS IAM roles stored in Azure AD for our users to assert.  So how do we assert a role from Azure AD if the listing of the roles exists in AWS?  The wonderful concept of application programming interfaces (APIs) swoops in and saves the day.  Don’t get me wrong, if you hate yourself you can certainly provision them manually by modifying the application manifest file every time a new role is created or deleted.  However, there is an easier route of having Azure AD pick up those roles directly from AWS on an automated schedule.  How does this work?  Well, nothing works better than demonstrating how the roles can be queried from the AWS API.

The AWS SDK for .NET makes querying the API incredibly easy.  We’re not stuck worrying about assembling the request and signing it.  As you can see below the script is six lines of code in PowerShell.

Script.png
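
A sketch of what that script might look like using the AWS Tools for PowerShell (the access keys are placeholders):

# List the IAM roles defined in the AWS account
Import-Module AWSPowerShell
$roles = Get-IAMRoles -AccessKey 'AKIA...' -SecretKey '<secret-access-key>'
$roles | Select-Object RoleName, Arn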

The result is a listing of the roles configured in AWS which includes the AzureADEC2Admins role I created earlier.  This example demonstrates the power a robust API brings to the table when integrating cloud services.

2

When Microsoft speaks of provisioning in regards to the AWS integration, they are talking about provisioning the roles defined in AWS to the application manifest file in Azure AD.  This provides us with the ability to assign the roles from within the Azure Portal as we’ll see later.  This differs from many of the Azure AD integrations I’ve observed in the past where provisioning creates a record for the user in the software as a service (SaaS) offering.  Below is a simple diagram of the provisioning process.

3

To support provisioning we need to navigate to the AWS Management Console, open the Services menu, and select IAM.  We then select Users and hit the Add User button.  I named the user AzureAD, gave it the programmatic access type, and attached the IAMReadOnlyAccess policy.  AWS then presented me with the access key ID and secret access key I’ll need to provide to Azure AD.  Yes, we are going to follow security best practices and provide the account with the minimum rights and permissions it needs to provide the functionality.  The Microsoft tutorial instructs you to generate the credentials under the context of the AWS administrator, effectively giving the application full rights to the AWS account.  No Microsoft, just no.

I next bounce back to the Azure portal and to the AWS application configuration.  From here I select the Provisioning option, switch the drop-down box to Automatic, and plug the access key ID into the clientsecret field and the secret access key into the secret token field.  A quick test connection shows success and I then save the configuration.  Note that you must first save the configuration before you can turn on the synchronization.

4

After the screen refreshes I move down to the Settings section and turn the Provisioning Status to On and set the Scope to Sync only assigned users and groups (kind of a moot point for this, but oh well).  I then Save the configuration once again and give it about 10 minutes to pull down the roles.

I then navigate back to the Users and Groups section and edit the Rick Sanchez assignment.  Hitting the role option now shows me the AzureADEC2Admins role I configured in AWS IAM.


5.png

Let’s take another look at the service principal representing the AWS application in PowerShell.  Using the Azure AD PowerShell cmdlets I referenced in entry 2, we connect to Azure AD and run the cmdlet Get-AzureADServicePrincipal, which shows the manifest has been updated to include the newly synchronized application role.
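
A quick sketch of the check (searching by display name is an assumption):

# Inspect the AppRoles on the AWS service principal after the sync
$sp = Get-AzureADServicePrincipal -SearchString 'Amazon Web Services'
$sp.AppRoles | Select-Object DisplayName, Value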

6

We’ve configured the SAML trust on both ends, defined the necessary attributes, setup synchronization, and assigned Rick Sanchez an IAM role. In a moment we’ll demonstrate all of the pieces coming together.

Before I wrap it up, I want to quickly mention a few issues I ran into with this integration that seemed to resolve themselves without any intervention.

  1. Up to a few nights ago I was unable to get the Provisioning piece working.  I’m not putting it past user error (this is me we’re talking about) but I tried numerous times and failed but was successful a few nights ago.  I also noticed from some recent comments in the Microsoft tutorial people complaining of similar errors.  Maybe something broke for a bit?
  2. The value of the audience attribute in the audienceRestriction section of the SAML assertion generated by Azure AD doesn’t match the identifier within the AWS federation metadata.  Azure AD inserts some garbage-looking audience value by default, which was causing the assertions to be rejected by AWS.  After setting the identifier to the value of urn:amazon:webservices as referenced in the AWS federation metadata, the assertion was consumed without issue.  I saw similar complaints in the comments on the Microsoft tutorial, so I’m fairly confident this wasn’t just my issue.  The story gets a bit stranger.  I wanted to demonstrate the behavior for this series by removing the identifier I had previously added.  Oddly enough the assertion was consumed without issue by AWS.  I verified using Fiddler that the audience value was populated with that garbage entry.  Either way, I would err on the side of caution and recommend populating the identifier with the entry referenced in the AWS metadata as seen below.

7.png

The last thing I want to point out is the Microsoft tutorial states that you are required to create the users in AWS prior to asserting their identity.  This is inaccurate, as AWS does not require a user record to be pre-created.  This is different from a majority (if not all) of the SaaS integrations I’ve done in the past, so this surprised me as well.  Either way, it’s not required, which is a nice benefit if you’ve ever had to deal with the challenge of managing the identity lifecycle across cloud offerings.

Let’s wrap up this series by having Rick Sanchez log into the AWS Management Console and shut down an EC2 instance.  Here I have logged into the Windows 10 machine named CLIENT running in Azure.  We navigate to https://myapps.microsoft.com and log into Azure AD as Rick Sanchez.  We then hit the Amazon Web Services icon and are seamlessly logged into the AWS Management Console.

8.png

Examining the assertion in Fiddler shows  the Role and RoleSessionName claims in the assertion.

9.png

Navigating to the EC2 Dashboard displays the instance I prepared earlier using my primary account.  Rick has full rights over administration of the instance for activities such as starting and stopping the instance.  After successfully terminating the instance, I log into the AWS Management Console as my primary AWS account, go to CloudTrail, and see the log entries recording the activities of Rick Sanchez.

10.png

With that let’s cover some key pieces of information to draw from the series.

  1. The Azure AD and AWS integration differs from most SaaS integrations I’ve done when it comes to user provisioning.  Most of the time a user record must exist prior to the user authenticating.  There are a growing number of SaaS providers provisioning upon successful authentication as provisioning challenges grow with further consumption of cloud services, but they are still few and far between.  AWS does a solid job of eliminating the pain of pre-provisioning users.
  2. The concept of associating roles with specific identity providers is really neat on Amazon’s part.  It allows the customer to manage permissions and associate those permissions with roles in AWS, but delegate the right on a per identity provider basis to assert a specific set of roles.
  3. Microsoft’s definition of provisioning in this integration is pulling a listing of roles from AWS and making them configurable in the Azure Portal.
  4. The AWS API is solid and quite easy to leverage when using the AWS SDKs. I would like to see AWS switch from what seems to be a proprietary method of application access to OAuth to become more aligned with the rest of the industry.
  5. Don’t trust vendors to make everything point and click. Take the time to understand what’s going on in the background. In a SAML integration such as this, a quick review of the metadata can save you a lot of headaches when troubleshooting issues.

I learned a ton about AWS over these past few weeks and also got some good deep dive time into Azure AD which I haven’t had time for in a while.  Hopefully you found this series valuable and learned a thing or two yourself.

In my next series I plan on writing a simple application to consume the Cognito service offered by AWS.  For those of you more familiar with the Microsoft side of the fence, it’s similar to Azure AD B2C but with some unique features Microsoft hasn’t put in place yet, making it a great option to solve those B2C identity woes.

Thanks and have a wonderful holiday!

Integrating Azure AD and AWS – Part 3

Welcome!  This entry continues my series in the integration of Azure AD and AWS.  In my first entry I covered what the advantages of the integration are.  In the second entry I walked through my lab configuration and went over what happens behind the scenes when an application is added to Azure AD from the application gallery.  In this post I’m going to walk through some of the configuration we need to do in both Azure AD and AWS.  I’ll also be breaking open the Azure AD and AWS metadata and examining the default assertion sent by Microsoft out of the box.

In my last entry I  added the AWS application to my Azure AD tenant from the Azure AD Application Gallery.  The application is now shown as added in the All Applications view of the Azure Active Directory blade for my tenant.

pic1.png

After selecting AWS from the listing of applications I’m presented with a variety of configuration options.  Starting with Properties we’re provided with some general information and configuration options.  We need to ensure that the application is enabled for users to sign in and that it’s visible to users so we can select it from the access panel later on.  Notice also that I’m configuring the application to require the user be assigned to the application.

pic2

On the Users and groups page I’ve assigned Rick Sanchez to the application to allow the account access and display it on the access panel.

pic3

After waiting about 10 minutes (there is a delay in the time it takes for the application to appear in the access panel) I log into the Access Panel as Rick Sanchez, and we can see that the AWS app has been added.

pic4

Back on the properties page of the AWS application, my next stop is the Single sign-on page. Here I drop down the Single Sign-on Mode box and select the SAML-based Sign-on option. Changing the mode to SAML-based Sign-on exposes a ton of options. The first option that caught my eye was the Amazon Web Services (AWS) Domain and URLs section. Take notice of the note that says Amazon Web Services (AWS) is pre-integrated with Azure AD and requires no mandatory URL settings. Yeah, not exactly true, as we’ll see as we progress through this series.

pic5.png

Further down we see the section that allows us to configure the unique user identifier and additional attributes.   By default Microsoft includes the name, givenName, surName, and emailAddress claims.  I’ll need to make some changes there to pass the claims Amazon requires, but let’s hold off on that for now.

pic6.png

Next up a copy of the Azure AD metadata (IdP metadata) is provided for download.  Additionally some advanced options are available which provide the capability to sign the SAML response, assertion, or both as well as switching the hash algorithm between SHA1 and SHA256.

pic7.png

Now like any nerd, I want to poke around the IdP metadata and see what the certificate Azure AD is using to sign looks like.  Opening up the metadata in a web browser parses the XML and makes the format look pretty.  From there I grab the contents of the X509Certificate tag (the base-64 encoded public-key certificate), dump it to Notepad, and rename it with a file extension of cer.  Lo and behold, what do we see but a self-signed certificate.  This is a case where I can see the logic that the operational overhead is far greater than the potential security risk.  I mean really, does anyone want to deal with the challenge of hundreds of thousands of customers not understanding the basics of public key infrastructure and worrying about revocation, trust chains, and the like?  You get a pass Microsoft… This time anyway.

pic8.png

Before I proceed with the next step in the configuration, let’s take a look at what the assertion looks like without any of the necessary configuration.  For this I’ll use Fiddler to act as a man-in-the-middle between the client and the web.  In session 6 of the screenshot below we see that the SAML response was returned to the web browser from Azure.

pic9.png

Next up we extract that information with the Text Wizard, base-64 decode it, copy it to Notepad, save it as an XML file, and open it with IE.  The attributes containing values of interest are as follows:

  • Destination – The destination is the service provider assertion consumer URI

pic10.png

  • NameID – This is the unique identifier used by the service provider to identify the user accessing the service

pic11.png

  • Recipient – The recipient attribute references the service the assertion is intended for.  OASIS security best practices for SAML require the service provider to verify this attribute matches the service provider assertion consumer URI

pic12.png

  • Audience – The audience attribute in the audienceRestriction section mitigates the threat of the assertion being stolen and used to impersonate a user.  OASIS security best practices require the service provider to verify this when the assertion is received to ensure it recognizes the identifier.  The way this is accomplished is the value in the audience attribute is checked against the service provider’s EntityID attribute.

pic13.png

Additionally we have some interesting claims including tenantid, the objectidentifier of the user object in Azure AD, name, surname, givenname, displayname, identityprovider, and authnmethodsreferences.  I don’t think any of these need further explanation.

pic14.png

Let’s now take a look at the AWS (service provider in SAML terms) metadata.  The AWS metadata is available for download from here.  After it’s downloaded it can be opened with IE.

pic15.png

The fields of interest in this set of metadata are:

  • EntityID – The entityID is the unique identifier AWS will provide in its authentication requests.  Let’s note the value of urn:amazon:webservices for later as it will come in handy due to some issues with Microsoft’s default settings.

pic16

  • NameIDFormat – This tells me both transient and persistent are accepted.  I won’t go into details on Name ID format, you can review that for yourself in the Oasis standard.  Suffice to say the Name ID format required by the service provider can throw some wrenches into integrations when using a more basic security token service (STS) like AD FS.

pic17

  • AssertionConsumerService – This is where our browser will post back the SAML assertion after a successful authentication.  Note the URI in the location field.

pic18.png

  • RequestedAttributes – This provides us with a listing of all the attributes AWS will accept in an assertion.  Note that the only two required attributes are Role and RoleSessionName.

We’ve added the AWS application to Azure AD, granted a user access to the application, and started the SAML setup within Azure AD (the identity provider).  Let’s continue that setup by configuring which attributes Azure AD will include in the assertions delivered to AWS.  From review of the AWS metadata we know that we need to send claims of Role and RoleSessionName.  The Role will map to an AWS IAM role handling authorization of what we can do within AWS, and the RoleSessionName provides a unique identifier for the user asserting the entitlement.

Back in the Azure AD Portal I’m going to click the option to View and edit all other user attributes.  This exposes the attributes Microsoft sends by default.  These include givenName, surName, emailAddress, and name.  Since the AWS metadata only requires RoleSessionName and Role, I’m going to delete the other attributes.  No sense in exposing additional information that isn’t needed!

pic19.png

After the extra attributes are deleted I create the two required attributes as seen in the screenshot below.

pic20.png
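
For reference, the two claim definitions generally take this shape (values per the Microsoft tutorial; treat them as assumptions):

Name: Role
Namespace: https://aws.amazon.com/SAML/Attributes
Value: user.assignedroles

Name: RoleSessionName
Namespace: https://aws.amazon.com/SAML/Attributes
Value: user.userprincipalname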

I’m now going to bounce over to the AWS Management Console.  After logging in I navigate to the Services menu and choose IAM.

pic21.png

On the IAM menu I choose the Identity providers menu item and hit the Create Provider button.

pic22.png

On the next screen I’m required to configure the identity provider settings.  I choose SAML from the drop-down box, enter a provider name of MAAD, upload the IdP metadata I downloaded from Azure AD referenced earlier in the blog entry, and hit the Next Step button.

pic23.png

On the next page I verify the provider name and the type of identity provider and hit the Create button.  Once that is complete I see the new entry listed in identity providers list.  Easy right?

pic24.png

We have an identity provider, but it needs some IAM roles associated with it that my fictional users can assert.  For that I go to the Roles section and hit the Create Role button.

pic25.png

On the next screen I select the SAML button as the type of trusted entity since the role is going to be asserted via the SAML trust with Azure AD.  Here I select the MAAD provider and choose the option to allow the users to access both the AWS Management Console and the API and then hit the Next: Permissions button.

pic26.png

As I referenced in my first entry to this series, the role I’m going to create is going to be capable of managing all EC2 instances.  For that I choose the AmazonEC2FullAccess policy template and then hit the Next:Review button.

pic267.png

On the last screen I name the new role AzureADEC2Admins, write a short description, and hit the Create Role button.

pic28.png

The new role is created and can be seen associated to the identity provider representing the trust between AWS and Azure AD.

pic29.png

Let’s sum up what we did in this entry.  We examined the key settings Microsoft exposes for configuration with the AWS integration.  We examined the Azure AD (IdP) and AWS (SP) metadata to understand which settings are important to this integration and what those settings do.  We examined an assertion generated out of Azure AD prior to any of the necessary customization being completed to understand what a canned assertion looks like.  Finally, we completed a majority of the tasks on the AWS side to create the SAML trust and to create a role JoG users can assert.  Are your eyes bleeding yet?

In my last post in this series I’ll walk through the rest of the configuration needed on the Azure AD end.  This will include going over some of the mistakes the Microsoft tutorial makes, as well as covering Azure AD’s provisioning integration, what it means, and how we can effectively configure it.  Finally, we’ll put all the pieces of the puzzle together, assert our identity, and review logs at AWS to see what they look like when a federated user performs actions in AWS.

The journey continues in my fourth entry.

Integrating Azure AD and AWS – Part 2

Today I will continue the journey into the integration between Azure AD and Amazon Web Services.  In my first entry I covered the reasons why you’d want to integrate Azure AD with AWS and provided a high-level overview of how the solution works.  The remaining entries in this series will cover the steps involved in completing the integration including deep dives into the inner workings of the solution.

Let me start out by talking about the testing environment I’ll be using for this series.

lab.png

The environment includes three virtual machines (VMs) running on Windows Server 2016 Hyper-V on a server at my house.  All three VMs run Windows Server 2016, with one acting as a domain controller for the journeyofthegeek.local Active Directory (AD) forest, another running Active Directory Federation Services (AD FS) and Azure AD Connect (AADC), and the third running MS SQL Server and IIS.  The IIS instance hosts a .NET sample federated application published by Microsoft.

In Microsoft Azure I have a single Vnet configured for connectivity to my on-premises lab through a site-to-site IPSec virtual private network (VPN) I’ve setup with pfSense.  Within the Vnet exists a single VM running Windows 10 that is domain-joined to the journeyofthegeek.local AD domain.  The Azure AD tenant providing the identity backend for the Microsoft Azure subscription is synchronized with the journeyofthegeek.local AD domain using Azure AD Connect and is associated with the domain journeyofthegeek.com.  Authentication to the Azure AD tenant is federated using my instance of AD FS.  I’m not synchronizing passwords and am using an alternate login ID with the user principal name being synchronized to Azure AD being stored in the AD attribute msDS-CloudExtensionAttribute1.  The reason I’m still configured to use an alternate login ID was due to some testing I needed to do for a previous project.

I created a single test user in the journeyofthegeek.local Active Directory domain named Rick Sanchez with a user principal name (UPN) of rick.sanchez@journeyofthegeek.local and msDS-CloudExtensionAttribute1 of rick.sanchez@journeyofthegeek.com.  The only other attribute of note that the user has populated is the mail attribute, which has the value of rick.sanchez@journeyofthegeek.com.  The user is being synchronized to Azure AD via the Azure AD Connect instance.

In AWS I have a single elastic compute cloud (EC2) instance running Windows Server 2016 within a virtual private cloud (VPC).  I’ll be configuring Azure AD as an identity provider associated with the AWS account and will be associating an AWS IAM role named AzureADEC2Admins.  The role will grant full admin rights over the management of the EC2 instances associated to the account via the AmazonEC2FullAccess permissions policy.

Let’s begin shall we?

The first step I’ll be taking is to log into the Azure Portal as an account that is a member of the global admins and navigate to the Azure Active Directory blade.  From there I select the Enterprise Applications blade and hit the New Application link.

entapps.png

I then search the application gallery for AWS, select the AWS application, accept the default name, and hit the Add button.  Azure AD will proceed to add the application and will then jump to a quick start page.  So what exactly does it mean to add an application to Azure AD?  Good question; for that we’ll want to use the Azure AD cmdlets.  You can reference this link.

Before we jump into running cmdlets, let’s talk very briefly about the concept of application identities in AAD.  If you’ve managed Active Directory Domain Services (AD DS), you’re very familiar with the concept of service accounts.  When you needed an application (let’s call it a non-human to be more in-line with industry terminology) to access AD-integrated resources directly or on-behalf of a user you would create a security principal to represent the non-human.  That security principal could be a user object, managed service account object, or group managed service account object.  You would then grant that security principal rights and permissions over the resource or grant it the right to impersonate a user and access the resource on the user’s behalf.  The part we want to focus in on is the impersonation or delegation to access a resource on the behalf of a user.  In AD DS that delegation is accomplished through the Kerberos protocol.

When we shift over to AAD the same basic concepts still exist of creating a security principal to represent the application and granting that application direct or delegated access to a resource.  The difference is the protocol handling the access shifts from Kerberos to OAuth 2.0.  One thing many people become confused about is thinking that OAuth handles authentication.  It doesn’t.  It has nothing to do with authentication and everything to do with authorization, or more clearly, delegation.  When we add an application to AAD, a service principal object and sometimes an application object (in the AWS instance both are created) are created in the AAD tenant to represent the application.  I’m going to speak to the service principal object for the AWS integration, but you can read through this link for a good walkthrough on application and service principal objects and how they differ.

Now back to AWS.  We added the application and we now have a service principal object in our tenant representing AWS.  Here are a few of the attributes for the object pulled via PowerShell.
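
The attributes can be pulled with something like the following (the display-name search is an assumption):

# Dump the interesting attributes on the AWS service principal
Connect-AzureAD
Get-AzureADServicePrincipal -SearchString 'Amazon Web Services' |
    Format-List AppRoles, KeyCredentials, PasswordCredentials, Oauth2Permissions, PreferredTokenSigningKeyThumbprint, ReplyUrls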

awsservice.png

Review of the attributes for the object provides a few pieces of interesting information.   We’ll get to experiment more with what these mean when we start doing Fiddler captures, but let’s talk a bit about them now.  The AppRoles attribute provides a single default role of msiam_access.  Later on we’ll be adding additional roles that will map back to our AWS IAM roles.

Next up we have the KeyCredentials which contains two entries.  This attribute took me a while to work out.  In short, based upon the startDate, I think these two entries are referencing the self-signed certificate included in the IdP metadata that is created after the application is added to the directory.  I’ll cover the IdP metadata in the next entry.

The Oauth2Permissions are a bit funky for this use case since we’re not really allowing the application to access AWS on our behalf, but rather asking it to produce a SAML assertion asserting our identity.  Maybe the delegation can be thought of as delegating Azure AD the right to create logical security tokens representing our users that can be used to assert an identity to AWS.

The PasswordCredentials attribute contains a single entry which shares the same KeyId as the KeyCredentials.  As best I can figure from reading the documentation, this would normally contain entries for client keys when not using certificate authentication.  Given that it contains a single entry with the same KeyId as the KeyCredential used for signing, I can only guess it will contain an entry even when a certificate is used to authenticate the application.

The last attributes of interest are the PreferredTokenSigningKeyThumbprint which references the certificate within the IdP metadata and the replyURLs which is the assertion consumer URI for AWS.

So yeah, that’s what happens in those 2 or 3 seconds the AWS application is registered with Azure AD.  I found it interesting how the service principal object is used to represent trust between Azure AD and AWS and all the configuration information attached to the object after the application is simply added.  It’s nice to have some of the configuration work done for us out of the box, but there is much more to do.

In the next entry I’ll walk through the Quick Start for the AWS application configuration and explore the metadata Azure AD creates.

The journey continues in my third entry.

Deep dive into AD FS and MS WAP – User Certificate Authentication through a WAP

Hi everyone,

Today I continue my series of posts that cover a behind the scenes look at how Active Directory Federation Services (AD FS) and the Microsoft Web Application Proxy (WAP) interact.  In my first post I explained the business cases that would call for the usage of a WAP.  In my second post I did a deep dive into the WAP registration process (MS refers to this as the trust establishment between AD FS and the WAP).  In this post I decided to cover how user certificate authentication is achieved when the AD FS server is placed behind the WAP.

AD FS offers a few different options to authenticate users to the service, including Integrated Windows Authentication (IWA), forms-based authentication, and certificate authentication.  Readers who work in environments with sensitive data, where assurance of a user's identity is important, should be familiar with certificate authentication in the Microsoft world.  If you're unfamiliar with it, I recommend you take a read through this Microsoft article.

With the recent release of the National Institute of Standards and Technology (NIST) Digital Identity Guidelines (SP 800-63), which rework the authenticator assurance levels (AAL) and relegate passwords to AAL1 only, organizations will be looking for other authenticator options.  Given the maturity of authenticators that make use of certificates, such as the traditional smart card, it's likely many organizations will look at how their existing equipment and infrastructure can be further utilized.  All the more important, then, that we understand how AD FS certificate authentication works.

I’ll be using the lab I described in my first post.  I made the following modifications/additions to the lab:

  • Configured the Active Directory Certificate Services (AD CS) certificate authority (CA) to include a certificate revocation list (CRL) distribution point (CDP).  The CRLs are served up via an IIS instance with the address crl.journeyofthegeek.com.  This is the only CDP listed in the certificates.  Certificates created during my original lab setup that are installed within the infrastructure do not include a CDP.
  • Added a non-domain-joined Windows 10 computer which will be used as the endpoint the test user accesses the federation service from.

Tool-wise I used ProcMon, Fiddler, API Monitor, and Wireshark.

So what did I discover?

Prior to doing any user interaction, I set up the tools I would be using moving forward.  On the WAP I started ProcMon as an administrator and configured my filters to capture only TCP Send and TCP Receive operations.  I also set up Wireshark using a filter of ip.addr==192.168.100.10 && tcp.port==80.  The IP address is that of the web server hosting my CRLs.  This would ensure I'd see the name of the process making the connection to the CDP as well as the conversation between the two nodes.

[Screenshot: ProcMon and Wireshark capture configuration on the WAP]

** Note that the machine will cache the CRLs after they are successfully downloaded from the CDP and will not make any further calls to the CDP until the CRLs expire.  To get around this behavior while testing, I ran the command certutil -setreg chain\ChainCacheResyncFiletime @now as outlined in this article.  This forces the machine to pull the CRLs from the CDP again regardless of whether or not they have expired.  I ran the command as the LOCAL SYSTEM security principal using psexec.
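
For reference, here's roughly how that looks when run through Sysinternals PsExec (a sketch that assumes psexec.exe is in your path):

# Run the CRL cache flush as LOCAL SYSTEM (-s launches the process
# under the SYSTEM account).
psexec -s cmd /c "certutil -setreg chain\ChainCacheResyncFiletime @now"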

The final step was to start Fiddler as the NETWORK SERVICE security principal using the command psexec -i -u "NT AUTHORITY\Network Service" "C:\Program Files (x86)\Fiddler2\Fiddler.exe".  Remember that Fiddler needs the public key certificate in the appropriate file location as I outlined in my last post.  Recall that the Web Application Proxy Service and the Active Directory Federation Service running on the WAP both run as that security principal.

Once all the tools were in place I logged into the non-domain-joined Windows 10 box, opened up Microsoft Edge, browsed to the Office 365 login page, and popped the username of my test user into the username field.

[Screenshot: the Office 365 login page with the test user's username entered]

After home realm discovery occurred within Azure AD, I received the forms-based login page of my AD FS instance.


[Screenshot: the AD FS forms-based login page]

Let’s take a look at what’s happened on the WAP so far.

In the initial HTTP CONNECT session the WAP makes to the AD FS farm, we see the TLS handshake occur, beginning with the ClientHello; it's during this handshake that the WAP uses its client certificate to authenticate itself to the AD FS server, as described in my last post.

[Screenshot: the initial HTTP CONNECT and TLS handshake between the WAP and the AD FS server]

Once the secure session is established, the WAP passes the HTTP GET request to the AD FS server.  It adds a number of headers to the request which AD FS consumes to identify that the request is coming through the WAP.  This information is used for a number of AD FS features, such as enforcing additional authentication policies for extranet access.

[Screenshot: the HTTP GET request with the headers added by the WAP]

The WAP also passes a number of query strings, a few of which are interesting.  The first is the client-request-id, a unique identifier for the session that AD FS uses to correlate event log errors with the session.  The username is obvious and shows the user principal name that was entered in the username field on the O365 login page.  The wa query string shows a value of wsignin1.0, indicating the usage of WS-Federation.  The wtrealm indicates the relying party identifier of the application, in this case Azure AD.

[Screenshot: the query strings passed by the WAP]
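
Pieced together, the forwarded request looks roughly like the sketch below.  The values are illustrative rather than copied from my capture; urn:federation:MicrosoftOnline is the well-known relying party identifier Azure AD uses, and the UPN and GUID are made up.

GET /adfs/ls/?client-request-id=5c6c1db2-0000-0000-0000-000000000000
    &username=testuser%40journeyofthegeek.com
    &wa=wsignin1.0
    &wtrealm=urn%3Afederation%3AMicrosoftOnline
    &wctx=LoginOptions%3D3%26estsredirect%3D... HTTP/1.1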

The wctx query string is quite interesting and needs to be parsed a bit on its own.  Breaking down its value we come across three unique parameters.

LoginOptions=3 indicates that the user has not selected the "Keep me signed in" option.  If the user had selected that checkbox, a value of 1 would have been passed and AD FS would create a persistent cookie that exists even after the browser closes.  This option is sometimes preferable for customers when opening documents from SharePoint Online so the user does not have to authenticate over and over.

The estsredirect contains the encoded and signed authentication request from O365.  I stared at API Monitor for a few hours, going API call by API call, trying to identify what this looks like once it's decoded, but was unsuccessful.  If you know how to decode it, I'd love to know.  I'm very curious as to its contents.

The WAP next makes another HTTP GET to the AD FS server, this time including the additional query string pullStatus, which is set equal to 0.  I'm clueless as to the function of this; I couldn't find anything documented.  The only other thing that changes is the referer.

My best guess on the above two sessions is that the first is where AD FS performs home realm discovery and maybe some processing to determine whether there are any special configurations for the WAP, such as limited or expanded authentication options (device authN, cert authN only).  The second session is simply the AD FS server presenting the authentication methods configured for extranet users.

The user then chooses the "Sign in with an X.509 certificate" option (I'm not using SNI to host both forms and cert authN on the same port), and the WAP performs another HTTP CONNECT to port 49443, which is the certificate authentication endpoint on the AD FS server.  It again authenticates to the AD FS server with its client certificate prior to establishing the secure tunnel.
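
If you want to confirm which port your own farm uses for this, it's exposed on the farm properties; on my AD FS version it surfaces as the TlsClientPort property, with 49443 being the default.

# Run on the AD FS server: show the standard HTTPS port and the TLS
# client (certificate authentication) port for the farm.
Get-AdfsProperties | Select-Object HttpsPort, TlsClientPort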

In the third session we see an HTTP POST to the AD FS server with the same query parameters as our previous request, but this time providing a JSON object in the body with the key/value pair AuthMethod=CertificateAuthentication.

[Screenshot: the HTTP POST with AuthMethod=CertificateAuthentication in the body]

The next session is another HTTP POST with the same JSON object content plus the key/value pairs AuthMethod=CertificateAuthentication and RetrieveCertificate=1 in the body.  The AD FS server responds with a 307 Temporary Redirect to the /adfs/backendproxytls/ endpoint on the AD FS server.
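
Reconstructed from the captures, the two bodies look roughly like this (a sketch based only on the key/value pairs described above; the exact serialization in your own capture may differ):

{"AuthMethod":"CertificateAuthentication"}
{"AuthMethod":"CertificateAuthentication","RetrieveCertificate":"1"}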

Prior to the redirect completing successfully, we see the calls to the CDP endpoint for the full and delta CRLs.

[Screenshots: calls to the CDP for the full and delta CRLs]

I was curious as to which process was pulling the CRLs and identified from the ProcMon capture that it was LSASS.EXE.

[Screenshot: ProcMon capture showing LSASS.EXE connecting to the CDP]

At the /adfs/backendproxytls/ endpoint the WAP performs another HTTP POST, this time posting a JSON object with a number of key/value combinations.

[Screenshot: the JSON object posted to the /adfs/backendproxytls/ endpoint]

The interesting keys in the JSON object are a nested JSON object for Headers, which contains all the WAP headers I covered earlier; a QueryString object, which contains all the query strings I covered earlier; and SerializedClientCertificate, which contains the certificate the user provided after selecting certificate authentication.  The AD FS server then sends back a cookie to the WAP.  This cookie represents the user's authentication to the AD FS server, as detailed in this link.
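
As a rough skeleton (my reconstruction, not a verbatim capture), the posted object looks something like this:

{
  "Headers": { ...the WAP headers covered earlier... },
  "QueryString": { ...the query strings covered earlier... },
  "SerializedClientCertificate": "<base64-encoded user certificate>"
}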

[Screenshot: the cookie returned by the AD FS server]

The WAP then performs a final HTTP GET back at the /adfs/ls/ endpoint, including the previously described headers and query strings as well as the cookie it just received.  The AD FS server responds by providing the assertion requested by Microsoft along with the MSISAuthenticated, MSISSignOut, and MSISLoopDetectionCookie cookies, which are described in the link above.

What did we learn?

  1. The certificate is checked at both the WAP and the AD FS server to ensure it is valid and issued from a trusted certificate authority.  Remember to verify you trust the certificate chain of any user certificates on both the AD FS servers and the WAPs.
  2. CRL revocation checking is enabled by default and is performed on both the AD FS server and the WAP.  Remember to verify the locations in your CDP are reachable from both devices (see the sketch after this list).
  3. The AD FS servers use the LsaLogonUser function in Secur32.dll to perform standard certificate authentication against Active Directory Domain Services.  I didn't include it here, but I captured this by running API Monitor on the AD FS server.
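
A quick way to test that reachability is certutil's URL fetch verification, run from both the WAP and the AD FS server.  This is a sketch; user.cer is a hypothetical export of any certificate issued by your CA.

# Fetch every CDP/AIA URL embedded in the certificate and report
# whether each CRL could be retrieved.
certutil -verify -urlfetch user.cer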

In short, if you're going to use device authentication or user certificate authentication, make sure you have your PKI components in order.

See you next post!