The Evolution of AD RMS to Azure Information Protection – Part 4 – Preparation and Server-Side Migration

The time has finally come to get our hands dirty.  Welcome to my fourth post on the evolution of Active Directory Rights Management Service (AD RMS) to Azure Information Protection (AIP).  So far in this series I’ve done an overview of the service, a comparison of the architectures, and covered the key planning decisions that need to take place when migrating from AD RMS to AIP.  In this post I’ll be performing the preparation and server-side migration steps for the move from AD RMS to AIP.

Microsoft has done a wonderful job documenting the steps for the migration from AD RMS to AIP within the migration guide.  I’ll be referencing back to the guide as needed throughout the post.  Take a look at my first post for a refresher on my lab setup.  Note that I’ll be migrating LAB2 and leaving LAB1 on AD RMS.  Here are some key things to remember about the lab:

  • There is a forest trust between the JOG.LOCAL and GEEKINTHEWEEDS.COM Active Directory Forests
  • AD RMS Trusted User Domains (TUDs) have been configured on both JOG.LOCAL and GEEKINTHEWEEDS.COM

I’ve created the following users, groups, and AD RMS templates (I’ve been on an 80s/90s movies fix, so enjoy the names).

  • GEEKINTHEWEEDS.COM CONFIGURATION
    • (User) Jason Voorhies
      • User Principal Name Attribute: jason.voorhies@geekintheweeds.com
      • Mail Attribute: jason.voorhies@geekintheweeds.com
      • Group Memberships: Domain Users, GIW Employees, Information Technology
    • (User) Theodore Logan
      • User Principal Name Attribute: theodore.logan@geekintheweeds.com
      • Mail Attribute: theodore.logan@geekintheweeds.com
      • Group Memberships: Domain Users, GIW Employees, Information Technology
    • (User) Ash Williams
      • User Principal Name Attribute: ash.williams@geekintheweeds.com
      • Mail Attribute: ash.williams@geekintheweeds.com
      • Group Memberships: Domain Users, GIW Employees, Accounting
    • (User) Michael Myers
      • User Principal Name Attribute: michael.myers@geekintheweeds.com
      • Mail Attribute: michael.myers@geekintheweeds.com
      • Group Memberships: Domain Users, GIW Employees, Accounting
    • (Group) Accounting
      • Mail Attribute: giwaccounting@geekintheweeds.com
      • Group Type: Universal Distribution
    • (Group) GIW Employees
      • Mail Attribute: giwemployees@geekintheweeds.com
      • Group Type: Universal Distribution
    • (Group) Information Technology
      • Mail Attribute: giwit@geekintheweeds.com
      • Group Type: Universal Distribution
    • (Group) GIW AIP Users
      • Group Type: Global Security
    • (AD RMS Template) GIW Accounting
      • Users: giwaccounting@geekintheweeds.com
      • Rights: Full Control
    • (AD RMS Template) GIW Employees
      • Users: giwemployees@geekintheweeds.com
      • Rights: View, View Rights
    • (AD RMS Template) GIW IT
      • Users: giwit@geekintheweeds.com
      • Rights: Full Control
  • JOG.LOCAL CONFIGURATION
    • (User) Luke Skywalker
      • User Principal Name Attribute: luke.skywalker@jog.local
      • Mail Attribute: luke.skywalker@jog.local
      • Group Memberships: Domain Users, jogemployees
    • (User) Han Solo
      • User Principal Name Attribute: han.solo@jog.local
      • Mail Attribute: han.solo@jog.local
      • Group Memberships: Domain Users, jogemployees
    • (Group) jogemployees
      • Mail Attribute: jogemployees@jog.local
      • Group Type: Universal Distribution

After my lab was built I performed the following tests:

  • Protected Microsoft Word document named GIW_LS_ADRMS with GEEKINTHEWEEDS.COM AD RMS Cluster and successfully opened with Luke Skywalker user from JOG.LOCAL client machine.
  • Protected Microsoft Word document named GIW_GIWALL_ADRMS with GEEKINTHEWEEDS.COM AD RMS Cluster and GIW Employees template and unsuccessfully opened with Luke Skywalker user from JOG.LOCAL client machine.
  • Protected Microsoft Word document named GIW_JV_ADRMS with GEEKINTHEWEEDS.COM AD RMS Cluster using Theodore Logan user and opened successfully with Jason Voorhies user from GEEKINTHEWEEDS.COM client machine.
  • Protected Microsoft Word document named JOG_MM_ADRMS with JOG.LOCAL AD RMS Cluster using Luke Skywalker user and opened successfully with Michael Myers user from GEEKINTHEWEEDS.COM client machine.
  • Protected Microsoft Word document named GIW_ACCT_ADRMS with GEEKINTHEWEEDS.COM AD RMS Cluster and GIW Accounting template and was unsuccessful in opening with Jason Voorhies user from GEEKINTHEWEEDS.COM client machine.

These tests verified both AD RMS clusters were working successfully and the TUD was functioning as expected.  The lab is up and running, so now it’s time to migrate to AIP!

Our first step is to download the AADRM PowerShell module from Microsoft.  I went the easy route and used the Install-Module cmdlet.
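
For anyone following along, a minimal sketch of that step looks like the following (AADRM is the module name as published to the PowerShell Gallery at the time of writing):

# Pull the AADRM module from the PowerShell Gallery and load it
Install-Module -Name AADRM
Import-Module AADRM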

AIP4PIC1.png

Back in March Microsoft announced that AIP would be enabled by default on any eligible tenants with O365 E3 or above that were added after February.  Microsoft’s migration guide specifically instructs you to ensure protection capabilities are disabled if you’re planning a migration from AD RMS to AIP.  This means we need to verify that AIP is disabled.  To do that, we’re going to use the newly downloaded AADRM module to verify the status.

AIP4PIC2.png

As expected, the service is enabled.  We’ll want to disable the service before beginning the migration process by running the Disable-Aadrm cmdlet.  After running the command, we see that the functional state is now reporting as disabled.

AIP4PIC3.png

While we have the configuration data up we’re going to grab the value (minus _wmcs/licensing) in the LicensingIntranetDistributionPointUrl property.  We’ll be using this later on in the migration process.
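
Putting the last few steps together, here’s a hedged sketch of the PowerShell session (the cmdlet names come straight from the AADRM module; the property extraction on the last line is my own shorthand):

# Authenticate to the Azure Information Protection service
Connect-AadrmService

# Verify the current functional state of the protection service
Get-Aadrm

# Disable the service prior to migration per the migration guide
Disable-Aadrm

# Grab the licensing URL for later use (minus the _wmcs/licensing suffix)
(Get-AadrmConfiguration).LicensingIntranetDistributionPointUrl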

In most enterprise scenarios you’d want to perform a staged migration process of your users from AD RMS to AIP.  Microsoft provides for this with the concept of onboarding controls.  Onboarding controls allow you to manage who has the ability to protect content using AIP even when the service has been enabled at the tenant level.  Your common use case would be creating an Azure AD-managed or Windows AD-synced group which is used as your control group.  Users who are members of the group and are licensed appropriately would be able to protect content using AIP.  Other users within the tenant could consume the content but not protect it.

In my lab I’ll be using the GIW AIP Users group that is being synchronized to Azure AD from my Windows AD as the control group.  To use the group I’ll need to get its ObjectID which is the object’s unique identifier in Azure AD.  For that I used the Get-AzureADGroup cmdlet within Azure AD PowerShell module.

AIP4PIC6
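
A sketch of wiring up the onboarding control follows; the GUID below is a placeholder for the ObjectID returned by Get-AzureADGroup:

# Retrieve the ObjectID of the control group synchronized from Windows AD
Get-AzureADGroup -SearchString "GIW AIP Users"

# Restrict the ability to protect content to members of the control group
Set-AadrmOnboardingControlPolicy -UseRmsUserLicense $false -SecurityGroupObjectId "00000000-0000-0000-0000-000000000000" -Scope All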

Microsoft’s migration guide next suggests some configuration modifications to Windows computers within the forest.  I’m going to hold off on this for now and instead begin the server-side migration steps.

First up we’re going to export the trusted publisher domains (TPDs) from the AD RMS cluster.  We do this to ensure that users that have migrated over to AIP are still able to open content that was previously protected by the AD RMS cluster.  The TPD includes the Server Licensor Certificate (SLC) keys so when exporting them we protect them with a password and create an XML file that includes the SLC keys and any custom AD RMS rights policy templates.

AIP4PIC7.png
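
The export can be performed through the AD RMS console or, if memory serves, through the AdRmsAdmin PowerShell module; a rough sketch of the PowerShell route is below (the cluster URL, TPD name, and file path are hypothetical, and Export-RmsTPD prompts for the password that protects the file):

# Mount the AD RMS cluster as a PowerShell drive
Import-Module AdRmsAdmin
New-PSDrive -Name RmsCluster -PSProvider AdRmsAdmin -Root https://rms.geekintheweeds.com

# Export the trusted publishing domain to a password-protected XML file
Export-RmsTPD -Path "RmsCluster:\TrustedPublishingDomain\GIW TPD" -SavedFile C:\Temp\giw_tpd.xml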

Next we import the exported TPD to AIP using the relevant process based upon how we chose to protect the cluster keys.  For this lab I used a software key (stored in the AD RMS database) so I’ll be using the software key to software key migration path.  Thankfully this path is quite simple and consists of running another cmdlet from the AADRM PowerShell module.  In the first command we store the password used to protect the TPD file as a secure string and then use the Import-AadrmTpd cmdlet to pull it into AIP.  Notice the resulting data provides the cluster friendly name, indicates the Cryptographic Mode was set to 1, the key was Microsoft-managed (aka a software key), and there were three rights policy templates attached to the TPD.
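
A minimal sketch of those two commands, assuming the export file from the previous step:

# Store the password protecting the TPD file as a secure string
$tpdPassword = Read-Host -AsSecureString -Prompt "Enter the TPD password"

# Import the TPD into Azure Information Protection
Import-AadrmTpd -TpdFile C:\Temp\giw_tpd.xml -ProtectionPassword $tpdPassword -Verbose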

Keep in mind that if you have multiple TPDs for some reason (let’s say you migrated from Cryptographic Mode 1 to Cryptographic Mode 2) you’ll need to upload each TPD separately and set the active TPD using the Set-AadrmKeyProperties cmdlet.

AIP4PIC8.png

Running Get-AadrmTemplate shows the default templates Microsoft provides you with as well as the three templates I had configured in AD RMS.

AIP4PIC9

The last step of the server side of the process is to activate AIP.  For that we use the Enable-Aadrm cmdlet from the AADRM PowerShell module.

AIP4PIC10
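
Activation is a one-liner, and re-running the status check confirms the functional state flips back to enabled:

# Re-enable the protection service now that the TPD has been imported
Enable-Aadrm

# Confirm the functional state now reports Enabled
Get-Aadrm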

At this point the server-side configuration steps have been completed and AIP is ready to go.  However, we still have some very important client-side configuration steps to perform.  We’ll cover those steps in my next post.

Have a great week!

AWS and Microsoft’s Cloud App Security

It seems like it’s become a weekly occurrence to have sensitive data exposed due to poorly managed cloud services.  Due to Amazon’s large market share with Amazon Web Services (AWS), many of these instances involve publicly-accessible Simple Storage Service (S3) buckets.  In the last six months alone there were highly publicized incidents with FedEx and Verizon.  While the cloud can be empowering, it can also be very dangerous when there is a lack of governance, visibility, and acceptance of the different security mindset cloud requires.

Organizations that have been in operation for many years have grown to be very reliant on the network boundary acting as the primary security boundary.  As these organizations begin to move to a software defined data center model this traditional boundary quickly becomes less and less effective.  Unfortunately for these organizations this, in combination with a lack of sufficient understanding of cloud, gives rise to mistakes like sensitive data being exposed.

One way in which an organization can protect itself is to leverage technologies such as cloud access security brokers (cloud security gateways if you’re a Forrester reader) to help monitor and control data as it travels between on-premises and the cloud.  If you’re unfamiliar with the concept of a CASB, I covered it in a previous entry and included a link to an article which provides a great overview.

Microsoft has its own CASB offering called Microsoft Cloud App Security (CAS).  It’s offered as part of Microsoft’s Enterprise Mobility and Security (EMS) E5/A5 subscription.  Over the past several months multiple connectors to third party software-as-a-service (SaaS) providers have been introduced, including one for AWS.  The capabilities with AWS are limited at this point to pulling administrative logs and user information but it shows promise.

As per usual, Microsoft provides an integration guide which is detailed on the button pushing, but not so much on the concepts and technical details of what is happening behind the scenes.  Since the Azure AD and AWS blog series has attracted so many views, I thought it would be fun and informative to do an entry for how Cloud App Security can be used with AWS.

I’m not in the habit of re-creating documentation so I’ll be referencing the Microsoft integration guide throughout the post.

The first thing that needs doing is the creation of a security principal in AWS Identity and Access Management (AWS IAM) that will be used by your tenant’s instance of CAS to connect to resources in your AWS account.   The first four steps are straightforward but step 5 could use a bit of an explanation.

awscas1.png

Here we’re creating a custom IAM policy for the security principal granting it a number of permissions within the AWS account.  IAM policies are sets of permissions which are attached to a human or non-human identity or AWS resource and are evaluated when a call to the resource is made.  In the traditional on-premises world, you can think of it as something somewhat similar to a set of NTFS file permissions.  When the policy pictured above is created the security principal is granted a set of permissions across all instances of CloudTrail, CloudWatch, and IAM within the account.

If you’re unfamiliar with AWS services, CloudTrail is a service which audits the API calls made to AWS resources.  The information included in an event includes the action taken, the resource the action was taken upon, the security principal that performed the action, the date and time, and the source IP address of the security principal.  The CloudWatch service allows for monitoring of metrics and optionally triggering events based upon metrics reaching specific thresholds.  The IAM service is AWS’s identity store for the cloud management layer.

Now that we have a basic understanding of the services, let’s look at the permissions Microsoft is requiring for CAS to do its thing.  The CloudTrail permissions of DescribeTrails, LookupEvents, and GetTrailStatus allow CAS to query for all trails enabled on an AWS account (CloudTrail is enabled by default on all AWS resources), look up events in a trail, and get information about the trail such as start and stop logging times.  The CloudWatch permissions of Describe* and Get* are fancy ways of asking for READ permissions on CloudWatch resources.  These permissions include describe-alarm-history, describe-alarms, describe-alarms-for-metric, get-dashboard, and get-metric-statistics.  The IAM permissions are similar to what’s being asked for in CloudWatch, basically asking for full read.
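
Based on the permissions described above, the custom policy likely looks something along these lines (a reconstruction for illustration, not a copy of Microsoft’s documented policy):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudtrail:DescribeTrails",
        "cloudtrail:LookupEvents",
        "cloudtrail:GetTrailStatus",
        "cloudwatch:Describe*",
        "cloudwatch:Get*",
        "iam:List*",
        "iam:Get*"
      ],
      "Resource": "*"
    }
  ]
}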

Step number 11 instructs us to create a new CloudTrail trail.  AWS by default audits all events across all resources and stores them for 90 days.  Trails enable you to direct events captured by CloudTrail to an S3 bucket for archiving, analysis, and responding to events.

awscas2.png

The trail created is consumed by CAS to read the information captured via CloudTrail.  The permissions requested above become a bit more clear now that we see CAS is requesting read access for all trails across an account for monitoring goodness.  I’m unclear as to why CAS is asking for read for CloudWatch alarms unless it has some integration in that it monitors and reports on alarms configured for an AWS account.  The IAM read permissions are required so it can pull user  information it can use for the User Groups capability.

After the security principal is created and a sample trail is setup, it’s time to configure the connector for CAS.  Steps 12 – 15 walk through the process.  When it is complete AWS now shows as a connected app.

awscas3.png

After a few hours data will start to trickle in from AWS.  Navigating to the Users and Accounts section shows all of the accounts found in the IAM instance for my AWS account.  Envision this as your one stop shop for identifying all of the user accounts across your many cloud services.  A single pane of glass to identity across SaaS.

awscas4.png

On the Activity Log I see all of the API activities captured by CloudTrail.  If I wanted to capture more audit information, I can enable CloudTrail for the relevant resource and point it to the trail I configured for CAS.  I haven’t tested what CAS does with multiple trails, but based upon the permissions we configured when we setup the security principal, it should technically be able to pull from any trail we create.

awscas7.png

Since the CAS and AWS integration is limited to pulling logging information, let’s walk through an example of how we could use the data.  Take an example where an organization has a policy that the AWS root user should not be used for administrative activities due to the level of access the account gets by default.  The organization creates AWS IAM user accounts for each of its administrators who administer the cloud management layer.  In this scenario we can create a new policy in CAS to detect and alert on instances where the AWS root user is used.

First we navigate to the Policies page under the Control section of CAS.

awscas6.png

On the Policies page we’re going to create a new policy with the settings in the image below.  We’ll designate this as a high severity privileged account alert.  We’re interested in knowing anytime the account is used so we choose the Single Activity option.

awscas7

We’ll pretend we were smart users of CAS and let it collect data for a few weeks to get a sampling of the types of events which are captured and to give us some data to analyze.  We also went the extra mile and leveraged the ability of CAS to pull in user information from AWS IAM such that we can choose the appropriate users from the drop-down menus.

Since this is a demo and my AWS lab has a lot of activity by the root account we’re going to limit our alerts to the creation of new AWS IAM users.  To do that we set our filter to look for an Activity type equal to Create user.  Our main goal is to capture usage of the root account so we add another filter rule that searches for a User with the name equal to aws root user where it is the actor in an event.

awscas8.png

Finally we configure the alert to send an email to the administrator when the event occurs.  The governance capabilities don’t come into play in this use case.

awscas9

Next we jump back to AWS and create a new AWS IAM user named testuser1.  A few minutes after the user is created we see the event appearing in CloudTrail.

awscas10

After a few minutes, CAS generates an alert and I receive the email seen in the image below.   I’m given information as to the activity, the app, the date and time it was performed, and the client’s IP address.

awscas11.png

If I bounce back to CAS I see one new Alert.  Navigating to the alert I’m able to dismiss it, adjust the policy that generated it, or resolve it and add some notes to the resolution.

awscas12.png

I also have the option to dig deeper to see some of the patterns of the user’s behavior or the pattern of the behaviors from a specific IP address as seen below.

awscas13.png

awscas14.png

All this information is great, but what can we do with it?  In this example, it delivers visibility into the administrative activities occurring at the AWS cloud management layer by centralizing the data into a single repository to which I can also send other data such as O365 activity, Box, Salesforce, etc.  By centralizing the information I can begin doing some behavioral analytics to develop standard patterns of behavior for my user base.  Understanding standard behavior patterns is key to being ahead of the bad guys whether they be insiders or outsiders.  I can search for deviations from standard patterns to detect a threat before it becomes too widespread.  I can also be proactive about putting alerts and enforcement (available for other app connectors in CAS but not AWS at this time) in place to stop the behavior before the threat is realized.  If I supplemented this data with log information from my on-premises proxy via Cloud App Discovery, I get an even larger sampling improving the quality of the data as well as giving me insight into shadow IT.  Pulling those “shadow” cloud solutions into the light allows me to ensure the usage of the services complies with organizational policies and opens up the opportunity of reducing costs by eliminating redundant services.

Microsoft categorizes the capabilities that help realize these benefits as the Discover and Investigate capabilities of CAS. The solution also offers a growing number of enforcement mechanisms (Microsoft categorized these enforcement mechanisms as Control) which add a whole other layer of power behind the solution.  Due to the limited integration with AWS I can’t demo those capabilities with this post.  I’ll cover those in a future post.

I hope this post helped you better understand the value that CASB/CSGs like Microsoft’s Cloud App Security can bring to the table.  While the product is still very new and a bit sparse on support for 3rd party applications, the list is growing every day.  I predict the capabilities provided by technology such as Microsoft’s Cloud App Security will be as standard to IT as a firewall in the years to come.  If you’re already in Office 365 you should be integrating these capabilities into your arsenal to understand the value they can bring to your business.

Thanks and have a wonderful week!

Deep Dive into Azure AD Domain Services – Part 2

Welcome back to part 2 of my series on Microsoft’s managed services offering of Azure Active Directory Domain Services (AAD DS).  In my first post I covered some of the basic configuration settings of a default service instance.  In this post I’m going to dig a bit deeper and look at network flows, what type of secure tunnels are available for LDAPS, and examine the authentication protocols and supporting cipher suites configured for the service.

To perform these tests I leveraged a few different tools.  For a port scanner I used Zenmap.  To examine the protocols and cipher suites supported by the LDAPS service I used a custom openssl binary running on an Ubuntu VM in Azure.  For examination of the authentication protocol support I used Samba’s smbclient running on the Ubuntu VM in combination with WinSCP for file transfer, tcpdump for packet capture, and WireShark for packet analysis.

Let’s start off with examining the open ports since it takes the least amount of effort.  To do that I start up Zenmap and set the target to one of the domain controllers (DCs) IP addresses, choose the intense profile (why not?), and hit scan.  Once the scan is complete the results are displayed.
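
If you prefer the command line over the GUI, Zenmap’s intense profile maps to a plain nmap invocation (the IP below is a placeholder for the managed DC):

nmap -T4 -A -v 10.0.0.4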

2aad1

Navigating to the Ports / Hosts tab displays the open ports. All but one of them are straight out of the standard required ports you’d see open on a Windows Server functioning as an Active Directory DC.  An opened port 443 deserves more investigation.

2aad2.png

Let’s start with the obvious and attempt to hit the IP over an HTTPS connection but no luck there.

2aad3.png

Let’s break out Fiddler and hit it again.  If we look at the first session where we build the secure tunnel to the website we see some of the details for the certificate being used to secure the session.  Opening the TextView tab of the response shows a Subject of CN=DCaaS Fleet Dc Identity Cert – 0593c62a-e713-4e56-a1be-0ef78f1a2793.  Domain Controller-as-a-Service, I like it Microsoft.  Additionally Fiddler identifies the web platform as the Microsoft HTTP Server API (HTTP.SYS).  Microsoft has been doing a lot more with that API since it’s much more lightweight than IIS.  I wanted to take a closer look at the certificate so I opened the website in Dev mode in Chrome and exported it.  The EKUs are normal for a standard use certificate and it’s self-signed and untrusted on my system.  The fact that the certificate is untrusted and Microsoft isn’t rolling it out to domain-joined members tells me whatever service is running on the port isn’t for my consumption.

So what’s running on that port?  I have no idea.  The use of the HTTP Server API and a self-signed certificate with a subject specific to the managed domain service tells me it’s providing access to some type of internal management service Microsoft is using to orchestrate the managed domain controllers.  If anyone has more info on this, I’d love to hear it.

2aad4.png

Let’s now take a look at how Microsoft did at securing LDAPS connectivity to the managed domain.  LDAPS is not enabled by default in the managed domain and needs to be configured through the Azure AD Domain Services blade per these instructions.  Oddly enough Microsoft provides an option to expose LDAPS over the Internet.  Why any sane human being would ever do this, I don’t know but we’ll cover that in a later post.

I wanted to test SSLv3 and up and I didn’t want to spend time manipulating registry entries on a Windows client so I decided to spin up an Ubuntu Server 17.10 VM in Azure.  While the Ubuntu VM was spinning up, I created a certificate to be used for LDAPS using the PowerShell command referenced in the Microsoft article and enabled LDAPS through the Azure AD Domain Services resource in the Azure Portal.  I did not enable LDAPS for the Internet for these initial tests.
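
For reference, the certificate creation command from the Microsoft article looks roughly like the following (the domain name is a placeholder and the exact parameters are from memory, so check the article before running it):

# Create a self-signed wildcard certificate for LDAPS on the managed domain
$lifetime = Get-Date
New-SelfSignedCertificate -Subject "*.geekintheweeds.com" -NotAfter $lifetime.AddDays(365) -KeyUsage DigitalSignature, KeyEncipherment -Type SSLServerAuthentication -DnsName "*.geekintheweeds.com"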

After adding the certificate used by LDAPS to the trusted certificate store on the Windows Server, I opened LDP.EXE and tried establishing an LDAPS connection over port 636, which succeeded.

2aad5.png

Once I verified the managed domain was now supporting LDAPS connections I switched over to the Ubuntu box via an SSH session.  Ubuntu removed SSLv3 support in the OpenSSL binary that comes pre-packaged with Ubuntu so to test it I needed to build another OpenSSL binary.  Thankfully some kind soul out there on the Interwebz documented how to do exactly that without overwriting the existing version.  Before I could build a new binary I had to re-install the Make package and add the GNU Compiler Collection (GCC) package using the two commands below.

  • sudo apt-get install --reinstall make
  • sudo apt-get install gcc

After the two packages were installed I built the new binary using the instructions in the link, tested the command, and validated the binary now includes SSLv3.

2aad6.png

After POODLE hit the news back in 2014, Microsoft along with the rest of the tech industry advised that SSLv3 be disabled.  Thankfully this basic, well-known vulnerability has been covered and SSLv3 is disabled.

2aad7.png

SSLv3 is disabled, but what about TLS 1.0, 1.1, and 1.2?  How about the cipher suites?  Are they aligned with NIST guidance?  To test that I used a tool named TestSSLServer by Thomas Pornin.  It’s a simple command line tool which makes cycling through the available cipher suites quick and easy.

2aad8.png

The options I chose perform the following actions:

  • -all -> Perform an “exhaustive” search across cipher suites
  • -t 1 -> Space out the connections by one second
  • -min tlsv1 -> Start with TLSv1
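
Put together, the invocation looks like this (the IP and port are placeholders for the managed DC’s LDAPS endpoint):

TestSSLServer.exe -all -t 1 -min tlsv1 10.0.0.4 636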

The command produces the output below.

TLSv1.0:
server selection: enforce server preferences
3f- (key: RSA) ECDHE_RSA_WITH_AES_256_CBC_SHA
3f- (key: RSA) ECDHE_RSA_WITH_AES_128_CBC_SHA
3f- (key: RSA) DHE_RSA_WITH_AES_256_CBC_SHA
3f- (key: RSA) DHE_RSA_WITH_AES_128_CBC_SHA
3-- (key: RSA) RSA_WITH_AES_256_CBC_SHA
3-- (key: RSA) RSA_WITH_AES_128_CBC_SHA
3-- (key: RSA) RSA_WITH_3DES_EDE_CBC_SHA
3-- (key: RSA) RSA_WITH_RC4_128_SHA
3-- (key: RSA) RSA_WITH_RC4_128_MD5
TLSv1.1: idem
TLSv1.2:
server selection: enforce server preferences
3f- (key: RSA) ECDHE_RSA_WITH_AES_256_CBC_SHA384
3f- (key: RSA) ECDHE_RSA_WITH_AES_128_CBC_SHA256
3f- (key: RSA) ECDHE_RSA_WITH_AES_256_CBC_SHA
3f- (key: RSA) ECDHE_RSA_WITH_AES_128_CBC_SHA
3f- (key: RSA) DHE_RSA_WITH_AES_256_GCM_SHA384
3f- (key: RSA) DHE_RSA_WITH_AES_128_GCM_SHA256
3f- (key: RSA) DHE_RSA_WITH_AES_256_CBC_SHA
3f- (key: RSA) DHE_RSA_WITH_AES_128_CBC_SHA
3-- (key: RSA) RSA_WITH_AES_256_GCM_SHA384
3-- (key: RSA) RSA_WITH_AES_128_GCM_SHA256
3-- (key: RSA) RSA_WITH_AES_256_CBC_SHA256
3-- (key: RSA) RSA_WITH_AES_128_CBC_SHA256
3-- (key: RSA) RSA_WITH_AES_256_CBC_SHA
3-- (key: RSA) RSA_WITH_AES_128_CBC_SHA
3-- (key: RSA) RSA_WITH_3DES_EDE_CBC_SHA
3-- (key: RSA) RSA_WITH_RC4_128_SHA
3-- (key: RSA) RSA_WITH_RC4_128_MD5

As can be seen from the RC4 entries in the output above, Microsoft is still supporting the RC4 cipher suites in the managed domain.  RC4 has been known to be a vulnerable algorithm for years now and it’s disappointing to see it still supported, especially since I haven’t seen any options available to disable it within the managed domain.  While 3DES still has a fair amount of usage, there have been documented vulnerabilities and NIST plans to disallow it for TLS in the near future.  While commercial customers may be more willing to deal with the continued use of these algorithms, government entities will not.

Let’s now jump over to Kerberos and check out what cipher suites are supported by the managed DC. For that we pull up ADUC and check the msDS-SupportedEncryptionTypes attribute of the DC’s computer object. The attribute is set to a value of 28, which is the default for Windows Server 2012 R2 DCs. In ADUC we can see that this value translates to support of the following algorithms:

  • RC4_HMAC_MD5
  • AES128_CTS_HMAC_SHA1_96
  • AES256_CTS_HMAC_SHA1_96
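
The value of 28 is just a bit field; here is a quick PowerShell illustration (my own sketch) of how the flags add up to those three algorithms:

# msDS-SupportedEncryptionTypes is a bit field; 28 = 4 + 8 + 16
$supported = 28
$encTypes = @{ 4 = 'RC4_HMAC_MD5'; 8 = 'AES128_CTS_HMAC_SHA1_96'; 16 = 'AES256_CTS_HMAC_SHA1_96' }
$encTypes.Keys | Where-Object { $supported -band $_ } | ForEach-Object { $encTypes[$_] }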

Again we see more support for RC4 which should be a big no no in the year 2018. This is a risk that orgs using AAD DS will need to live with unless Microsoft adds some options to harden the managed DCs.

Last but not least I was curious if Microsoft had support for NTLMv1. By default Windows Server 2012 R2 supports NTLMv1 due to requirements for backwards compatibility. Microsoft has long recommended disabling NTLMv1 due to the documented issues with the security of the protocol. So has Microsoft followed their own advice in the AAD DS environment?

To check this I’m going to use Samba’s smbclient package on the Ubuntu VM.  I’ll use smbclient to connect to the DC’s share from the Ubuntu box using the NTLM protocol.  Samba enforces the use of NTLMv2 in smbclient by default so I needed to make a modification to the global section of the smb.conf file by adding client ntlmv2 auth = no.  This option disables NTLMv2 in smbclient and will force it to use NTLMv1.
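
The relevant change to /etc/samba/smb.conf is a single line in the global section:

[global]
    # Force smbclient to fall back to NTLMv1 for this test
    client ntlmv2 auth = no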

2aad9.png

After saving the changes to smb.conf I exit back to the terminal and try opening a connection with smbclient. The options I used do the following:

  • -L -> List the shares on my DC’s IP address
  • -U -> My domain user name
  • -m -> Use the SMB2 protocol

2aad10.png

While I ran the command I also did a packet capture using tcpdump which I moved over to my Windows box using WinSCP.  I then opened the capture with WireShark and navigated to the packet containing the Session Setup Request.  In the parsed capture we don’t see an NTLMv2 Response which means NTLMv1 was used to authenticate to the domain controller indicating NTLMv1 is supported by the managed domain controllers.

2aad11.png

Based upon what I’ve observed from poking around and running these tests I’m fairly confident Microsoft is using a very out-of-the-box configuration for the managed Windows Active Directory domain.  There doesn’t seem to be much of an attempt to harden the domain against some of the older and well known risks.  I don’t anticipate this offering being very appealing to organizations with strong security requirements.  Microsoft should look to offer some hardening options that would be configurable through the Azure Portal.  Those hardening options are going to need to include some type of access to the logs like I mentioned in my last post.  Anyone who has tried to rid their network of insecure cipher suites or older authentication protocols knows the importance of access to the domain controller logs to the success of that type of effort.

My next post will be the final post in this series.  I’ll cover the option Microsoft provides to expose LDAPS to the Internet (WHY OH WHY WOULD YOU DO THAT?), summarize my findings, and mention a few other interesting things I came across during the study for this series.

Thanks!

Integrating Azure AD and G-Suite – Automated Provisioning

Today I’ll wrap up my series on Azure Active Directory’s (Azure AD) integration with Google’s G-Suite.  In my first entry I covered the single-sign on (SSO) integration and in my second and third posts I gave an overview of Google’s Cloud Platform (GCP) and demonstrated how to access a G-Suite domain’s resources through Google’s APIs.  In this post I’m going to cover how Microsoft provides automated provisioning of users, groups, and contacts.  If you haven’t read through my posts on Google’s API (part 1, part 2) take a read through so you’re more familiar with the concepts I’ll be covering throughout this post.

SSO using SAML or Open ID Connect is a common capability of almost every cloud solution these days.  While that solves the authentication problem, the provisioning of users, groups, and other identity-related objects remains a challenge largely due to the lack of widely accepted standards (SCIM has a ways to go folks).  Vendors have a variety of workarounds including making LDAP calls back to a traditional on-premises directory (YUCK), supporting uploads of CSV files, or creating and updating identities in their local databases based upon the information contained in a SAML assertion or Open ID Connect id token.  A growing number of vendors are exposing these capabilities via a web-based API.  Google falls into this category and provides a robust selection of APIs to interact with its services from Gmail to resources within Google Cloud Platform, and yes even Google G-Suite.

If you’re a frequent user of Azure AD, you’ll have run into the automatic provisioning capabilities it brings to the table across a wide range of cloud services.  In a previous series I covered its provisioning capabilities with Amazon Web Services.  This is another use case where Microsoft leverages a third party’s robust API to simplify the identity management lifecycle.

In the SSO Quickstart Guide Microsoft provides for G-Suite, it erroneously states:

“Google Apps supports auto provisioning, which is by default enabled. There is no action for you in this section. If a user doesn’t already exist in Google Apps Software, a new one is created when you attempt to access Google Apps Software.”

This simply isn’t true.  While auto provisioning via the API can be done, it is a feature you need to code to and isn’t enabled by default.  When you enable SSO to G-Suite and attempt to access it using an assertion containing the claim for a user that does not exist within a G-Suite domain you receive the error below.

google4int1

This establishes what we already knew in that identities representing our users attempting SSO to G-Suite need to be created before the users can authenticate.  Microsoft provides a Quickstart for auto provisioning into G-Suite.  The document does a good job telling you where to click and giving some basic advice but really lacks in detail as to what’s happening in the background and how it works.

Let’s take a deeper look shall we?

If you haven’t already, add the Google Apps application from the Azure AD Application Gallery.  Once the application is added navigate to the blade for the application and select the Provisioning page.  Switch the provisioning mode from manual to automatic.

google4int2.png

Right off the bat we see a big blue Authorize button which tells us that Microsoft is not using the service accounts pattern for accessing the Google API.  Google’s recommendation is to use the service account pattern when accessing project-based data rather than user specific data.  The argument can be made that G-Suite data doesn’t fall under project-based data and the service account credential doesn’t make sense.  Additionally using a service account would require granting the account domain-wide delegation for the G-Suite domain allowing the account to impersonate any user in the G-Suite domain.  Not really ideal, especially from an auditing perspective.

By using the Server-side Web Apps pattern a new user in G-Suite can be created and assigned as the “Azure AD account”.  The downside of this approach is you’re stuck paying Google $10.00 a month for a non-human account.  The price of good security practices I guess.

google4int3.png

Microsoft documentation states that the account must be granted the Super Admin role.  I found this surprising since you’re effectively giving the account god rights to your G-Suite domain.  It got me wondering what authorization scopes Microsoft is asking for.  Let’s break out Fiddler and walk through the process that kicks off after clicking on the Authorization button.

A new window pops up from Google requesting me to authenticate.  Here Azure AD, acting as the OAuth client, has made an authorization request and has sent me along with the request over to Google, which is acting as the authorization server, to authenticate, consent to the access, and take the next step in the authorization flow.

google4int4

When I switch over to Fiddler I see a number of sessions have been captured.  Opening the WebForms window of the first session to accounts.google.com shows a number of parameters that were passed to Google.

google4int5

The first parameter gives us the three authorization scopes Azure AD is looking for.  The admin.directory.group and admin.directory.user scopes are both related to the Google Directory API, which makes sense if it wants to manage users and groups.  The /m8/feeds scope grants it access to manage contacts via the Google Contacts API.  This is an older API that uses XML instead of JSON to exchange information and looks like it has been/is being replaced by the Google People API.

Management of contacts via this API is where the requirement for an account in the Super Admin role originates.  Google documentation states that management of domain shared contacts via the /m8/feeds API requires an administrator username and password for Google Apps.  I couldn’t find any privilege in G-Suite which could be added to a custom Admin role that mentioned contacts.  Given Google’s own documentation along with the lack of an obvious privilege option, this may be a hard limitation of G-Suite.  Too bad too because there are options for both Users and Groups.  Either way, the request for this authorization scope drives the requirement for Super Admin for the account Azure AD will be using for delegated access.

The redirect_uri is where Google sends the user after the authorization request is complete.  The response_type tells us Azure AD and Google are using the OAuth authorization code grant type flow.  The client_id is the unique identifier Google has assigned to Azure AD in whatever project Microsoft has it built in.  The approval_prompt setting of force tells Google to display the consent window and the data Azure AD wants to access.  Lastly, the access_type setting of offline allows Azure AD to access the APIs without the user being available to authenticate, via a refresh token which will be issued along with the access token.  Let’s pay attention to that one once the consent screen pops up.
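
Assembled from those parameters, the authorization request looks roughly like this (an illustrative reconstruction; the real client_id and redirect_uri values belong to Microsoft, and the scopes are space-delimited and URL-encoded on the wire):

GET https://accounts.google.com/o/oauth2/auth
    ?client_id=<Microsoft's client ID>
    &redirect_uri=<Azure AD reply URL>
    &response_type=code
    &scope=https://www.googleapis.com/auth/admin.directory.user
           https://www.googleapis.com/auth/admin.directory.group
           https://www.googleapis.com/auth/m8/feeds
    &approval_prompt=force
    &access_type=offline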

I plug in valid super user credentials to my G-Suite domain and authenticate and receive the warning below.  This indicates that Microsoft has been naughty and hasn’t had their application reviewed by Google.  This was made a requirement back in July of 2017… so yeah… Microsoft maybe get on that?

google4int6.png

To progress to the consent screen I hit the Advanced link in the lower left and opt to continue.  The consent window then pops up.

google4int7.png

Here I see that Microsoft has registered their application with a friendly name of azure.com.  I’m also shown the scopes that the application wants to access which jive with the authorization scopes we saw in Fiddler.  Remember that offline access Microsoft asked for?  See it mentioned anywhere in this consent page that I’m delegating this access to Microsoft perpetually as long as they ask for a refresh token?  This is one of my problems with OAuth and consent windows like this.  It’s entirely too vague as to how long I’m granting the application access to my data or to do things as me.  Expect to see these OAuth consent attacks continue to grow in use moving forward.  Why worry about compromising the user’s credentials when I can display a vague consent window and have them grant me access directly to their data?  Totally safe.

Hopping back to the window, I click the Allow button and the consent window closes.  Looking back at Fiddler I see that I received back an authorization code and posted it back to the redirect_uri designated in the original authorization request.

google4int8.png

Switching back to the browser window for the Azure Portal the screen updates and the Test Connection button becomes available.  Clicking the button initiates a quick check where Azure AD obtains an access token for the scopes it requires, unseen by the user.  After the successful test I hit the Save button.

google4int9.png

Switching to the browser window for the Google Admin Portal let’s take a look at the data that’s been updated for the user I used to authorize Microsoft’s access.  For that I select the user, go to the Security section and I now see that the Azure Active Directory service is authorized to the contacts, user, and group management scopes.

google4int10.png

Switching back to the browser window for the Azure Portal I see some additional options are now available.

google4int11.png

The mappings are really interesting and will look familiar to you if you’ve ever done anything with an identity management tool like Microsoft Identity Manager (MIM) or even Azure AD Sync.  The user mappings for example show which attributes in Azure AD are used to populate the attributes in G-Suite.

google4int12.png

The attributes that have the Delete button grayed out are required by Google in order to provision new user accounts in a G-Suite domain.  The options available for deletion are additional data beyond what is required that Microsoft can populate on user accounts it provisions into G-Suite.  Selecting the Show advanced options button allows you to play with the schema Microsoft is using for G-Suite.   What I found interesting about this schema is it doesn’t match the resource representation Google provides for the API.  It would have been nice to match the two to make it more consumable, but they’re probably working off values used in the old Google Provisioning API or they don’t envision many people being nerdy enough to poke around the schema.

Next up I toggle the provisioning status from Off to On and leave the Scope option set to sync only the assigned users and groups.

google4int13.png

I then hit the Save button to save the new settings and after a minute my initial synchronization is successful.  Now nothing was synchronized, but it shows the credentials correctly allowed Azure AD to hit my G-Suite domain over the appropriate APIs with the appropriate access.

google4int14.png

So an empty synchronization works, how about one with a user?  I created a new user named dutch.schaefer@geekintheweeds.com with only the required attributes of display name and user principal name populated, assigned the new user to the Google Apps application, and gave Azure AD a night to run another sync.  Earlier tonight I checked the provisioning summary and verified the sync grabbed the new user.

google4int15.png

Review of the audit logs for the Google Apps application shows that the new user was exported around 11PM EST last night.  If you’re curious, the sync between Azure AD and G-Suite occurs about every 20 minutes.

google4int16.png

Notice that the FamilyName and GivenName attributes are set to a period.  I never set the first or last name attributes of the user in Azure AD, so both attributes are blank.  If we bounce back to the attribute mapping and look at the attributes for Google Apps, we see that FamilyName and GivenName are both required meaning Azure AD had to populate them with something.  Different schemas, different requirements.

google4int17

Switching over to the Google Admin Console I see that the new user was successfully provisioned into G-Suite.

google4int18.png

Pretty neat overall.  Let’s take a look at what we learned:

  • Azure AD supports single sign-on to G-Suite via SAML using a service provider-initiated flow where Azure AD acts as the identity provider and G-Suite acts as the service provider.
  • A user object with a login id matching the user’s login id in Azure Active Directory must be created in G-Suite before single sign-on will work.
  • Google provides a number of libraries for its API and the Google API Explorer should be used for experimentation with Google’s APIs.
  • Google’s Directory API is used by Azure AD to provision users and groups into a G-Suite domain.
  • Google’s Contacts API is used by Azure AD to provision contacts into a G-Suite domain.
  • A user holding the Super Admin role in the G-Suite domain must be used to authorize Azure AD to perform provisioning activities.  The Super Admin role is required due to the usage of the Google Contact API.
  • Azure AD’s authorization request includes offline access using refresh tokens to request additional access tokens to ensure the sync process can be run on a regular basis without requiring re-authorization.
  • Best practice is to dedicate a user account in your G-Suite domain to Azure AD.
  • Azure AD uses the Server-side Web Apps pattern for accessing Google’s APIs.
  • The provisioning process will populate a period for any attribute that is required in G-Suite but does not have a value in the corresponding attribute in Azure AD.
  • The provisioning process runs a sync every 20 minutes.

Even though my coding is horrendous, I absolutely loved experimenting with the Google API.  It’s easy to realize why APIs are becoming so critical to a good solution.  With the increased usage of a wide variety of products in a business, being able to plug and play applications is a must.  The provisioning aspect Azure AD demonstrates here is a great example of the opportunities provided when critical functionality is exposed for programmatic access.

I hope you enjoyed the series, learned a bit more about both solutions, and got some insight into what’s going on behind the scenes.

Integrating Azure AD and G-Suite – Google API Integration Part 1

Hi everyone,

Welcome to the second post in my series on the integration between Azure Active Directory (Azure AD) and Google’s G-Suite (formerly named Google Apps).  In my first entry I covered the single sign-on (SSO) integration between the two solutions.  This included a brief walkthrough of the configuration and an explanation of how the SAML protocol is used by both solutions to accomplish the SSO user experience.  I encourage you to read through that post before you jump into this one.

So we have single sign-on between Azure AD and G-Suite, but do we still need to provision the users and groups into G-Suite?  Thanks to Google’s Directory Application Programming Interface (API) and Azure Active Directory’s (Azure AD) integration with it, we can get automatic provisioning into G-Suite.  Before I cover how that integration works, let’s take a deeper look at Google’s Cloud Platform (GCP) and its API.

Like many of the modern APIs out there today, Google’s API is web-based and robust.  It was built on Google’s JavaScript Object Notation (JSON)-based API infrastructure and uses Open Authorization 2.0 (OAuth 2.0) to allow for delegated access to an entity’s resources stored in Google.  It’s nice to see vendors like Microsoft and Google leveraging standard protocols for interaction with their APIs unlike some vendors… *cough* Amazon *cough*.  Google provides software development kits (SDKs) and shared libraries for a variety of languages.

Let’s take a look at the API Explorer.  The API Explorer is a great way to play around with the API without the need to write any code and to get an idea of the inputs and outputs of specific API calls.  I’m first going to do something very basic and retrieve a listing of users in my G-Suite directory.  Once I access the API explorer I hit the All Versions menu item and select the Admin Directory API.

google1int1

On the next screen I navigate down to the directory.users.list method and select it.  On the screen that follows I’m provided with a variety of input fields.  The data I input into these fields will affect what data is returned to me from the API. I put the domain name associated with my G-Suite subscription and hit the Authorize and Execute button.  A new window pops up which allows me to configure which scope of access I want to grant the API Explorer.  I’m going to give it just the scope of https://www.googleapis.com/auth/admin.directory.user.readonly.

google1int2

I then hit the Authorize and Execute button and I’m prompted to authenticate to Google and delegate API Explorer to access data I have permission to access in my G-Suite directory.   Here I plug in the username and password for a standard user who isn’t assigned to any G-Suite Admin Roles.

google1int3

After successfully authenticating, I’m then prompted for consent to delegate API Explorer to view the users configured in the user’s G-Suite directory.

google1int4

I hit the allow button, the request for delegated access is complete, and a listing of users within my G-Suite directory is returned in JSON format.

google1int5

Easy right?  How about we step it up a notch and create a new user.  For that operation I’ll be delegating access to API Explorer using an account which has been granted the G-Suite User Management Admin role.  I navigate back to the main list of methods and choose the directory.users.insert method.  I then plug in the required values and hit the Authorize and Execute button.  The scopes menu pops up and I choose the https://www.googleapis.com/auth/admin.directory.user scope to allow for provisioning of the user and then hit the Authorize and Execute button.  The request is made and a successful response is returned.
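
For context, the directory.users.insert method is a POST to the Directory API carrying a JSON body; the required fields look like the following (the values are a reconstruction of what I plugged in, and the email address is illustrative):

POST https://www.googleapis.com/admin/directory/v1/users

{
  "primaryEmail": "marge.simpson@geekintheweeds.com",
  "name": {
    "givenName": "Marge",
    "familyName": "Simpson"
  },
  "password": "<initial password>"
}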

google1int6

Navigating back to G-Suite and looking at the listing of users shows that the new user Marge Simpson has been created.

google1int7

Now that we’ve seen some simple samples using API Explorer let’s talk a bit about how you go about registering an application to interact with Google’s API as well as covering some basic Google Cloud Platform (GCP) concepts.

First thing I’m going to do is navigate to Google’s Getting Started page and create a new project.  So what is a project?  This took a bit of reading on my part because my prior experience with GCP was non-existent.  Think of a Google Project like an Amazon Web Services (AWS) account or a Microsoft Azure Subscription.  It acts as a container for billing, reporting, and organization of GCP resources.  Projects can be associated with a Google Cloud Organization (similar to how multiple Azure subscriptions can be associated with a single Microsoft Azure Active Directory (Azure AD) tenant) which is a resource available for a G-Suite subscription or Google Cloud Identity resource.  The picture below shows the organization associated with my G-Suite subscription.

google1int8

Now that we have the concepts out of the way, let’s get back to the demo. Back at the Getting Started page, I click the Create a new project button and authenticate as the super admin for my G-Suite subscription. I’ll explain why I’m using a super admin later. On the next screen I name the project JOG-NET-CONSOLE and hit the next button.

google1int9

The next screen prompts me to provide a name which will be displayed to the user when the user is prompted for consent in the instance I decide to use an OAuth flow which requires user consent.

google1int10

Next up I’m prompted to specify what type of application I’m integrating with Google.  For this demonstration I’ll be creating a simple console app, so I’m going to choose Web Browser simply to move forward.  I plug in a random unique value and click Create.

google1int11

After creation is successful, I’m prompted to download the client configuration and provided with my Client ID and Client Secret.  The configuration file is in JSON format and provides information about the client’s registration and the authorization server’s (Google’s) OAuth endpoints.  This file can be consumed directly by the Google API libraries when obtaining credentials if you’re going that route.

google1int12

For the demo application I’m building I’ll be using the service account scenario often used for server-to-server interactions.  This scenario leverages the OAuth 2.0 Client Credentials Authorization Grant flow.  No user consent is required for this scenario because the intention of the service account scenario is for it to access its own data.  Google also provides the capability for the service account to be delegated the right to impersonate users within a G-Suite subscription.  I’ll be using that capability for this demonstration.

Back to the demo…

Now that my application is registered, I now need to generate credentials I can use for the service account scenario. For that I navigate to the Google API Console. After successfully authenticating, I’ll be brought to the dashboard for the application project I created in the previous steps. On this page I’ll click the Credentials menu item.

google1int13

The credentials screen displays the client IDs associated with the JOG-NET-CONSOLE project.  Here we see the client ID I received in the JSON file as well as a default one Google generated when I created the project.

google1int14

Next up I click the Create Credentials button and select the Service Account key option.  On the Create service account key page I provide a unique name for the service account of Jog Directory Access.

The Role drop down box relates to the new roles that were introduced with Google’s Cloud IAM.

You can think of Google’s Cloud IAM as Google’s version of Amazon Web Services (AWS) IAM  or Microsoft’s Azure Active Directory in how the instance is related to the project which is used to manage the GCP resources.  When a new service account is created a new security principal representing the non-human identity is created in the Google Cloud IAM instance backing the project.

Since my application won’t be interacting with GCP resources, I’ll choose the random role of Logs Viewer.  When I filled in the service account name the service account ID field was automatically populated for me with a value.  The service account ID is unique to the project and represents the security principal for the application.   I choose the option to download the private key as a PKCS12 file because I’ll be using the System.Security.Cryptography.X509Certificates namespace within my application later on.  Finally I click the create button and download the PKCS12 file.

google1int15

The new service account now shows in the credential page.

google1int16

Navigating to the IAM & Admin dashboard now shows the application as a security principal within the project.

google1int17

I now need to enable the APIs in my project that I want my applications to access.  For this I navigate to the API & Services dashboard and click the Enable APIs and Services link.

google1int23.png

On the next page I use “admin” as my search term, select the Admin SDK and click the Enable button.  The API is now enabled for applications within the project.

From here I navigate down to the Service accounts page and edit the newly created service account.

google1int19

At this point I’ve created a new project in GCP, created a service account that will represent the demo application, and have given that application the right to impersonate users in my G-Suite directory.  I now need to authorize the application to access the G-Suite’s data via Google’s API.  For that I switch over to the G-Suite Admin Console and authenticate as a super admin and access the Security dashboard.  From there I hit the Advanced Setting option and click the Manage API client access link.

google1int20

On the Manage API client access page I add a new entry using the client ID I pulled previously and granting the application access to the  https://www.googleapis.com/auth/admin.directory.user.readonly scope.  This allows the application to impersonate a user to pull a listing of users from the G-Suite directory.

google1int21

Whew, a lot of new concepts to digest in this entry so I’ll save the review of the application for the next entry.  Here’s a consumable diagram I put together showing the relationship between GCP Projects, G-Suite, and a GCP Organization.  The G-Suite domain acts as a link to the GCP projects.  The G-Suite users can set up GCP projects and have a stub identity (see my first entry) provisioned in the project.  When a service account is created in a project and granted G-Suite Domain-wide Delegation, we use the Client ID associated with the service account to establish an identity for the app in the G-Suite domain which is associated with a scope of authorized access.

google1int22

In this post I covered some basic GCP concepts and saw that the concepts are very similar to both Microsoft’s and AWS’s.  I also covered the process to create a service account in GCP and how all the pieces come together to provide programmatic access to G-Suite resources.  In my next entry I’ll demo some simple .NET applications and walk through the code.

Have a great weekend and go Pats!

Integrating Azure AD and G-Suite – Single Sign-On

Integrating Azure AD and G-Suite – Single Sign-On

Hi everyone,

After working through the Azure Active Directory (AD) and Amazon Web Services (AWS) integration I thought it’d be fun to do the same thing with Google Apps.  Google provides a generic tutorial for single sign-on that is severely lacking in details.  Microsoft again provides a reasonable tutorial for integrating Azure AD and Google Apps for single sign-on.  Neither gives much detail about what goes on behind the scenes or provides the geeky details us technology folk love.  Where there is a lack of detail there is a blogging opportunity for Journey Of The Geek.

In my previous post I covered the benefits of introducing Azure AD as an Identity-as-a-Service (IDaaS) component to Software-as-a-Service (SaaS) integrations.  Read the post for full details but the short of it is the integration gives you value-added features such as multifactor authentication with Azure Multifactor Authentication (MFA), adaptive authentication with Azure AD Identity Protection, contextual authorization with Azure AD Conditional Access, and cloud access security broker (CASB) functionality through Cloud App Security.  Supplementing Google Apps with these additional capabilities improves visibility, security, and user experience.  Wins across the board, right?

I’m going to break the integration into a series of posts with the first focusing on single sign-on (SSO).  I’ll follow up with a post exploring the provisioning capabilities Azure AD introduces as well as playing around with Google’s API.  In a future post I’ll demonstrate what Cloud App Security can bring to the picture.

Let’s move ahead with the post, shall we?

The first thing I did was add the Google Apps application to Azure AD through the Azure AD blade in the Azure Portal. Once the application was added successfully, I navigated to the Single sign-on section of the configuration, then to the SAML Signing Certificate section, and clicked the link to download the certificate. This is the certificate Azure AD will use to sign the SAML assertions it generates for the SAML trust. Save this file because we’ll need it for the next step.

I next signed up for a trial subscription of Google’s G Suite Business. This plan comes with an identity store, email, cloud storage, the Google productivity suite, and a variety of other tools and features. Sign-up is straightforward so I won’t be covering it. After logging into the Google Admin Console as my newly minted administrator, the main menu is displayed. From here I select the Security option.

[Screenshot: the Google Admin Console main menu]

Once the Security page loads, I select the Set up single sign-on (SSO) menu to expand the option.  Google will be playing the role of the service provider, so I’ll be configuring the second section.  Check the box to Setup SSO with third party identity provider.  Next you’ll need to identify the SAML2 endpoint specific to your tenant.  The Microsoft article still references the endpoint used with the old login experience that was recently replaced.  You’ll instead want to use the endpoint https://login.microsoftonline.com/<tenantID>/saml2.  You’ll populate that endpoint for both the Sign-In and Sign-Out URLs.  I opted for the domain-specific issuer option, which sets the identifier Google presents in the SAML authentication request to include the domain name associated with the Google Apps account.  You would typically use this if you had multiple subscriptions of Google Apps using the same identity provider.  The final step is to upload the certificate you downloaded from Azure AD.  At this point Google is configured to redirect users accessing Google Apps (except the Admin Console) to Azure AD to authenticate.

[Screenshot: Google’s third-party SSO configuration]
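
If you don’t have your tenant ID handy, one way to grab it is from the tenant’s OpenID Connect metadata document; the issuer value embeds the tenant GUID.  A quick sketch using my lab domain (substitute your own):

# Pull the tenant ID from the tenant's OpenID Connect metadata document.
$metadata = Invoke-RestMethod "https://login.microsoftonline.com/geekintheweeds.com/.well-known/openid-configuration"
# The issuer has the form https://sts.windows.net/<tenantID>/
$tenantId = ($metadata.issuer -split "/")[3]
"https://login.microsoftonline.com/$tenantId/saml2"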

Now that Google is configured, we need to finish the configuration on Azure AD’s end.  If you follow the Microsoft tutorial at this point, you’re going to run into some issues.  In the previous step I opted to use a domain-specific issuer, so I’ll need to set the identifier to google.com/a/geekintheweeds.com.  For the user identifier I’ll leave the default of the user’s user principal name since it will match the user’s identifier in Google.  I also remove the additional attributes Azure AD sends by default since Google will discard them anyway.  Once the settings are configured, hit the Save button.

[Screenshot: the Azure AD single sign-on configuration for Google Apps]

Now that both the IdP and SP have been configured, it’s time to create a user in Google Apps to represent my user coming from Azure AD.  I refer to this as a “stub user” because it is a record that represents a user who lives authoritatively in Azure Active Directory.  For that I switch back to the Google Admin Console, click the Users button, and click the button to create a new user.

[Screenshot: creating a new user in the Google Admin Console]

Earlier I created a new user in Azure AD named Michael Walsh that has a login ID of michael.walsh@geekintheweeds.com. Since I’ll be passing the user’s user principal name (UPN) from Azure AD, I’ll need to set the user’s Google login name to match the user’s UPN.

[Screenshot: setting the Google login name to match the Azure AD UPN]

I then hit the Create button and my new user is created.  You’ll note that Google assigns the user a temporary password.  Like many SaaS solutions, Google maintains a credential associated with the user even when the user is configured to use SSO via SAML.  Our SP and IdP are configured and the stub user is created in Google, so we’re good to test it out.

[Screenshot: the newly created user with its temporary password]

I open up Edge and navigate to the Google Apps login page, type in my username, and click the Next button.

[Screenshot: the Google Apps login page]

I’m then redirected to the Microsoft login page, where I authenticate using my Azure AD credentials and hit the sign in button.

[Screenshot: the Microsoft login page]

After successfully authenticating to Azure AD, I’m redirected back to Google and logged in to my newly created account.

[Screenshot: signed in to Google Apps with the newly created account]

So what happened in the background to make the magic happen?  Let’s take a look at a diagram and break down the Fiddler conversation.

[Diagram: SP-initiated SSO flow between Google and Azure AD]

The diagram above outlines the steps used to achieve the user experience.  First the user navigates to the Google login page (remember, this is SP-initiated SSO) and enters his or her username.  Google sends back an authentication request, seen below extracted from Fiddler, with instructions for the browser to deliver it to the Azure AD endpoint for our tenant.

[Screenshots: the SAML authentication request captured in Fiddler]
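
If you want to decode the SAMLRequest value yourself, remember the SAML HTTP Redirect binding base64-encodes and DEFLATE-compresses the XML.  A minimal sketch that reverses both steps on a value pasted from Fiddler (URL-decode it first):

# Decode a SAMLRequest captured from the redirect URL in Fiddler.
# The redirect binding wraps the XML in raw DEFLATE compression plus base64.
$samlRequest = Read-Host "Paste the URL-decoded SAMLRequest value"
$bytes = [Convert]::FromBase64String($samlRequest)
$stream = New-Object System.IO.MemoryStream(,$bytes)
$deflate = New-Object System.IO.Compression.DeflateStream($stream, [System.IO.Compression.CompressionMode]::Decompress)
(New-Object System.IO.StreamReader($deflate)).ReadToEnd()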

The user then authenticates to Azure AD and receives back a SAML response with instructions to deliver it back to Google. The user’s browser posts the SAML assertion to the Google endpoint and the user is successfully authenticated to Google.

[Screenshots: the SAML response and assertion captured in Fiddler]

Simple, right?  From an SSO perspective this was much more straightforward than the AWS integration.  Unlike the AWS integration, however, a stub user must exist in Google Apps for each user prior to using SSO.  This means there is some provisioning work to perform… or does it?  Azure AD’s integration again offers some degree of “provisioning”.  In my next post I’ll explore those capabilities and perform some simple actions with Google’s API.

See you next post!

Office 365 Group Naming Policies – Part 2

Office 365 Group Naming Policies – Part 2

Welcome back.


In my first post I covered some of the methods organizations use to enforce a naming standard for groups, such as Active Directory groups, that are used to authorize access to data.  I also covered the challenges that are introduced when a mechanism for enforcing the naming standard doesn’t exist or isn’t effective and how this problem is becoming more prevalent with the increase in consumption of software-as-a-service applications.

Office 365 Groups are a core foundational component of Office 365 helping to enable simple, fast, and efficient collaboration within an organization.  For an organization to take full advantage of this, its end users need to be empowered to spin up Office 365 groups for mail-based collaboration or create new Teams for real-time collaboration. To make this work, IT can’t get in the way of the business and needs to let the business spin up and down Office 365 Groups as it needs them.  Microsoft has introduced a number of solutions to help with this including group expiration, integration with Office 365 retention policies, and the feature I’ll cover today, naming policies.

The naming policy feature is still in private preview but today I’m going to show you how to test the feature in your own tenant.  As a point of reference, I’m using a set of trial O365 E5 and Azure AD Premium P2 licenses within the commercial offering of Office 365 for my testing.  I can’t speak to whether or not the instructions below will work for Office 365 GCC or Office 365 Government.

The first thing I needed to do was install the Azure AD Preview module.  To do this I had to first remove the existing Azure AD module installed on my system and then install the Azure AD Preview module, as seen below.

[Screenshot: removing the AzureAD module and installing the AzureADPreview module]
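
For reference, the swap is just two commands, assuming both modules come from the PowerShell Gallery:

# Remove the GA module, then install the preview module from the PowerShell Gallery.
Uninstall-Module -Name AzureAD
Install-Module -Name AzureADPreview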

Comparing the modules using Get-Command shows that the preview module has the new cmdlets below.

Add-AzureADAdministrativeUnitMember
Add-AzureADApplicationPolicy
Add-AzureADMSLifecyclePolicyGroup
Add-AzureADScopedRoleMembership
Add-AzureADServicePrincipalPolicy
Get-AzureADAdministrativeUnit
Get-AzureADAdministrativeUnitMember
Get-AzureADApplicationPolicy
Get-AzureADDirectorySetting
Get-AzureADDirectorySettingTemplate
Get-AzureADMSDeletedDirectoryObject
Get-AzureADMSDeletedGroup
Get-AzureADMSGroup
Get-AzureADMSGroupLifecyclePolicy
Get-AzureADMSLifecyclePolicyGroup
Get-AzureADObjectSetting
Get-AzureADPolicy
Get-AzureADPolicyAppliedObject
Get-AzureADScopedRoleMembership
Get-AzureADServicePrincipalPolicy
New-AzureADAdministrativeUnit
New-AzureADDirectorySetting
New-AzureADMSGroup
New-AzureADMSGroupLifecyclePolicy
New-AzureADObjectSetting
New-AzureADPolicy
Remove-AzureADAdministrativeUnit
Remove-AzureADAdministrativeUnitMember
Remove-AzureADApplicationPolicy
Remove-AzureADDirectorySetting
Remove-AzureADMSDeletedDirectoryObject
Remove-AzureADMSGroup
Remove-AzureADMSGroupLifecyclePolicy
Remove-AzureADMSLifecyclePolicyGroup
Remove-AzureADObjectSetting
Remove-AzureADPolicy
Remove-AzureADScopedRoleMembership
Remove-AzureADServicePrincipalPolicy
Reset-AzureADMSLifeCycleGroup
Restore-AzureADMSDeletedDirectoryObject
Set-AzureADAdministrativeUnit
Set-AzureADDirectorySetting
Set-AzureADMSGroup
Set-AzureADMSGroupLifecyclePolicy
Set-AzureADObjectSetting
Set-AzureADPolicy
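
If you want to reproduce the comparison yourself (before removing the GA module, or with both modules installed side by side), Compare-Object against each module’s commands does the trick:

# List cmdlets that exist in AzureADPreview but not in AzureAD.
$ga = (Get-Command -Module AzureAD).Name
$preview = (Get-Command -Module AzureADPreview).Name
Compare-Object -ReferenceObject $ga -DifferenceObject $preview |
    Where-Object { $_.SideIndicator -eq "=>" } |
    Select-Object -ExpandProperty InputObject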

The cmdlets we’re interested in for this demonstration are those used to create and manage a new Graph API resource type called a directorySetting. This resource type is used to configure settings within Azure Active Directory. directorySetting resources are created from a template of configuration settings called a directorySettingTemplate. Running the Get-AzureADDirectorySettingTemplate cmdlet displays the available templates from which a custom directorySetting can be built.

[Screenshot: output of Get-AzureADDirectorySettingTemplate]

After connecting to Azure AD using the Connect-AzureAD cmdlet, I can take a look at the templates available. The template I’m interested in for this blog is the Group.Unified template because it contains the settings for the naming policy, as seen below.

[Screenshot: the Group.Unified template settings]

Now that I’ve identified the template I want to draw from for a new directorySetting, I create a variable named $template and assign the Group.Unified template to it.  Running a quick Get-Member on the newly assigned variable displays a method named CreateDirectorySetting.  I’ll use this method to create a new instance of the directorySetting resource type based on the template and assign it to a variable named $setting.

[Screenshot: creating a directorySetting instance with the CreateDirectorySetting method]
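
The two steps above condense to a couple of lines; here’s a sketch of what I ran:

# Grab the Group.Unified template and instantiate a directorySetting from it.
$template = Get-AzureADDirectorySettingTemplate | Where-Object { $_.DisplayName -eq "Group.Unified" }
$setting = $template.CreateDirectorySetting()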

If I run Get-Member on $setting, I can see that I’ve created a new instance of the directorySetting resource type, which has the settings inherited from the Group.Unified template, with some of those settings configured with default values.

[Screenshot: default values on the new directorySetting instance]

You’ll want to pay attention to these default values because once the settings become active for the tenant, they seem to override settings configured within the GUI.  For example, if you are denying users the ability to create new Office 365 Groups via the configuration setting in the Azure Active Directory blade in the Azure Portal, leaving the EnableGroupCreation setting as true will override that.  I’m not sure that is the intended behavior, but hey, this is still preview, right?

The next step is to configure the PrefixSuffixNamingRequirement setting with the naming convention I want enforced across my tenant.  This Microsoft article does a good job explaining the options and the syntax.  I went with a simple naming convention that includes the fixed string “JOG”, the value of the user’s department attribute in Azure Active Directory, and finally the string the user chooses for the group name.

[Screenshot: configuring the PrefixSuffixNamingRequirement setting]
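
For reference, the value I set looks like the sketch below.  The underscore separators are my own choice; the [Department] and [GroupName] tokens are the substitution syntax described in the Microsoft article.

# Enforce JOG_<department>_<user-chosen name> for new Office 365 Groups.
$setting["PrefixSuffixNamingRequirement"] = "JOG_[Department]_[GroupName]"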

Checking the Values property of $setting shows that PrefixSuffixNamingRequirement is now populated with the value I entered above.

[Screenshot: the populated PrefixSuffixNamingRequirement value]

Now that the setting has been configured, I make it active by using the New-AzureADDirectorySetting cmdlet with the $setting object as input.

[Screenshot: activating the setting with New-AzureADDirectorySetting]
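
Activating the setting is a single cmdlet, and you can confirm the live values afterward with Get-AzureADDirectorySetting:

# Make the setting active for the tenant, then verify the stored values.
New-AzureADDirectorySetting -DirectorySetting $setting
(Get-AzureADDirectorySetting | Where-Object { $_.DisplayName -eq "Group.Unified" }).Values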

I then log into the Office 365 portal as a standard user, navigate to Outlook Web App, and attempt to create a new Office 365 Group. All new groups are now created using the naming convention I defined, and it’s displayed clearly to end users.

[Screenshot: the enforced naming convention shown in Outlook Web App]

Hopefully Microsoft will refine the documentation as the feature moves out of preview and into general availability.  I also think this is a simple and static enough setting that it would make sense to make it configurable from the GUI.  I’d also like to see the settings configurable with the directorySetting resource type kept in sync with any corresponding settings in the GUI to avoid confusion.

That’s all there is to it.  Overall it’s a very simple yet elegant solution that solves naming convention woes while giving the business freedom to collaborate without having to go through IT.  You can’t beat that.

Thanks!