AWS and Microsoft’s Cloud App Security


It seems like it’s become a weekly occurrence to have sensitive data exposed due to poorly managed cloud services.  Due to Amazon’s large market share with Amazon Web Services (AWS), many of these incidents involve publicly accessible Simple Storage Service (S3) buckets.  In the last six months alone there were highly publicized incidents involving FedEx and Verizon.  While the cloud can be empowering, it can also be very dangerous when there is a lack of governance, visibility, and acceptance of the different security mindset the cloud requires.

Organizations that have been in operation for many years have grown very reliant on the network boundary acting as the primary security boundary.  As these organizations begin to move to a software-defined data center model, this traditional boundary quickly becomes less and less effective.  Unfortunately for these organizations, that shift, in combination with an insufficient understanding of the cloud, gives rise to mistakes like sensitive data being exposed.

One way in which an organization can protect itself is to leverage technologies such as cloud access security brokers (CASBs, or cloud security gateways if you’re a Forrester reader) to help monitor and control data as it travels between on-premises and the cloud.  If you’re unfamiliar with the concept of a CASB, I covered it in a previous entry and included a link to an article which provides a great overview.

Microsoft has its own CASB offering called Microsoft Cloud App Security (CAS).  It’s offered as part of Microsoft’s Enterprise Mobility and Security (EMS) E5/A5 subscription.  Over the past several months multiple connectors to third-party software-as-a-service (SaaS) providers have been introduced, including one for AWS.  The capabilities with AWS are limited at this point to pulling administrative logs and user information, but it shows promise.

As per usual, Microsoft provides an integration guide which is detailed on the button pushing but light on the concepts and technical details of what is happening behind the scenes.  Since the Azure AD and AWS blog series has attracted so many views, I thought it would be fun and informative to do an entry on how Cloud App Security can be used with AWS.

I’m not in the habit of re-creating documentation so I’ll be referencing the Microsoft integration guide throughout the post.

The first thing that needs doing is the creation of a security principal in AWS Identity and Access Management (AWS IAM) that will be used by your tenant’s instance of CAS to connect to resources in your AWS account.   The first four steps are straightforward, but step 5 could use a bit of an explanation.

Here we’re creating a custom IAM policy for the security principal granting it a number of permissions within the AWS account.  IAM policies are sets of permissions which are attached to a human or non-human identity or AWS resource and are evaluated when a call to the resource is made.  In the traditional on-premises world, you can think of them as somewhat similar to a set of NTFS file permissions.  When the policy pictured above is created, the security principal is granted a set of permissions across all instances of CloudTrail, CloudWatch, and IAM within the account.

If you’re unfamiliar with AWS services, CloudTrail is a service which audits the API calls made to AWS resources.  Each event includes information such as the action taken, the resource the action was taken upon, the security principal that performed the action, the date and time, and the source IP address of that security principal.  The CloudWatch service allows for monitoring of metrics and optionally triggering events based upon metrics reaching specific thresholds.  The IAM service is AWS’s identity store for the cloud management layer.

Now that we have a basic understanding of the services, let’s look at the permissions Microsoft is requiring for CAS to do its thing.  The CloudTrail permissions of DescribeTrails, LookupEvents, and GetTrailStatus allow CAS to query for all trails enabled on an AWS account (CloudTrail is enabled by default on all AWS resources), look up events in a trail, and get information about the trail such as start and stop logging times.  The CloudWatch permissions of Describe* and Get* are fancy ways of asking for READ permissions on CloudWatch resources.  These permissions include describe-alarm-history, describe-alarms, describe-alarms-for-metric, get-dashboard, and get-metric-statistics.  The IAM permissions are similar to what’s being asked for in CloudWatch, basically asking for full read.
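Based strictly on the actions called out above, the custom policy boils down to something like the JSON below.  This is a sketch reconstructed from the permissions named in the text, not a copy of Microsoft’s exact policy document, so treat the action list and the wide-open Resource element as an approximation.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudtrail:DescribeTrails",
        "cloudtrail:LookupEvents",
        "cloudtrail:GetTrailStatus",
        "cloudwatch:Describe*",
        "cloudwatch:Get*",
        "iam:List*",
        "iam:Get*"
      ],
      "Resource": "*"
    }
  ]
}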

Step number 11 instructs us to create a new CloudTrail trail.  AWS by default audits all events across all resources and stores them for 90 days.  Trails enable you to direct events captured by CloudTrail to an S3 bucket for archiving, analysis, and responding to events.
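If you prefer the command line over the console, the equivalent of step 11 can be done with something along the lines of aws cloudtrail create-trail --name <trail name> --s3-bucket-name <bucket name>, where the trail and bucket names are placeholders for your own values.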


The trail created is consumed by CAS to read the information captured via CloudTrail.  The permissions requested above become a bit clearer now that we see CAS is requesting read access to all trails across an account for monitoring goodness.  I’m unclear as to why CAS is asking for read on CloudWatch alarms unless it has some integration where it monitors and reports on alarms configured for an AWS account.  The IAM read permissions are required so it can pull user information for the User Groups capability.

After the security principal is created and a sample trail is set up, it’s time to configure the connector for CAS.  Steps 12 – 15 walk through the process.  When it is complete, AWS shows as a connected app.


After a few hours data will start to trickle in from AWS.  Navigating to the Users and Accounts section shows all of the accounts found in the IAM instance for my AWS account.  Envision this as your one-stop shop for identifying all of the user accounts across your many cloud services.  A single pane of glass for identity across SaaS.


On the Activity Log I see all of the API activities captured by CloudTrail.  If I wanted to capture more audit information, I can enable CloudTrail for the relevant resource and point it to the trail I configured for CAS.  I haven’t tested what CAS does with multiple trails, but based upon the permissions we configured when we set up the security principal, it should technically be able to pull from any trail we create.


Since the CAS and AWS integration is limited to pulling logging information, let’s walk through an example of how we could use the data.  Take an example where an organization has a policy that the AWS root user should not be used for administrative activities due to the level of access the account gets by default.  The organization creates AWS IAM user accounts for each of its administrators who administer the cloud management layer.  In this scenario we can create a new policy in CAS to detect and alert on instances where the AWS root user is used.

First we navigate to the Policies page under the Control section of CAS.


On the Policies page we’re going to choose to create a new policy with the settings in the image below.  We’ll designate this as a high severity privileged account alert.  We’re interested in knowing anytime the account is used so we choose the Single Activity option.


We’ll pretend we were smart users of CAS and let it collect data for a few weeks to get a sampling of the types of events which are captured and to give us some data to analyze.  We also went the extra mile and leveraged the ability of CAS to pull in user information from AWS IAM such that we can choose the appropriate users from the drop-down menus.

Since this is a demo and my AWS lab has a lot of activity by the root account we’re going to limit our alerts to the creation of new AWS IAM users.  To do that we set our filter to look for an Activity type equal to Create user.  Our main goal is to capture usage of the root account so we add another filter rule that searches for a User with the name equal to aws root user where it is the actor in an event.


Finally we configure the alert to send an email to the administrator when the event occurs.  The governance capabilities don’t come into play in this use case.


Next we jump back to AWS and create a new AWS IAM user named testuser1.  A few minutes after the user is created we see the event appearing in CloudTrail.


After a few minutes, CAS generates an alert and I receive the email seen in the image below.   I’m given information as to the activity, the app, the date and time it was performed, and the client’s IP address.


If I bounce back to CAS I see one new Alert.  Navigating to the alert I’m able to dismiss it, adjust the policy that generated it, or resolve it and add some notes to the resolution.


I also have the option to dig deeper to see some of the patterns of the user’s behavior or the pattern of the behaviors from a specific IP address as seen below.



All this information is great, but what can we do with it?  In this example, it delivers visibility into the administrative activities occurring at the AWS cloud management layer by centralizing the data into a single repository, to which I can also send other data such as O365, Box, and Salesforce activity.  By centralizing the information I can begin doing some behavioral analytics to develop standard patterns of behavior for my user base.  Understanding standard behavior patterns is key to staying ahead of the bad guys, whether they be insiders or outsiders.  I can search for deviations from standard patterns to detect a threat before it becomes too widespread.  I can also be proactive about putting alerts and enforcement (available for other app connectors in CAS but not AWS at this time) in place to stop the behavior before the threat is realized.  If I supplement this data with log information from my on-premises proxy via Cloud App Discovery, I get an even larger sample, improving the quality of the data as well as giving me insight into shadow IT.  Pulling those “shadow” cloud solutions into the light allows me to ensure the usage of the services complies with organizational policies and opens up the opportunity of reducing costs by eliminating redundant services.

Microsoft categorizes the capabilities that help realize these benefits as the Discover and Investigate capabilities of CAS. The solution also offers a growing number of enforcement mechanisms (Microsoft categorizes these as Control) which add a whole other layer of power behind the solution.  Due to the limited integration with AWS I can’t demo those capabilities in this post.  I’ll cover those in a future post.

I hope this post helped you better understand the value that CASBs/CSGs like Microsoft’s Cloud App Security can bring to the table.  While the product is still very new and a bit sparse on support for third-party applications, the list is growing every day. I predict the capabilities provided by technology such as Microsoft’s Cloud App Security will be as standard to IT as a firewall in the years to come.  If you’re already in Office 365, you should be integrating these capabilities into your arsenal to understand the value they can bring to your business.

Thanks and have a wonderful week!

Deep Dive into Azure AD Domain Services – Part 3


Well folks, it’s time to wrap up this series on Azure Active Directory Domain Services (AAD DS).  In my first post I covered the basic configurations of the managed domain and in my second post I took a look at how well Microsoft did in applying security best practices and complying with NIST standards.  In this post I’m going to briefly cover the LDAPS over the Internet capability, summarize some key findings, and list out some improvements I’d like to see made to the service.

One of the odd features Microsoft provides with the AAD DS service is the ability to expose the managed domain over LDAPS to the Internet.  I really am lost as to the use case that drove the feature.  LDAP is very much a legacy on-premises protocol that has no place being exposed to the risks of the public Internet.  It’s the last thing the industry should be encouraging.  Just because you can, doesn’t mean you should.   Now let me step off the soapbox and take a look at the feature.

As I covered in my last post LDAPS is not natively enabled in the managed domain.  The feature must be configured and enabled through the Azure Portal.  The configuration consists of uploading the private key and certificate the service will use in the form of a PKCS12 file (*.PFX).  The certificate has a few requirements that are outlined in the instructions above.  After the certificate is validated, it takes about 10-15 minutes for the service to become available.  Beyond enabling the service within the VNet, you additionally have the option to expose the LDAPS endpoint to the Internet.


Microsoft provides instructions on how to restrict access to the endpoint to trusted IPs via a network security group (NSG) because yeah, exposing an LDAP endpoint to the Internet is just a tad risky.  To lock it down you simply associate an NSG with the subnet AAD DS is serving.  Once that is done, enable the service via the option in the image above and wait about 10 minutes.  After the service is up, register an external DNS record for the service that points to the IP address noted under the properties section of the AAD DS blade and you’re good to go.

For my testing purposes, I locked the external LDAPS endpoint down to the public IP address my Azure VM was SNATed to.  After that I created an entry in the hosts file of the VM mapping the external DNS name I gave the service to the public IP address of the LDAPS endpoint in order to bypass the split-brain DNS challenge.  Initiating a connection from LDP.EXE was a success.


Now that we know the service is running, let’s check out what the protocol support and cipher suite looks like.


Again we see the use of deprecated cipher suites. Here the risk is that much greater since a small mistake with an NSG could expose this endpoint directly to the Internet.  If you’re going to use this feature, please just don’t.  If you’re really determined to, don’t screw up your NSGs.

This series was probably one of the more enjoyable series I’ve done since I knew very little about the AAD DS offering. There were a few key takeaways that are worth sharing:

  • The more objects in the directory, the more expensive the service.
  • Users and groups can be created directly in the managed domain after a new organizational unit is created.
  • Password and lockout policy is insanely loose to the point where I can create an account with a three-character password (it just needs to meet complexity requirements) and accounts never lock out.  The policy cannot be changed.
  • RC4 encryption ciphers are enabled and cannot be disabled.
  • NTLMv1 is enabled and cannot be disabled.
  • The service does not support smart-card enforced users.  Yes, that includes both the users synchronized from Azure AD as well as any users you create directly in the managed domain.  If I had to guess, it’s probably due to the fact that you’re not a Domain Admin and so can’t add to the NTAuth certificate store.
  • LDAPS is not enabled by default.
  • Schema extensions are not supported.
  • Account-Based Kerberos Delegation is not supported.
  • If you are syncing identities to Azure AD, you’ll also need to synchronize your passwords.
  • The managed domain is very much “out of the box” defaults.
  • Microsoft creates a “god” account which is a permanent member of every privileged group in the forest.
  • Recovery of deleted objects created directly in the managed domain is not possible.  The rights have not been delegated to the AAD DC Administrators group.
  • The service does not allow for Active Directory trusts.
  • The SIDHistory attribute of users and groups sourced from Azure AD is populated with the primary group from the on-premises domain.

My verdict on AAD DS is that it’s not a very useful service in its current state.  Beyond small organizations, organizations with little to no reliance on legacy infrastructure, organizations without strong security requirements, and dev/QA purposes, I don’t see much use for it right now.  It comes off as a service in its infancy that has a lot of room to grow and mature.  Microsoft has gone a bit too far in the standardization/simplicity direction and needs to shift a bit in the opposite direction by allowing for more customization, especially in regards to security.

I’d really like to see Microsoft introduce the capabilities below.   All of them should be exposed via the resource blade in the Azure Portal if at all possible.  It would provide a singular administration point (which seems to be the strategy given the move of Azure AD and Intune to the Azure Portal) and would allow Microsoft to control how the options are enabled in the managed domain.  This means no more administrators blowing up their Active Directory forest because they accidentally shut off all the supported cipher suites for Kerberos.

  • Expose Domain Controller Event Logs to Azure Portal/Graph API and add support for AAD DS Power BI Dashboards
  • Support for Active Directory trusts
  • Out of the box provide a Red Forest model (get rid of that “god” account)
  • Option to disable risky cipher suites for both Kerberos and LDAPS
  • Option to harden the password and lockout policy
  • Option to disable NTLMv1
  • Option to turn on LDAP Debug Logging
  • Option to direct Domain Controller event logs to a SIEM
  • Option to restore deleted users and groups that were created directly in the managed domain.  If you allow creation, you need to allow for restoration.
  • Removal of Internet-accessible LDAPS endpoint feature or at least somehow incorporate the NSG lockdown feature directly into the AAD DS blade.

While the service has a lot of room for improvement, the direction of a managed Windows AD offering is spot on.  In the year 2018, there is no reason Windows AD shouldn’t be offered as a managed service.  The direction Microsoft has gone by sourcing the identities and credentials from Azure AD is especially creative.  It’s a solid step in the direction of creating a singular centralized identity service that provides both legacy and modern protocols.  I’ll be watching this service closely as Microsoft builds upon it over the next few months.

Thanks and see you next post!

Deep Dive into Azure AD Domain Services – Part 2


Welcome back to part 2 of my series on Microsoft’s managed services offering of Azure Active Directory Domain Services (AAD DS).  In my first post I covered some of the basic configuration settings of a default service instance.  In this post I’m going to dig a bit deeper and look at network flows, what type of secure tunnels are available for LDAPS, and examine the authentication protocols and supporting cipher suites configured for the service.

To perform these tests I leveraged a few different tools.  For a port scanner I used Zenmap.  To examine the protocols and cipher suites supported by the LDAPS service I used a custom openssl binary running on an Ubuntu VM in Azure.  For examination of the authentication protocol support I used Samba’s smbclient running on the Ubuntu VM in combination with WinSCP for file transfer, tcpdump for packet capture, and WireShark for packet analysis.

Let’s start off with examining the open ports since it takes the least amount of effort.  To do that I start up Zenmap, set the target to one of the domain controllers’ (DCs) IP addresses, choose the intense profile (why not?), and hit scan.  Once the scan is complete the results are displayed.


Navigating to the Ports / Hosts tab displays the open ports. All but one of them are straight out of the standard required ports you’d see open on a Windows Server functioning as an Active Directory DC.  An open port 443 deserves more investigation.


Let’s start with the obvious and attempt to hit the IP over an HTTPS connection but no luck there.


Let’s break out Fiddler and hit it again.  If we look at the first session where we build the secure tunnel to the website we see some of the details for the certificate being used to secure the session.  Opening the TextView tab of the response shows a Subject of CN=DCaaS Fleet Dc Identity Cert – 0593c62a-e713-4e56-a1be-0ef78f1a2793.  Domain Controller-as-a-Service, I like it Microsoft.  Additionally Fiddler identifies the web platform as the Microsoft HTTP Server API (HTTP.SYS).  Microsoft has been doing a lot more with that API since it’s much more lightweight than IIS.  I wanted to take a closer look at the certificate so I opened the website in Dev mode in Chrome and exported it.  The EKUs are normal for a standard use certificate and it’s self-signed and untrusted on my system.  The fact that the certificate is untrusted and Microsoft isn’t rolling it out to domain-joined members tells me whatever service is running on the port isn’t for my consumption.

So what’s running on that port?  I have no idea.  The use of the HTTP Server API and a self-signed certificate with a subject specific to the managed domain service tells me it’s providing access to some type of internal management service Microsoft is using to orchestrate the managed domain controllers.  If anyone has more info on this, I’d love to hear it.


Let’s now take a look at how Microsoft did at securing LDAPS connectivity to the managed domain.  LDAPS is not enabled by default in the managed domain and needs to be configured through the Azure AD Domain Services blade per these instructions.  Oddly enough Microsoft provides an option to expose LDAPS over the Internet.  Why any sane human being would ever do this, I don’t know but we’ll cover that in a later post.

I wanted to test SSLv3 and up and I didn’t want to spend time manipulating registry entries on a Windows client so I decided to spin up an Ubuntu Server 17.10 VM in Azure.  While the Ubuntu VM was spinning up, I created a certificate to be used for LDAPS using the PowerShell command referenced in the Microsoft article and enabled LDAPS through the Azure AD Domain Services resource in the Azure Portal.  I did not enable LDAPS for the Internet for these initial tests.

After adding the certificate used by LDAPS to the trusted certificate store on the Windows Server, I opened LDP.EXE and tried establishing an LDAPS connection over port 636, and we get a successful connection.


Once I verified the managed domain was now supporting LDAPS connections I switched over to the Ubuntu box via an SSH session.  Ubuntu removed SSLv3 support in the OpenSSL binary that comes pre-packaged with Ubuntu, so to test it I needed to build another OpenSSL binary.  Thankfully some kind soul out there on the Interwebz documented how to do exactly that without overwriting the existing version.  Before I could build a new binary I had to re-install the Make package and add the GNU Compiler Collection (GCC) package using the two commands below.

  • sudo apt-get install --reinstall make
  • sudo apt-get install gcc

After the two packages were installed I built the new binary using the instructions in the link, tested the command, and validated the binary now includes SSLv3.


After POODLE hit the news back in 2014, Microsoft along with the rest of the tech industry advised that SSLv3 be disabled.  Thankfully this basic, well-known vulnerability has been covered and SSLv3 is disabled.


SSLv3 is disabled, but what about TLS 1.0, 1.1, and 1.2?  How about the cipher suites?  Are they aligned with NIST guidance?  To test that I used a tool named TestSSLServer by Thomas Pornin.  It’s a simple command line tool which makes cycling through the available cipher suites quick and easy.


The options I chose perform the following actions:

  • -all -> Perform an “exhaustive” search across cipher suites
  • -t 1 -> Space out the connections by one second
  • -min tlsv1 -> Start with TLSv1
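Put together, the invocation looks something like TestSSLServer.exe -all -t 1 -min tlsv1 <DC IP address> 636, with the target IP and port adjusted for whichever LDAPS endpoint you’re testing.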

The command produces the output below.

TLSv1.0:
  server selection: enforce server preferences
  3-- (key: RSA) RSA_WITH_AES_256_CBC_SHA
  3-- (key: RSA) RSA_WITH_AES_128_CBC_SHA
  3-- (key: RSA) RSA_WITH_RC4_128_SHA
  3-- (key: RSA) RSA_WITH_RC4_128_MD5
TLSv1.1: idem
TLSv1.2:
  server selection: enforce server preferences
  3f- (key: RSA) ECDHE_RSA_WITH_AES_256_CBC_SHA384
  3f- (key: RSA) ECDHE_RSA_WITH_AES_128_CBC_SHA256
  3f- (key: RSA) DHE_RSA_WITH_AES_256_GCM_SHA384
  3f- (key: RSA) DHE_RSA_WITH_AES_128_GCM_SHA256
  3-- (key: RSA) RSA_WITH_AES_256_GCM_SHA384
  3-- (key: RSA) RSA_WITH_AES_128_GCM_SHA256
  3-- (key: RSA) RSA_WITH_AES_256_CBC_SHA256
  3-- (key: RSA) RSA_WITH_AES_128_CBC_SHA256
  3-- (key: RSA) RSA_WITH_AES_256_CBC_SHA
  3-- (key: RSA) RSA_WITH_AES_128_CBC_SHA
  3-- (key: RSA) RSA_WITH_RC4_128_SHA
  3-- (key: RSA) RSA_WITH_RC4_128_MD5

As can be seen from the output above, Microsoft is still supporting the RC4 cipher suites in the managed domain. RC4 has been known to be a vulnerable algorithm for years now and it’s disappointing to see it still supported, especially since I haven’t seen any option available to disable it within the managed domain. While 3DES still has a fair amount of usage, there have been documented vulnerabilities and NIST plans to disallow it for TLS in the near future. While commercial customers may be more willing to deal with the continued use of these algorithms, government entities will not.

Let’s now jump over to Kerberos and check out what cipher suites are supported by the managed DC. For that we pull up ADUC and check the msDS-SupportedEncryptionTypes attribute of the DC’s computer object. The attribute is set to a value of 28, which is the default for Windows Server 2012 R2 DCs. In ADUC we can see that this value translates to support of the following algorithms:
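  • 0x4 – RC4_HMAC_MD5
  • 0x8 – AES128_CTS_HMAC_SHA1_96
  • 0x10 – AES256_CTS_HMAC_SHA1_96

(For reference, 28 decimal is 0x1C, which is simply the sum of those three flag values.)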


Again we see more support for RC4 which should be a big no no in the year 2018. This is a risk that orgs using AAD DS will need to live with unless Microsoft adds some options to harden the managed DCs.

Last but not least I was curious if Microsoft had support for NTLMv1. By default Windows Server 2012 R2 supports NTLMv1 due to requirements for backwards compatibility. Microsoft has long recommended disabling NTLMv1 due to the documented issues with the security of the protocol. So has Microsoft followed their own advice in the AAD DS environment?

To check this I’m going to use Samba’s smbclient package on the Ubuntu VM. I’ll use smbclient to connect to the DC’s share from the Ubuntu box using the NTLM protocol. Samba enforces the use of NTLMv2 in smbclient by default, so I needed to make a modification to the global section of the smb.conf file by adding client ntlmv2 auth = no. This option disables NTLMv2 in smbclient and will force it to use NTLMv1.


After saving the changes to smb.conf I exit back to the terminal and try opening a connection with smbclient. The options I used do the following:

  • -L -> List the shares on my DC’s IP address
  • -U -> My domain user name
  • -m -> Use the SMB2 protocol
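Put together, the command looks something like smbclient -L <DC IP address> -U <domain user> -m SMB2.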


While I ran the command I also did a packet capture using tcpdump, which I moved over to my Windows box using WinSCP.  I then opened the capture with WireShark and navigated to the packet containing the Session Setup Request.  In the parsed capture we don’t see an NTLMv2 Response, which means NTLMv1 was used to authenticate to the domain controller, indicating NTLMv1 is supported by the managed domain controllers.


Based upon what I’ve observed from poking around and running these tests I’m fairly confident Microsoft is using a very out-of-the-box configuration for the managed Windows Active Directory domain.  There doesn’t seem to be much of an attempt to harden the domain against some of the older and well known risks.  I don’t anticipate this offering being very appealing to organizations with strong security requirements.  Microsoft should look to offer some hardening options that would be configurable through the Azure Portal.  Those hardening options are going to need to include some type of access to the logs like I mentioned in my last post.  Anyone who has tried to rid their network of insecure cipher suites or older authentication protocols knows the importance of access to the domain controller logs to the success of that type of effort.

My next post will be the final post in this series.  I’ll cover the option Microsoft provides to expose LDAPS to the Internet (WHY OH WHY WOULD YOU DO THAT?), summarize my findings, and mention a few other interesting things I came across during the study for this series.


Deep Dive into Azure AD Domain Services – Part 1


Hi everyone.  In this series of posts I’ll be doing a deep dive into Microsoft’s Azure AD Domain Services (AAD DS).  AAD DS is Microsoft’s managed Windows Active Directory service offered in Microsoft Azure Infrastructure-as-a-Service, intended to compete with similar offerings such as Amazon Web Services’ (AWS) Microsoft Active Directory.  Microsoft’s solution differs from other offerings in that it sources its user and group information from Azure Active Directory versus an on-premises Windows Active Directory or LDAP.

Like its competitors Microsoft realizes there are still a lot of organizations out there who are still very much attached to legacy on-premises protocols such as NTLM, Kerberos, and LDAP.  Not every organization (unfortunately) is ready or able to evolve its applications to consume SAML, OpenID Connect, OAuth, and REST-based APIs (yes COTS vendors, I’m talking to you and your continued reliance on LDAP authentication in the year 2018).  If the service has to be there, it makes sense to consume a managed service so staff can focus less on maintaining legacy technology like Windows Active Directory and focus more on modern Identity-as-a-Service (IDaaS), Software-as-a-Service (SaaS), and Platform-as-a-Service (PaaS) solutions.

Sounds great right?  Sure, but how does it work?  Microsoft’s documentation does a reasonable job giving the high level details of the service so I encourage you to read through it at some point.  I won’t be covering information included in that documentation unless I notice a discrepancy or an area that could use more detail.  Instead, I’m going to focus on the areas which I feel are important to understand if you’re going to attempt to consume the service in the same way you would a traditional on-premises Windows Active Directory.

With that introduction, let’s dig in.

The first thing I did was to install the Remote Server Administration Tools (RSAT) for Active Directory Domain Services and Group Policy Management tools.  I used these tools to explore some of the configuration choices Microsoft made in the managed service.  I also installed Microsoft Network Monitor 3.4 to review packet captures taken using netsh.

After the tools were installed I started a persistent network capture using netsh from an elevated command prompt.  This is an incredibly useful feature of Windows when you need to debug issues that occur prior to or during user or system logon.  I’ve used this for years to troubleshoot a number of Windows Active Directory issues including slow logons and failed logons.  The only downfall of this is you’re forced into using Microsoft Network Monitor or Microsoft Message Analyzer to review the packet captures it creates.  While Microsoft Message Analyzer is a sleek tool, the resources required to run it effectively are typically a non-starter for a lab or traditional work laptop so I tend to use Network Monitor.


After the packet capture was started I went through the standard process of joining the machine to the domain and rebooting the computer.  After reboot, I logged in with an account in the AAD DC Administrators Azure Active Directory group, started an elevated command prompt as the VM’s local administrator, and stopped the packet capture.  This provided me with a capture of the domain join, initial computer authentication, and initial user authentication.


While I know you’re as eager to dig into the packet capture as I am, I’ll cover that in a future post.  Instead I decided to break out the RSAT tools and poke around at configuration choices an administrator would normally make when building out a Windows Active Directory domain.

Let’s first open the tool everyone who touches Windows Active Directory is familiar with, Active Directory Users and Computers (ADUC).  The data layout (with the Advanced Features option on) for organizational units (OUs) and containers looks very similar to what we’re used to seeing, with the exception of the AADDC Computers, AADDC Users, and AADDSDomainAdmin OUs and the AADDSDomainConfig container.  I’ll get into these containers in a minute.


If we right-click the domain node and go to properties we see that the domain and forest are running at the Windows Server 2012 R2 domain and forest functional level with no trusts defined.  Examining the operating system tab of the two domain controllers in the Domain Controllers OU shows that both boxes run Windows Server 2012 R2.  Interesting that Microsoft chose not to use Windows Server 2016.


Navigating to the Security tab and clicking the Advanced button shows that the AAD DC Administrator group has only been granted the Create Organizational Unit objects permission while the AAD DC Service Accounts group has been granted Replicating Directory Changes.  As you can see from these permissions the base of the directory tree is very locked down.


Let me circle back to the OUs and containers I talked about above.  The AADDC Computers and AADDC Users OUs are the default OUs Microsoft creates for you.  Newly joined machines are added to the AADDC Computers OU and users synchronized from the Azure AD tenant are placed in the AADDC Users OU.  As we saw from the permissions above, we could use an account in the AAD DC Administrators group to create additional OUs under the domain node to delegate control to another set of more restricted admins, for the purposes of controlling GPOs if security filtering doesn’t meet our requirements, or for creating additional service accounts or groups for the workloads we deploy in the environment.  The permissions within the default OUs are very limited.  In the AADDC Computers OU, GPOs can be applied and computer objects can be added and removed.  In the AADDC Users OU only GPOs can be applied, which makes sense considering the user and group objects stored there are sourced from your authoritative Azure AD tenant.

The AADDSDomainAdmin OU contains a single security group named AADDS Service Administrators Group (pre-Windows 2000 name of AADDSDomAdmGroup).  The group contains a single member named dcaasadmin, which is the renamed built-in Active Directory administrator account.  The group is nested into a number of highly privileged built-in Active Directory groups including Administrators, Domain Admins, Domain Users, Enterprise Admins, and Schema Admins.  I’m very uncomfortable with Microsoft’s choice to make a “god” group and even a “god” user of the built-in administrator.  This directly conflicts with security best practices for Active Directory, which would see no account being a permanent member of these highly privileged groups or at the least divvying up the privileges among separate security principals.  I would have liked to see Microsoft leverage a Red Forest design here.  Hopefully we’ll see some improvements as the service matures.  I’m unsure as to the purpose of the OU and this group at this time.


The AADDSDomainConfig container contains a single container object named SchemaUpdate.  I reviewed the attributes of both containers hoping to glean some idea of their purpose, and the only thing of note I saw was that the revision attribute was set to 2.  Maybe Microsoft is tracking the schema of their standard managed domain image via this attribute?  In a future post in this series I’ll do a comparison of this managed domain’s schema with a fresh Windows Server 2012 R2 schema.


Opening Active Directory Sites and Services shows that Microsoft has chosen to leave the domain with a single site.  This design choice makes sense given that a limitation of AAD DS is that it can only serve a single region.  If that limitation is ever lifted, Microsoft will need to revisit this choice and perhaps include a site for each region.   Expanding the Default-First-Site-Name site and the Servers node shows the two domain controllers Microsoft is using to provide the Windows Active Directory service to the VNet.


So the layout is simple, what about the group policy objects (GPO)?  Opening up the Group Policy Management Console displays five GPOs which are included in every managed domain.


The AADDC Users GPO is empty of settings while the AADDC Computers GPO has a single Preference defined that adds the AAD DC Administrators group to the built-in Administrators group on any member servers added to the OU.  The Default Domain Controllers Policy (DDCP) GPO is your standard out of the box DDCP with nothing special set.  The Default Domain Policy (DDP) GPO on the other hand has a number of settings applied.  The password policy is interesting… I get that you have the option to source all the user accounts within your AAD DS domain from Azure AD, but Microsoft is still giving you the ability to create user accounts in the managed domain as I covered above, which makes me uncomfortable given the default password policy.  Microsoft hasn’t delegated the ability to create Fine Grained Password Policies (FGPPs) either, which means you’re stuck with this very lax password policy.  Given the lack of technical enforcement, I’d recommend avoiding creating user accounts directly in the managed domain for any purpose until Microsoft delegates the ability to create FGPPs.  The remaining settings in the policy are standard out of the box DDP.


The GPO named Event Log GPO is linked to the Domain Controllers OU and executes a startup script named EventLogRetentionPolicy.PS1.  Being the nosy geek I am, I dug through SYSVOL to find the script.  The script is very simple in that it sets each event log to overwrite events over 31 days old.  It then verifies the results and prints the results to the console.  Event logs are an interesting beast in AAD DS.  An account in the AAD DC Administrators group doesn’t have the right to connect to the Event Logs on the DCs remotely and I haven’t come across any options to view those logs.  I don’t see any mention of them in the Microsoft documentation, so my assumption is you don’t get access to them at this time.  I have to imagine this is a show stopper for some organizations considering the critical importance of Domain Controller logs.  If anyone knows how to access these logs, please let me know.  I’d like to see Microsoft incorporate an option to send the logs to a syslog agent via a configuration option in the Azure AD Domain Services blade in the Azure Portal.

I’m going to stop here today.  In my next post I’ll do some poking around by running a port scan against the managed domain controllers to see what network flows are open, enable LDAPS to see what the SSL/TLS landscape looks like, and examine the authentication protocols and algorithms supported (NTLMv1/v2, Kerberos DES, etc.).  Thanks for reading!

Integrating Azure AD and G-Suite – Automated Provisioning


Today I’ll wrap up my series on Azure Active Directory’s (Azure AD) integration with Google’s G-Suite.  In my first entry I covered the single sign-on (SSO) integration and in my second and third posts I gave an overview of Google’s Cloud Platform (GCP) and demonstrated how to access a G-Suite domain’s resources through Google’s APIs.  In this post I’m going to cover how Microsoft provides automated provisioning of users, groups, and contacts.  If you haven’t read through my posts on Google’s API (part 1, part 2) take a read through so you’re more familiar with the concepts I’ll be covering throughout this post.

SSO using SAML or OpenID Connect is a common capability of almost every cloud solution these days.  While that solves the authentication problem, the provisioning of users, groups, and other identity-related objects remains a challenge largely due to the lack of widely accepted standards (SCIM has a ways to go folks).  Vendors have a variety of workarounds including making LDAP calls back to a traditional on-premises directory (YUCK), supporting uploads of CSV files, or creating and updating identities in their local databases based upon the information contained in a SAML assertion or OpenID Connect id token.  A growing number of vendors are exposing these capabilities via a web-based API.  Google falls into this category and provides a robust selection of APIs to interact with its services from Gmail to resources within Google Cloud Platform, and yes even Google G-Suite.

If you’re a frequent user of Azure AD, you’ll have run into the automatic provisioning capabilities it brings to the table across a wide range of cloud services.  In a previous series I covered its provisioning capabilities with Amazon Web Services.  This is another use case where Microsoft leverages a third party’s robust API to simplify the identity management lifecycle.

In the SSO Quickstart Guide Microsoft provides for G-Suite it erroneously states:

“Google Apps supports auto provisioning, which is by default enabled. There is no action for you in this section. If a user doesn’t already exist in Google Apps Software, a new one is created when you attempt to access Google Apps Software.”

This simply isn’t true.  While auto provisioning via the API can be done, it is a feature you need to code to and isn’t enabled by default.  When you enable SSO to G-Suite and attempt to access it using an assertion containing the claim for a user that does not exist within a G-Suite domain you receive the error below.


This establishes what we already knew in that identities representing our users attempting SSO to G-Suite need to be created before the users can authenticate.  Microsoft provides a Quickstart for auto provisioning into G-Suite.  The document does a good job telling you where to click and giving some basic advice, but it really lacks detail on what’s happening in the background and how it works.

Let’s take a deeper look shall we?

If you haven’t already, add the Google Apps application from the Azure AD Application Gallery.  Once the application is added navigate to the blade for the application and select the Provisioning page.  Switch the provisioning mode from manual to automatic.


Right off the bat we see a big blue Authorize button which tells us that Microsoft is not using the service accounts pattern for accessing the Google API.  Google’s recommendation is to use the service account pattern when accessing project-based data rather than user specific data.  The argument can be made that G-Suite data doesn’t fall under project-based data and the service account credential doesn’t make sense.  Additionally using a service account would require granting the account domain-wide delegation for the G-Suite domain allowing the account to impersonate any user in the G-Suite domain.  Not really ideal, especially from an auditing perspective.

By using the Server-side Web Apps pattern, a new user in G-Suite can be created and assigned as the “Azure AD account”. The downfall of this is you’re stuck paying Google $10.00 a month for a non-human account. The price of good security practices I guess.


Microsoft documentation states that the account must be granted the Super Admin role. I found this surprising since you’re effectively giving the account god rights to your G-Suite domain. It got me wondering: what authorization scopes is Microsoft asking for? Let’s break out Fiddler and walk through the process that kicks off after clicking the Authorize button.

A new window pops up from Google requesting me to authenticate. Here Azure AD, acting as the OAuth client, has made an authorization request and has sent me along with the request over to Google, which is acting as the authorization server, to authenticate, consent to the access, and take the next step in the authorization flow.


When I switch over to Fiddler I see a number of sessions have been captured.  Opening the WebForms window of the first session shows a number of parameters that were passed to Google.


The first parameter gives us the three authorization scopes Azure AD is looking for.  The first two scopes are related to the Google Directory API, which makes sense if it wants to manage users and groups.  The /m8/feeds scope grants it access to manage contacts via the Google Contacts API.  This is an older API that uses XML instead of JSON to exchange information and looks like it has been/is being replaced by the Google People API.

Management of contacts via this API is where the requirement for an account in the Super Admin role originates.  Google documentation states that management of domain shared contacts via the /m8/feeds API requires an administrator username and password for Google Apps.  I couldn’t find any privilege in G-Suite which could be added to a custom Admin role that mentioned contacts.  Given Google’s own documentation along with the lack of an obvious privilege option, this may be a hard limitation of G-Suite.  Too bad too, because there are options for both Users and Groups.  Either way, the request for this authorization scope drives the requirement for Super Admin for the account Azure AD will be using for delegated access.

The redirect_uri is where Google sends the user after the authorization request is complete.  The response_type tells us Azure AD and Google are using the OAuth authorization code grant type flow.  The client_id is the unique identifier Google has assigned to Azure AD in whatever project Microsoft has it built in.  The approval_prompt setting of force tells Google to display the consent window and the data Azure AD wants to access.  Lastly, the access_type setting of offline allows Azure AD to access the APIs without the user being present, via a refresh token which will be issued along with the access token.  Let’s pay attention to that one once the consent screen pops up.
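Putting those parameters together, the authorization request Azure AD redirects the browser to looks roughly like the below.  The endpoint and values here are illustrative placeholders rather than the exact request Microsoft sends.

https://accounts.google.com/o/oauth2/v2/auth?
    client_id=<client id Google assigned to Azure AD>
    &redirect_uri=<Azure AD reply URL>
    &response_type=code
    &scope=<the three authorization scopes>
    &access_type=offline
    &approval_prompt=force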

I plug in valid super user credentials to my G-Suite domain and authenticate and receive the warning below.  This indicates that Microsoft has been naughty and hasn’t had their application reviewed by Google.  This was made a requirement back in July of 2017… so yeah… Microsoft maybe get on that?


To progress to the consent screen I hit the Advanced link in the lower left and opt to continue.  The consent window then pops up.


Here I see the friendly name Microsoft has registered its application under.  I’m also shown the scopes that the application wants to access, which jive with the authorization scopes we saw in Fiddler.  Remember that offline access Microsoft asked for?  See it mentioned anywhere in this consent page that I’m delegating this access to Microsoft perpetually as long as they ask for a refresh token?  This is one of my problems with OAuth and consent windows like this.  It’s entirely too vague as to how long I’m granting the application access to my data or to do things as me.  Expect to see these OAuth consent attacks continue to grow in use moving forward.  Why worry about compromising the user’s credentials when I can display a vague consent window and have them grant me access directly to their data?  Totally safe.

Hopping back to the window, I click the Allow button and the consent window closes.  Looking back at Fiddler I see that I received back an authorization code and posted it back to the redirect_uri designated in the original authorization request.


Switching back to the browser window for the Azure Portal, the screen updates and the Test Connection button becomes available.  Clicking the button initiates a quick check where Azure AD obtains an access token for the scopes it requires without any interaction from the user.  After the successful test I hit the Save button.


Switching to the browser window for the Google Admin Portal, let’s take a look at the data that’s been updated for the user I used to authorize Microsoft’s access.  For that I select the user, go to the Security section, and I now see that the Azure Active Directory service is authorized to the contacts, user, and group management scopes.


Switching back to the browser window for the Azure Portal I see some additional options are now available.


The mappings are really interesting and will look familiar to you if you’ve ever done anything with an identity management tool like Microsoft Identity Manager (MIM) or even Azure AD Sync.  The user mappings for example show which attributes in Azure AD are used to populate the attributes in G-Suite.


The attributes that have the Delete button grayed out are required by Google in order to provision new user accounts in a G-Suite domain.  The options available for deletion are additional data beyond what is required that Microsoft can populate on user accounts it provisions into G-Suite.  Selecting the Show advanced options button allows you to play with the schema Microsoft is using for G-Suite.   What I found interesting about this schema is it doesn’t match the resource representation Google provides for the API.  It would have been nice to match the two to make it more consumable, but they’re probably working off values used in the old Google Provisioning API, or they don’t envision many people being nerdy enough to poke around the schema.

Next up I toggle the provisioning status from Off to On and leave the Scope option set to sync only the assigned users and groups.


I then hit the Save button to save the new settings and after a minute my initial synchronization is successful.  Now nothing was synchronized, but it shows the credentials correctly allowed Azure AD to hit my G-Suite domain over the appropriate APIs with the appropriate access.


So an empty synchronization works, how about one with a user?  I created a new user with only the required attributes of display name and user principal name populated, assigned the new user to the Google Apps application, and gave Azure AD a night to run another sync.  Earlier tonight I checked the provisioning summary and verified the sync grabbed the new user.


Review of the audit logs for the Google Apps application shows that the new user was exported around 11PM EST last night.  If you’re curious, the sync between Azure AD and G-Suite occurs about every 20 minutes.


Notice that the FamilyName and GivenName attributes are set to a period.  I never set the first or last name attributes of the user in Azure AD, so both attributes are blank.  If we bounce back to the attribute mapping and look at the attributes for Google Apps, we see that FamilyName and GivenName are both required meaning Azure AD had to populate them with something.  Different schemas, different requirements.


Switching over to the Google Admin Console I see that the new user was successfully provisioned into G-Suite.


Pretty neat overall.  Let’s take a look at what we learned:

  • Azure AD supports single sign-on to G-Suite via SAML using a service provider-initiated flow where Azure AD acts as the identity provider and G-Suite acts as the service provider.
  • A user object with a login id matching the user’s login id in Azure Active Directory must be created in G-Suite before single sign-on will work.
  • Google provides a number of libraries for its API and the Google API Explorer should be used for experimentation with Google’s APIs.
  • Google’s Directory API is used by Azure AD to provision users and groups into a G-Suite domain.
  • Google’s Contacts API is used by Azure AD to provision contacts into a G-Suite domain.
  • A user holding the Super Admin role in the G-Suite domain must be used to authorize Azure AD to perform provisioning activities.  The Super Admin role is required due to the usage of the Google Contact API.
  • Azure AD’s authorization request includes offline access using refresh tokens to request additional access tokens to ensure the sync process can be run on a regular basis without requiring re-authorization.
  • Best practice is to dedicate a user account in your G-Suite domain to Azure AD.
  • Azure AD uses the Server-side Web pattern for accessing Google’s APIs.
  • The provisioning process will populate a period for any attribute that is required in G-Suite but does not have a value in the corresponding attribute in Azure AD.
  • The provisioning process runs a sync every 20 minutes.

Even though my coding is horrendous, I absolutely loved experimenting with the Google API.  It’s easy to realize why APIs are becoming so critical to a good solution.  With the increased usage of a wide variety of products in a business, being able to plug and play applications is a must.  The provisioning aspect Azure AD demonstrates here is a great example of the opportunities provided when critical functionality is exposed for programmatic access.

I hope you enjoyed the series, learned a bit more about both solutions, and got some insight into what’s going on behind the scenes.


Integrating Azure AD and G-Suite – Google API Integration Part 2


Welcome back.

We’ve had an interesting journey so far in this series exploring the Azure Active Directory (Azure AD) and Google G-Suite (G-Suite) integration.  In the first entry I covered how the single sign-on integration works.  The second entry explored the process of registering an application with Google Cloud Platform (GCP) for API access and GCP’s relationship to G-Suite.  Today I demonstrate interaction with Google’s API through the use of some very basic .NET console applications I’ve put together.

As you’ll recall from the second entry, I created a new project in GCP, created a service account identity for my sample application, and enabled the Google Admin API for the project.  The application has been granted domain-wide delegation rights to my G-Suite domain.  Finally, the application’s client ID has been added to my G-Suite domain and granted access to the appropriate scope.  The application has an identity, credentials to authenticate, and has been authorized with appropriate access.  Let’s get to some coding (or the mess of garbled logic that accounts for my coding ability 🙂 ).

Google provides API client libraries for a number of languages.  For the purposes of this demonstration, I’ll be using the .NET client library.  Play it smart and leverage the libraries vendors provide for ease of integration as well as avoiding critical mistakes that could impact the security of your API calls.  Now using libraries doesn’t mean you get to ignore what goes on behind the curtains.  It’s critical to understand what the libraries are doing to ensure they’re doing things securely and for the purposes of working out bugs.  APIs in the cloud are changing on a daily basis; don’t count on the libraries keeping up with the vendor pace.  The last thing you want to run into is a situation where your application kicks the bucket due to an API change that isn’t reflected in the library and you’re stuck with a broken production application.

With that lecture out of the way, let’s get into the code for the demo application.

My first step is to open a new project in Visual Studio creating a Visual C# Console App using the .NET Framework.  Before I jump into using Google’s libraries I’m going to do the exact opposite of what I advised you to do above.  Yes folks, I’m going to make a call to the API without using Google’s libraries for the purposes of demonstrating the steps involved in acquiring an access token and sending that access token to the API to retrieve data.  I’ll then demonstrate how much simpler it is to use the library so you’ll appreciate the value of them.

As I explained in my last post, Google APIs use OAuth 2.0 for delegated access to data.  Google does a good job explaining the basic steps in obtaining an access token in this article so I won’t go too deeply into detail.  Instead I’m going to walk through putting these steps into code.  Fair warning, this code is meant for demo purposes only.  This means I’m not going to do any data validation, exception catching, or exercise secure coding practices.  Please please please don’t use my terrible coding practices in any real applications you develop.

Ready?  Let’s begin.

The article above first instructs you to obtain OAuth 2.0 credentials, which I did in my last entry.  The second step is to obtain an access token from the Google Authorization Server.  I’ll be following the instructions for OAuth 2.0 for Service Accounts since I’m building a console application and want to keep things simple.

The first thing we need to do is collect a few pieces of information.  In the service account scenario we need to know the unique identifier of the service account and the scopes we want to access.  For that I pop open a browser and go to my project in the GCP console.  From there I navigate to the APIs & Services section and select the Credentials option.


On the Credentials page I click on my service account to bring up the relevant information.  The service account field contains the unique identifier (or email address, as Google refers to it) of the service account I set up.

This identifier is one of the pieces of information I’ll need to include in the JSON Web Token (JWT) I’m going to send to Google’s API to obtain an access token.  In addition to the service account identifier, I also need the scopes I want to access, the Google Authorization Server endpoint, the G-Suite user I’m impersonating, the expiration time I want for the access token (maximum of one hour), and the time the assertion was issued.
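As a rough sketch of what that claim set looks like inside the console app’s Main method (the service account identifier, scope, and impersonated user below are placeholders rather than my real values, and the token endpoint shown is the one Google’s service account documentation pointed to at the time):

```csharp
using System;
using System.Collections.Generic;

// Claim set for the service account JWT; all values below are placeholders
long now = DateTimeOffset.UtcNow.ToUnixTimeSeconds();
var claims = new Dictionary<string, object>
{
    { "iss", "my-service-account@my-project.iam.gserviceaccount.com" },           // service account identifier
    { "scope", "https://www.googleapis.com/auth/admin.directory.user.readonly" }, // scope(s) being requested
    { "aud", "https://www.googleapis.com/oauth2/v4/token" },                      // Google Authorization Server endpoint
    { "sub", "user@mydomain.com" },                                               // G-Suite user being impersonated
    { "iat", now },                                                               // time the assertion was issued
    { "exp", now + 3600 }                                                         // expiration (maximum of one hour)
};
```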


Now that I’ve collected the information I need for my claim set, I need to grab the private key from the PKCS#12 (.p12) certificate I was issued when I set up my service account.

At this point I have all the components I need to assemble my JWT.  For that I’m going to add the jose-jwt library to my application and leverage it to simplify the creation of the JWT.


Once the library is added I create the JWT.  Google requires it to include a Base64url-encoded header, a Base64url-encoded claim set, and a signature that provides my application’s identity and ensures the integrity of the JWT.  The signature is created using the RSA-SHA256 signing algorithm, which is the only algorithm Google supports at this time.
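Continuing the sketch from above (the certificate path is a placeholder; Google has historically issued service account .p12 files with the password "notasecret", but verify that for your key; depending on your jose-jwt version you may need to pass an RSACryptoServiceProvider rather than the RSA object shown here):

```csharp
using System.Security.Cryptography.X509Certificates;
using Jose;

// Load the private key from the PKCS#12 file issued with the service account
var certificate = new X509Certificate2(
    @"C:\keys\my-service-account.p12",   // placeholder path
    "notasecret",
    X509KeyStorageFlags.Exportable);

// jose-jwt Base64url encodes the header and claim set and signs them with RS256,
// producing the compact JWT that will be sent to Google
string token = JWT.Encode(claims, certificate.GetRSAPrivateKey(), JwsAlgorithm.RS256);
```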

If I quickly debug the app, dump the token variable to a string, copy it into Fiddler’s Text Wizard, and Base64 decode it, we can see the JWT we’ll be passing to Google.

I now have my JWT assembled and am ready to deliver it to Google with an access token request.  The next step is to submit it to Google’s authorization server.  For that I need to do a few things.  The first thing I need to do is create a new class that will act as the data model for the JSON response Google will return after I submit my authorization request.  I’ll deserialize that JSON response using this data model.
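A minimal sketch of that data model, assuming the standard OAuth 2.0 token response fields and Json.NET for deserialization (the class name is my own):

```csharp
using Newtonsoft.Json;

// Data model for the JSON returned by Google's authorization server
public class GoogleTokenResponse
{
    [JsonProperty("access_token")]
    public string AccessToken { get; set; }

    [JsonProperty("token_type")]
    public string TokenType { get; set; }   // expected to be "Bearer"

    [JsonProperty("expires_in")]
    public int ExpiresIn { get; set; }      // lifetime of the access token in seconds
}
```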


Next I create a new task that accepts a base web address, a URI, and a JWT.  The task uses an instance of the HttpClient class to post the JWT to Google’s authorization server and capture the JSON response.
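Here’s a rough sketch of that task (the method name and parameters are my own; the grant type shown is the jwt-bearer grant Google’s service account flow uses):

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

// Posts the signed JWT to Google's authorization server and returns the raw JSON response
static async Task<string> RequestAccessTokenAsync(string baseAddress, string uri, string jwt)
{
    using (var client = new HttpClient())
    {
        client.BaseAddress = new Uri(baseAddress);

        // The service account flow exchanges the JWT assertion for an access token
        var content = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            { "grant_type", "urn:ietf:params:oauth:grant-type:jwt-bearer" },
            { "assertion", jwt }
        });

        HttpResponseMessage response = await client.PostAsync(uri, content);
        return await response.Content.ReadAsStringAsync();
    }
}
```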

I then call the new task, pass the appropriate parameters, deserialize the JSON response using the data model I created earlier, and dump the bearer access token into a string variable.

So I have my bearer access token that allows me to impersonate the user within my G-Suite domain for the purposes of hitting the Google Directory API.  I whip up another task that delivers the bearer access token to the Google Directory API and captures the JSON response, which will include details about the user.
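Another rough sketch (the users endpoint shown is the Directory API users.get URL as I understand it, and the method name is my own):

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Calls the Google Directory API with the bearer token and returns the JSON describing the user
static async Task<string> GetUserAsync(string accessToken, string userKey)
{
    using (var client = new HttpClient())
    {
        // Present the bearer access token obtained from the authorization server
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        var uri = "https://www.googleapis.com/admin/directory/v1/users/" + userKey;
        HttpResponseMessage response = await client.GetAsync(uri);
        return await response.Content.ReadAsStringAsync();
    }
}
```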

Finally I call the task and dump the JSON response to the console.

The results look like the below.


Victory!  Whew, that was a lot of work to pull a small amount of data from an API.  I had to create models for the data (you’ll notice I got lazy at the end and didn’t create a user model), properly secure the JWT for the access token request, manage my own HTTP clients, and handle the JSON responses.  It ended up being a fair amount of code for a relatively simple process.

How much easier is it using Google’s .NET library?  Let’s take a look at it.

In the code below I create a service account credential, which handles creating the signed JWT and exchanging it for an access token.


Next up I create a new instance of the DirectoryService class which will represent the connection to the Google Directory API.  This class handles the assembly of the eventual POST to obtain the bearer access token.


Last but not least, I submit a request for user information using the instance of the DirectoryService class I created, placing the JSON response into an instance of the User class included with the Directory API library, which serves as the data model for the response.
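Pulling those three steps together, a sketch of the library version looks roughly like this (class names are from the Google.Apis.Auth and Google.Apis.Admin.Directory packages as I recall them; the path, service account email, scope, and impersonated user are placeholders):

```csharp
using System;
using System.Security.Cryptography.X509Certificates;
using Google.Apis.Auth.OAuth2;
using Google.Apis.Services;
using Google.Apis.Admin.Directory.directory_v1;

// Load the service account's private key from the PKCS#12 file
var certificate = new X509Certificate2(@"C:\keys\my-service-account.p12", "notasecret",
    X509KeyStorageFlags.Exportable);

// The credential builds and signs the JWT assertion and exchanges it for an access token
var credential = new ServiceAccountCredential(
    new ServiceAccountCredential.Initializer("my-service-account@my-project.iam.gserviceaccount.com")
    {
        Scopes = new[] { DirectoryService.Scope.AdminDirectoryUserReadonly },
        User = "user@mydomain.com"   // the G-Suite user being impersonated
    }.FromCertificate(certificate));

// The DirectoryService represents the connection to the Google Directory API
var service = new DirectoryService(new BaseClientService.Initializer
{
    HttpClientInitializer = credential,
    ApplicationName = "JOG-NET-CONSOLE"
});

// The response is deserialized into the library's User data model
var user = service.Users.Get("user@mydomain.com").Execute();
Console.WriteLine(user.PrimaryEmail);
```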

Just a tad easier using the library, right?  If we break out Fiddler we see that four sessions have been created.

Sessions 1 and 2 display the submission of the JWT and the acquisition of the bearer token.


Sessions 3 and 4 display the delivery of the bearer token to the API and the JSON response received back.


What did we learn from these past two posts (besides the fact that I’m a terrible developer)?  We saw that programmatic access to G-Suite is very dependent on foundational components of GCP.  Application identities are provisioned in the Google Cloud IAM instance associated with the GCP project.  The APIs the application needs must be enabled for the GCP project.  The application must then be granted access to the appropriate authorization scopes within the G-Suite domain, and Google provides an option for the application to impersonate users within the G-Suite domain rather than relying upon each user’s consent.

My hope is that the biggest lesson you took from these two posts is how valuable it is to understand the inner workings of how a vendor leverages a technology.  Vendor-provided libraries are wonderful and make our lives far easier and our code potentially simpler and more secure, but nothing is more powerful than understanding the inner workings of the foundational technology.  The knowledge you gain at the technology level translates across every vendor you encounter, making it that much easier to grasp the next new product.  Take your time, dig deep into the weeds, and enjoy the journey into the technology.

See you in the next post where I’ll wrap up this series and break down the Microsoft Azure AD identity provisioning integration with G-Suite.  Have a great week!



Integrating Azure AD and G-Suite – Google API Integration Part 1


Hi everyone,

Welcome to the second post in my series on the integration between Azure Active Directory (Azure AD) and Google’s G-Suite (formerly named Google Apps).  In my first entry I covered the single sign-on (SSO) integration between the two solutions.  That included a brief walkthrough of the configuration and an explanation of how the SAML protocol is used by both solutions to accomplish the SSO user experience.  I encourage you to read through that post before you jump into this one.

So we have single sign-on between Azure AD and G-Suite, but do we still need to provision the users and groups into G-Suite?  Thanks to Google’s Directory Application Programming Interface (API) and Azure AD’s integration with it, we can get automatic provisioning into G-Suite.  Before I cover how that integration works, let’s take a deeper look at Google’s Cloud Platform (GCP) and its API.

Like many of the modern APIs out there today, Google’s API is web-based and robust.  It was built on Google’s JavaScript Object Notation (JSON)-based API infrastructure and uses Open Authorization 2.0 (OAuth 2.0) to allow for delegated access to an entity’s resources stored in Google.  It’s nice to see vendors like Microsoft and Google leveraging standard protocols for interaction with their APIs, unlike some vendors… *cough* Amazon *cough*.  Google provides software development kits (SDKs) and shared libraries for a variety of languages.

Let’s take a look at the API Explorer.  The API Explorer is a great way to play around with the API without the need to write any code and to get an idea of the inputs and outputs of specific API calls.  I’m first going to do something very basic and retrieve a listing of users in my G-Suite directory.  Once I access the API Explorer I hit the All Versions menu item and select the Admin Directory API.


On the next screen I navigate down to the directory.users.list method and select it.  On the screen that follows I’m provided with a variety of input fields.  The data I put into these fields affects what data is returned to me from the API.  I put in the domain name associated with my G-Suite subscription and hit the Authorize and Execute button.  A new window pops up which allows me to configure which scope of access I want to grant the API Explorer.  I’m going to give it just the single scope it needs.


I then hit the Authorize and Execute button and I’m prompted to authenticate to Google and delegate API Explorer to access data I have permission to access in my G-Suite directory.  Here I plug in the username and password for a standard user who isn’t assigned to any G-Suite admin roles.


After successfully authenticating, I’m then prompted for consent to delegate API Explorer to view the users configured in the user’s G-Suite directory.


I hit the Allow button, the request for delegated access is complete, and a listing of users within my G-Suite directory is returned in JSON format.


Easy right?  How about we step it up a notch and create a new user?  For that operation I’ll be delegating access to API Explorer using an account which has been granted the G-Suite User Management admin role.  I navigate back to the main list of methods and choose the directory.users.insert method.  I then plug in the required values and hit the Authorize and Execute button.  The scopes menu pops up and I choose the scope that allows for provisioning of the user, then hit the Authorize and Execute button.  The request is made and a successful response is returned.
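For reference, the required values for directory.users.insert boil down to a primary email, a name, and a password.  The request body I plugged in looked something like the following (the email and password here are placeholders, not the values I actually used):

```json
{
  "primaryEmail": "marge.simpson@mydomain.com",
  "name": {
    "givenName": "Marge",
    "familyName": "Simpson"
  },
  "password": "aTemporaryPassword123"
}
```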


Navigating back to G-Suite and looking at the listing of users shows that the new user Marge Simpson has been created.


Now that we’ve seen some simple samples using API Explorer let’s talk a bit about how you go about registering an application to interact with Google’s API as well as covering some basic Google Cloud Platform (GCP) concepts.

The first thing I’m going to do is navigate to Google’s Getting Started page and create a new project.  So what is a project?  This took a bit of reading on my part because my prior experience with GCP was non-existent.  Think of a Google project like an Amazon Web Services (AWS) account or a Microsoft Azure subscription.  It acts as a container for billing, reporting, and organization of GCP resources.  Projects can be associated with a Google Cloud Organization (similar to how multiple Azure subscriptions can be associated with a single Microsoft Azure Active Directory (Azure AD) tenant), which is a resource available with a G-Suite subscription or Google Cloud Identity.  The picture below shows the organization associated with my G-Suite subscription.


Now that we have the concepts out of the way, let’s get back to the demo. Back at the Getting Started page, I click the Create a new project button and authenticate as the super admin for my G-Suite subscription. I’ll explain why I’m using a super admin later. On the next screen I name the project JOG-NET-CONSOLE and hit the next button.


The next screen prompts me to provide a name which will be displayed to the user when prompted for consent, should I decide to use an OAuth flow that requires user consent.


Next up I’m prompted to specify what type of application I’m integrating with Google.  For this demonstration I’ll be creating a simple console app, so I’m going to choose Web Browser simply to move forward.  I plug in a random unique value and click Create.


After creation is successful, I’m prompted to download the client configuration and provided with my Client ID and Client Secret.  The configuration file is in JSON format and provides information about the client’s registration and the authorization server’s (Google’s) OAuth endpoints.  This file can be consumed directly by the Google API libraries when obtaining credentials, if you’re going that route.
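From memory, the downloaded file is shaped roughly like this for a web-type client (every value below is a placeholder, and the exact set of fields may differ from what Google generates for you):

```json
{
  "web": {
    "client_id": "1234567890-abcdefg.apps.googleusercontent.com",
    "project_id": "jog-net-console",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://accounts.google.com/o/oauth2/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "client_secret": "REPLACE_WITH_CLIENT_SECRET",
    "redirect_uris": ["https://localhost"]
  }
}
```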


For the demo application I’m building I’ll be using the service account scenario often used for server-to-server interactions.  This scenario leverages a two-legged OAuth 2.0 flow in which the application authenticates with a signed JWT assertion (the JWT bearer grant) rather than obtaining authorization from a user.  No user consent is required in this scenario because the intention is for the service account to access its own data.  Google also provides the capability for the service account to be delegated the right to impersonate users within a G-Suite subscription.  I’ll be using that capability for this demonstration.

Back to the demo…

Now that my application is registered, I need to generate credentials I can use for the service account scenario.  For that I navigate to the Google API Console.  After successfully authenticating, I’m brought to the dashboard for the application project I created in the previous steps.  On this page I click the Credentials menu item.


The credentials screen displays the client IDs associated with the JOG-NET-CONSOLE project.  Here we see the client ID I received in the JSON file as well as a default one Google generated when I created the project.


Next up I click the Create Credentials button and select the Service Account key option.  On the Create service account key page I provide a unique name for the service account of Jog Directory Access.

The Role drop down box relates to the new roles that were introduced with Google’s Cloud IAM.

You can think of Google Cloud IAM as Google’s version of Amazon Web Services (AWS) IAM or Microsoft’s Azure Active Directory, in that each project has an associated instance used to manage access to the GCP resources.  When a new service account is created, a new security principal representing the non-human identity is created in the Cloud IAM instance backing the project.

Since my application won’t be interacting with GCP resources, I’ll choose the arbitrary role of Logs Viewer.  When I filled in the service account name, the service account ID field was automatically populated with a value.  The service account ID is unique to the project and represents the security principal for the application.  I choose the option to download the private key as a PKCS#12 (.p12) file because I’ll be using the System.Security.Cryptography.X509Certificates namespace within my application later on.  Finally I click the Create button and download the PKCS#12 file.
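As a quick preview of why that namespace matters, loading the downloaded key later on is a one-liner (the path is a placeholder; Google has historically assigned the password "notasecret" to service account .p12 files, but verify that for your key):

```csharp
using System.Security.Cryptography.X509Certificates;

// Load the service account's private key from the downloaded PKCS#12 file
var certificate = new X509Certificate2(
    @"C:\keys\jog-directory-access.p12",  // placeholder path
    "notasecret",
    X509KeyStorageFlags.Exportable);
```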


The new service account now shows in the credential page.



Navigating to the IAM & Admin dashboard now shows the application as a security principal within the project.


I now need to enable the APIs in my project that I want my applications to access.  For this I navigate to the APIs & Services dashboard and click the Enable APIs and Services link.


On the next page I use “admin” as my search term, select the Admin SDK and click the Enable button.  The API is now enabled for applications within the project.

From here I navigate down to the Service accounts page and edit the newly created service account to enable G-Suite domain-wide delegation.


At this point I’ve created a new project in GCP, created a service account that will represent the demo application, and given that application the right to impersonate users in my G-Suite directory.  I now need to authorize the application to access the G-Suite data via Google’s API.  For that I switch over to the G-Suite Admin Console, authenticate as a super admin, and access the Security dashboard.  From there I hit the Advanced Settings option and click the Manage API client access link.


On the Manage API client access page I add a new entry using the client ID I pulled previously and grant the application access to the required scope.  This allows the application to impersonate a user to pull a listing of users from the G-Suite directory.


Whew, a lot of new concepts to digest in this entry, so I’ll save the review of the application for the next entry.  Here’s a consumable diagram I put together showing the relationship between GCP projects, G-Suite, and a GCP Organization.  The G-Suite domain acts as a link to the GCP projects.  G-Suite users can set up GCP projects and have a stub identity (see my first entry) provisioned in the project.  When a service account is created in a project and granted G-Suite Domain-wide Delegation, we use the client ID associated with the service account to establish an identity for the app in the G-Suite domain, which is associated with a scope of authorized access.


In this post I covered some basic GCP concepts and saw that they are very similar to those of both Microsoft and AWS.  I also covered the process of creating a service account in GCP and how all the pieces come together to provide programmatic access to G-Suite resources.  In my next entry I’ll demo some simple .NET applications and walk through the code.

Have a great weekend and go Pats!