Capturing and Visualizing Office 365 Security Logs – Part 2

Hello again my fellow geeks.

Welcome to part two of my series on visualizing Office 365 security logs.  In my last post I walked through the process of getting the sign-in and audit logs and provided a link to some Lambdas I put together to automate pulling them down from Microsoft Graph.  Recall that the Lambda stores the files in raw format (with a small bit of transformation on the time stamps) in Amazon S3 (Simple Storage Service).  For this demonstration I modified the parameters for the Lambda to download the last 30 days of sign-in logs and to store them in an S3 bucket I use for blog demos.
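
If you’re curious what the core of that pull looks like without digging through the repo, here’s a minimal sketch of the idea.  It is not the actual Lambda code: it assumes you already have a Microsoft Graph access token in hand, uses a made-up bucket name, and skips the timestamp transformation and error handling the real Lambdas take care of.  It keeps the Graph response’s value wrapper intact, since that’s the shape the Athena table we’ll define shortly expects.

import json
from datetime import datetime, timedelta, timezone

import boto3
import requests

GRAPH_URL = "https://graph.microsoft.com/v1.0/auditLogs/signIns"
BUCKET = "my-blog-demo-bucket"  # hypothetical bucket name

def pull_signin_logs(access_token, days=30):
    """Page through the last <days> of sign-in logs from Microsoft Graph."""
    start = (datetime.now(timezone.utc) - timedelta(days=days)).strftime("%Y-%m-%dT%H:%M:%SZ")
    headers = {"Authorization": f"Bearer {access_token}"}
    url = GRAPH_URL
    params = {"$filter": f"createdDateTime ge {start}"}
    records = []
    while url:
        resp = requests.get(url, headers=headers, params=params)
        resp.raise_for_status()
        body = resp.json()
        records.extend(body.get("value", []))
        url = body.get("@odata.nextLink")  # Graph pages results; follow the link until it disappears
        params = None                      # the nextLink already carries its own query string
    return records

def store_raw(records):
    """Write the raw JSON to S3 so Athena can query it in place."""
    s3 = boto3.client("s3")
    key = f"signins/{datetime.now(timezone.utc):%Y-%m-%d}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps({"value": records}))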

When the logs are pulled from Microsoft Graph they come down in JSON (JavaScript Object Notation) format.  Love it or hate it, JSON is the common standard for exchanging information these days.  The schema for the JSON representation of the sign-in logs is fairly complex and deeply nested because there is a ton of great information in there.  Thankfully Microsoft has done a wonderful job of documenting the schema.  Now that we have the logs and the schema we can start working with the data.
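
To give you a sense of the nesting, below is a heavily trimmed, entirely made-up example of a single sign-in record.  The property names come from the documented schema (and match the columns in the query later in this post); the values are fictional, and real records carry many more properties.

{
  "id": "5e9f3a52-0000-0000-0000-3b2b1c9d8e7f",
  "createdDateTime": "2019-01-15T13:22:41Z",
  "userPrincipalName": "homer.simpson@contoso.com",
  "appDisplayName": "Office 365 Exchange Online",
  "ipAddress": "203.0.113.10",
  "clientAppUsed": "Browser",
  "conditionalAccessStatus": "notApplied",
  "deviceDetail": { "browser": "Chrome 71.0", "operatingSystem": "MacOs", "isCompliant": false },
  "location": {
    "city": "Springfield",
    "countryOrRegion": "US",
    "geoCoordinates": { "latitude": 44.05, "longitude": -123.02 }
  },
  "status": { "errorCode": 50126, "failureReason": "Invalid username or password." }
}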

When I first started this effort I had put together a Python function which transformed the files into a CSV using pipe delimiters.  As soon as I finished the function I wondered if there was an alternative way to handle it.  In comes Amazon Athena to the rescue with its support for the OpenX JSON SerDe library.  After reading through a few blogs (great AWS blog here), StackOverflow posts, and the official AWS documentation I was ready to put something together myself.  After some trial and error I put together a working DDL (Data Definition Language) statement for the data structure.  I’ve made the DDLs available on Github.
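
If you’d rather skip the console clicks entirely, the table can also be created programmatically.  The sketch below is a heavily trimmed stand-in for the DDLs in my repo: it declares only a handful of the sign-in fields, points at a hypothetical bucket, assumes the azuread database already exists, and runs the statement through Athena with boto3.  The important bits are the value array (matching the raw Graph response stored in S3) and the OpenX JSON SerDe.

import boto3

# Trimmed-down example DDL; the DDLs on Github cover the full schema.
DDL = """
CREATE EXTERNAL TABLE IF NOT EXISTS azuread.tbl_signin_demo (
  value array<struct<
    id:string,
    createddatetime:string,
    userprincipalname:string,
    ipaddress:string,
    clientappused:string,
    location:struct<city:string, countryorregion:string>,
    status:struct<errorcode:int, failurereason:string>
  >>
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://my-blog-demo-bucket/signins/'
"""

athena = boto3.client("athena")
athena.start_query_execution(
    QueryString=DDL,
    QueryExecutionContext={"Database": "azuread"},
    ResultConfiguration={"OutputLocation": "s3://my-blog-demo-bucket/athena-results/"},
)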

Once I had the schema defined, I created the table in Athena.  The official AWS documentation does a fine job of explaining the few clicks it takes to create a table, so I won’t re-create that here.  The DDLs I’ve provided above will make it a quick and painless process.

Let’s review what we’ve done so far.  We’ve set up a recurring job that pulls the sign-in and audit logs via the API and dumps all that juicy data into cheap object storage, which we can further manage with lifecycle policies.  We’ve then defined the schema for the data and made it available via standard SQL queries.  All without provisioning a server and for pennies on the dollar.  Not too shabby!

At this point you can use your analytics tool of choice, whether it be QuickSight, Tableau, Power BI, or one of the many other tools that have flooded the market over the past few years.  Since I don’t make any revenue from these blog posts, I like to go the cheap and easy route of using Amazon QuickSight.

After completing the initial setup of QuickSight I was ready to go.  The next step was to create a new data set.  For that I clicked the Manage Data button and selected New Data Set.

[Screenshot: QuickSight Manage Data screen with the New Data Set option]

On the Create a Data Set screen I selected the Athena option and created a name for the data source.

[Screenshot: Create a Data Set screen with Athena selected as the source]

From there I selected the database in Athena, which for me was named azuread.  The tables within the database are then populated, and I chose tbl_signin_demo, which points to the demo S3 bucket I mentioned previously.

[Screenshot: Choosing the azuread database and the tbl_signin_demo table]

Due to the complexity of the data structure I opted to use a custom SQL query.  There is no reason why you couldn’t instead materialize this query as a table or view in Athena and connect to that, making it more consumable for a wider array of users.  It’s really up to you, and I honestly don’t know what the appropriate “big data” way of doing it is.  Either way, those of you with real SQL skills may want to look away from this query lest you experience a Raiders of the Lost Ark moment.

[Image: Raiders of the Lost Ark face-melting scene]

You were warned.

SELECT records.id, records.createddatetime, records.userprincipalname, records.userDisplayName, records.userid,
       records.appid, records.appdisplayname, records.ipaddress, records.clientappused,
       records.mfadetail.authdetail AS mfadetail_authdetail, records.mfadetail.authmethod AS mfadetail_authmethod,
       records.correlationid, records.conditionalaccessstatus,
       records.appliedconditionalaccesspolicy.displayname AS cap_displayname,
       array_join(records.appliedconditionalaccesspolicy.enforcedgrantcontrols,' ') AS cap_enforcedgrantcontrols,
       array_join(records.appliedconditionalaccesspolicy.enforcedsessioncontrols,' ') AS cap_enforcedsessioncontrols,
       records.appliedconditionalaccesspolicy.id AS cap_id, records.appliedconditionalaccesspolicy.result AS cap_result,
       records.originalrequestid, records.isinteractive, records.tokenissuername, records.tokenissuertype,
       records.devicedetail.browser AS device_browser, records.devicedetail.deviceid AS device_id,
       records.devicedetail.iscompliant AS device_iscompliant, records.devicedetail.ismanaged AS device_ismanaged,
       records.devicedetail.operatingsystem AS device_os, records.devicedetail.trusttype AS device_trusttype,
       records.location.city AS location_city, records.location.countryorregion AS location_countryorregion,
       records.location.geocoordinates.altitude, records.location.geocoordinates.latitude, records.location.geocoordinates.longitude,
       records.location.state AS location_state,
       records.riskdetail, records.risklevelaggregated, records.risklevelduringsignin, records.riskstate, records.riskeventtypes,
       records.resourcedisplayname, records.resourceid, records.authenticationmethodsused,
       records.status.additionaldetails, records.status.errorcode, records.status.failurereason
FROM "azuread"."tbl_signin_demo" CROSS JOIN (UNNEST(value) as t(records))

This query will de-nest the data and give you a detailed (possibly extremely large depending on how much data you are storing) parsed table. I was now ready to create some data visualizations.
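
If you do want to make the data more consumable for a wider audience, one option (as mentioned above) is to wrap the query in an Athena view and point QuickSight at that instead.  Below is a minimal sketch with a hypothetical view name and only a handful of the columns; paste in the full column list from the query above for the real thing.  The second statement shows the sort of aggregation behind the failed-login visuals that follow, on the assumption that a status errorcode of 0 means a successful sign-in.  Both can be run with the same athena.start_query_execution() call shown earlier.

# Hypothetical view wrapping the de-nesting query (only a few columns shown).
CREATE_VIEW = """
CREATE OR REPLACE VIEW azuread.vw_signins AS
SELECT records.createddatetime, records.userprincipalname, records.ipaddress,
       records.location.countryorregion AS location_countryorregion,
       records.status.errorcode AS status_errorcode,
       records.status.failurereason AS status_failurereason
FROM "azuread"."tbl_signin_demo" CROSS JOIN (UNNEST(value) as t(records))
"""

# Failed sign-ins per user per day; assumes the stored timestamp is ISO 8601.
FAILED_LOGINS_PER_DAY = """
SELECT userprincipalname,
       date(from_iso8601_timestamp(createddatetime)) AS signin_date,
       count(*) AS failed_signins
FROM azuread.vw_signins
WHERE status_errorcode <> 0
GROUP BY 1, 2
ORDER BY 2, 1
"""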

The first visual I made was a geospatial visual using the location data included in the logs, filtered to failed logins.  Not surprisingly, our friends in China have shown a real interest in my wife’s and my Office 365 accounts.

[Screenshot: Map of failed login attempts by location]

Next up I was interested in seeing if there were any patterns in the frequency of the failed logins.  For that I created a simple line chart showing the number of failed logins per user account in my tenant.  Interestingly enough the new year meant back to work for more than just you and me.

[Screenshot: Line chart of failed logins per user account over time]

As I mentioned earlier, Microsoft provides a ton of great detail in the sign-in logs.  Beyond just location, they also provide reasons for login failures.  I next created a stacked bar chart to show the different reasons for failed logins by user.  I found the blocked sign-ins by malicious IPs interesting.  It’s nice to know that is being tracked and taken care of.

[Screenshot: Stacked bar chart of login failure reasons by user]

Failed logins are great, but the other thing I was interested in was successful logins and user behavior.  For this I created a vertical stacked bar chart that displayed the successful logins by user by device operating system (yet more great data captured in the logs).  You can tell from the bar on the right that my wife is a fan of her Mac!

[Screenshot: Successful logins by user and device operating system]

As I gather more data I plan on creating some more visuals, but this was a great start.  The geospatial one is my favorite.  If you have access to a larger data set with a diverse set of users, your data should prove fascinating.  Definitely share any graphs or interesting data points you end up putting together if you opt to do some of this analysis yourself.  I’d love some new ideas!

That will wrap up this series.  As you’ve seen, the modern toolsets available to you can do some amazing things for cheap without forcing you to maintain the infrastructure behind them.  Vendors are also doing a wonderful job providing a metric ton of data in their logs.  If you take the initiative to understand the product and the data, you can glean some powerful information that has both security and business value.  Even better, you can create some simple visuals to communicate that data to a wide variety of audiences, making it that much more valuable.

Have a great weekend!

 

Capturing and Visualizing Office 365 Security Logs – Part 1

Welcome back again my fellow geeks!

I’ve been busy over the past month nerding out on some pet projects.  I thought it would be fun to share one of those pet projects with you.  If you had a chance to check out my last series, I walked through my first Python experiment, which was to write a reusable tool that could be used to pull data from Microsoft’s Graph API (Microsoft Graph).

For those of you unfamiliar with Microsoft Graph, it’s the RESTful API (application programming interface) used to interact with Microsoft cloud offerings such as Office 365 and Azure.  You’ve probably been interacting with it without even knowing it through the many PowerShell modules Microsoft has released to programmatically interact with those services.

One of the many resources which can be accessed through Microsoft Graph is the set of Azure AD (Active Directory) security and audit reports.  If you’re using Office 365, Microsoft Azure, or simply Azure AD as an identity platform for SSO (single sign-on) to third-party applications like Salesforce, these reports provide critical security data.  You’re going to want to capture them, store them, and analyze them.  You’re also going to have to account for the limited window during which Microsoft makes these logs available.

The challenge is that they are not available via the means by which logs have traditionally been captured on-premises, such as syslogd, a SIEM agent, or even Windows Event Log Forwarding.  Instead you’ll need to take a step forward in evolving the way you’re used to doing things.  This is what moving to the cloud is all about.

Microsoft allows you to download the logs manually via the Azure Portal GUI (graphical user interface) or capture them by programmatically interacting with Microsoft Graph.  While the former option may work for ad-hoc use cases, it doesn’t scale.  Instead we’ll explore the latter method.
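
To give you a taste of what “programmatically” means here, below is a minimal sketch of one common way to get an app-only token for Microsoft Graph: the OAuth client credentials grant.  Treat it as illustrative rather than a copy of what my Lambdas do; it assumes you’ve already registered an application in Azure AD and granted it the permissions needed to read the reports, and the tenant ID, client ID, and secret are placeholders (the secret belongs in a proper secret store, which I’ll get to shortly).

import requests

TENANT_ID = "your-tenant-id"         # placeholder
CLIENT_ID = "your-app-client-id"     # placeholder
CLIENT_SECRET = "your-app-secret"    # placeholder; keep this out of source control

def get_graph_token():
    """Request an app-only access token for Microsoft Graph using client credentials."""
    url = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"
    payload = {
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": "https://graph.microsoft.com/.default",
    }
    resp = requests.post(url, data=payload)
    resp.raise_for_status()
    return resp.json()["access_token"]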

If you have an existing enterprise-class SIEM (Security Information and Event Management) solution such as Splunk, you’ll have an out-of-the-box integration.  However, what if you don’t have such a platform, your organization isn’t yet ready to let that platform reach out over the Internet, or you’re interested in doing this for a personal Office 365 subscription?  I fell into the last category and decided it would be an excellent use case to get some experience with Python and Microsoft Graph and take advantage of some of the data services offered by AWS (Amazon Web Services).  This is the use case and solution I’m going to cover in this post.

Last year I had a great opportunity to dig into operational and security logs to extract useful data to address some business problems.  It was my first real opportunity to examine large amounts of data and to create different visualizations of that data to extract useful trends about user and application behavior.  I enjoyed the hell out of it and thought it would be fun to experiment with my own data.

I decided that my first use case would be Office 365 security logs.  As I covered in my last series my wife’s Office 365 account was hacked.  The damage was minor as she doesn’t use the account for much beyond some crafting sites (she’s a master crocheter as you can see from the crazy awesome Pennywise The Clown she made me for Christmas).

[Photo: The crocheted Pennywise the Clown]

The first step in the process was determining an architecture for the solution.  I gave myself a few requirements:

  1. The solution must not be dependent on my home lab infrastructure
  2. Storage for the logs must be cheap and readily available
  3. The credentials used in my Python code need to be properly secured
  4. The solution must be automated and notify me of failures
  5. The data needs to be available in a form that can be examined with an analytics solution

Based upon the requirements I decided to go the serverless (don’t hate me for using that tech buzzword 🙂 ) route.  My decisions were:

  • AWS Lambda would run my code
  • Amazon CloudWatch Events would be used to trigger the Lambda once a day to download the last 24 hours of logs
  • Amazon S3 (Simple Storage Service) would store the logs
  • AWS Systems Manager Parameter Store would store the parameters my code used, leveraging AWS KMS (Key Management Service) to encrypt the credentials used to interact with Microsoft Graph (a minimal sketch of this follows the list)
  • Amazon Athena would hold the schema for the logs and make the data queryable via SQL
  • Amazon QuickSight would be used to visualize the data by querying Amazon Athena
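
To give a flavor of the Parameter Store piece called out above, here’s a minimal sketch of how a Lambda can read an encrypted credential at runtime.  The parameter name is hypothetical; the point is that KMS decryption happens transparently when you ask for it.

import boto3

ssm = boto3.client("ssm")

def get_client_secret():
    """Fetch the Graph client secret from Parameter Store, letting KMS decrypt it."""
    response = ssm.get_parameter(
        Name="/o365-logs/client-secret",   # hypothetical parameter name
        WithDecryption=True,               # SecureString parameters are decrypted via KMS
    )
    return response["Parameter"]["Value"]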

The high level architecture is pictured below.

[Diagram: High-level architecture of the solution]

I had never written a Lambda before, so I spent a few days looking at some examples and doing the typical Hello World that we all do when we’re learning something new.  From there I took the framework of Python code I put together for general-purpose queries to Microsoft Graph and adapted it into two Lambdas.  One Lambda would pull the sign-in logs while the other would pull the audit logs.  I also wanted a repeatable way to provision the Lambdas to share with others, get some CloudFormation practice, and brush up on my very dusty Bash scripting.  The results are located here in one of my Github repos.

I’m going to stop here for this post because we’ve covered a fair amount of material.  Hopefully after reading this post you understand that you have to take a new tack to get logs for cloud-based services such as Azure AD.  Thankfully the cloud has brought us a whole new toolset we can use to automate the extraction and storage of those logs in a simple and secure manner.

In my next post I’ll walk through how I used Athena and QuickSight to put together some neat dashboards to satisfy my nerdy interests and get better insight into what’s happening on a daily basis with my Office 365 subscription.

See you next post and go Pats!

Exploring Azure AD Privileged Identity Management (PIM) – Part 1

We’re going to take a break from Azure Information Protection and shift our focus to Azure Active Directory Privileged Identity Management (AAD PIM).

If you’ve ever had to manage an application, you’re familiar with the challenge of trying to keep a balance between security and usability when it comes to privileged access.  In many cases you’re stuck with users that have permanent membership in privileged roles because the impact to the usability of the application is far too great to manage that access on an “as needed” basis, or as we refer to it in the industry, “just in time” (JIT).  If you do manage to remove that permanent membership requirement (often referred to as standing privileged access), you’re typically stuck with a complicated automation solution or a convoluted engineering solution that gives you security, but at the cost of usability and increased operational complexity.

Not long ago the privileged roles within Azure Active Directory (AAD), Office 365 (O365), and Azure Role-Based Access Control had this same problem.  Either a user was a permanent member of the privileged role or you had to string together some type of request workflow that interacted with the Graph API or triggered a PowerShell script.  In my first foray into Azure AD, I had a convoluted manual process which involved requests, approvals, and a centralized password management system.  It worked, but it definitely impacted productivity.

Thankfully Microsoft (MS) has addressed this challenge with the introduction of Azure AD Privileged Identity Management (AAD PIM).  In simple terms, AAD PIM introduces the concept of an “eligible” administrator, which allows you to achieve that oh so wonderful JIT.  AAD PIM is capable of managing a wide variety of roles, which is another area where Microsoft has made major improvements.  Just a few years ago close to everything required membership in the Global Admin role, which was a security nightmare.

In addition to JIT, AAD PIM also provides a solid level of logging and analytics, a centralized view into what users are members of privileged roles, alerting around the usage of privileged roles, approval workflow capabilities (love this feature), and even provides an access review capability to help with access certification campaigns.  You can interact with AAD PIM through the Azure Portal, Graph API, or PowerShell.

To get JIT you’ll need an Azure Active Directory Premium P2 or Enterprise Mobility + Security E5 license.  Microsoft states that every user that benefits from the feature requires a license.  While this is a licensing requirement, it’s not technically enforced, as we’ll see in my upcoming posts.

You’re probably saying, “Well this is all well and good Matt, but there is nothing here I couldn’t glean from Microsoft documentation.”  No worries my friends, we’ll be using this series to walk through and demonstrate the capabilities so you can see them in action.  I’ll also be breaking out my favorite tool, Fiddler, to take a look behind the scenes at how Microsoft manages to elevate access for the user after a privileged role has been activated.

 

The Evolution of AD RMS to Azure Information Protection – Part 7 – Deep Dive into cross Azure AD tenant consumption

Each time I think I’ve covered what I want to for Azure Information Protection (AIP), I think of another fun topic to explore.  In this post I’m going to look at how AIP can be used to share information with users that exist outside your tenant.  We’ll be looking at the scenario where an organization has a requirement to share protected content with another organization that has an Office 365 tenant.

Due to my requirements to test access from a second tenant, I’m going to supplement the lab I’ve been using.  I’m adding to the mix my second Azure AD tenant at journeyofthegeek.com.  Specific configuration items to note are as follows:

  • The tenant’s custom domain of journeyofthegeek.com is an Azure AD (AAD)-managed domain.
  • I’ve created two users for testing.  The first is named Homer Simpson (homer.simpson@journeyofthegeek.com) and the second is Bart Simpson (bart.simpson@journeyofthegeek.com).
  • Each user has been licensed with Office 365 E3 and Enterprise Mobility + Security E5 licenses.
  • Three mail-enabled security groups have been created.  The groups are named The Simpsons (thesimpsons@journeyofthegeek.com), JOG Accounting (jogaccounting@journeyofthegeek.com), and JOG IT (jogit@journeyofthegeek.com).
  • Homer Simpson is a member of The Simpsons and JOG Accounting while Bart Simpson is a member of The Simpsons and JOG IT.
  • Two additional AIP policies have been created in addition to the Global policy.  One policy is named JOG IT and one is named JOG Accounting.
  • The Global AIP policy has an additional label created named PII that enforces protection.  The label is configured to detect at least one occurrence of a US social security number.  The label’s protection policy grants only members of The Simpsons group the Viewer role.
  • The JOG Accounting and JOG IT AIP policies have each been configured with an additional label of either JOG Accounting or JOG IT.  A sublabel for each label has also been created which enforces protection and grants members of the relevant departmental group the Viewer role.
  • I’ve repurposed the GIWCLIENT2 machine and have created two local users named Bart Simpson and Homer Simpson.

Once I had my tenant configuration up and running, I initialized Homer Simpson on GIWCLIENT2.  I already had the AIP client installed on the machine, so upon first opening Microsoft Word, the same bootstrapping process I described in my previous post occurred for the MSIPC client and the AIP client.  Notice that the Confidential \ All Employees label has been applied to the document automatically, as configured in the Global AIP policy.  Notice also the Custom Permissions option, which is presented to the user because I’ve enabled the appropriate setting in the relevant AIP policies.

[Screenshot: Word document with the Confidential \ All Employees label applied and the Custom Permissions option visible]

I’ll be restricting access to the document by allowing users in the geekintheweeds.com organization to hold the Viewer role.  The geekintheweeds.com domain is associated with my other Azure AD tenant, the one I have been using for the lab in this series of posts.  The first thing I do is change the classification label from Confidential \ All Employees to General.  That label is a default label provided by Microsoft which has an RMS template applied that restricts viewers to users within the tenant.

One interesting finding I discovered through my testing is that the user can go through the process of protecting with custom permissions using a label that has a pre-configured template and the AIP client won’t throw any errors, but the custom permissions won’t be applied.  This makes perfect sense from a security perspective, but it would be nice to inform the user with an error or warning.  I can see this creating unnecessary help desk calls with how it’s configured now.

When I attempt to change my classification label to General, I receive a prompt requiring me to justify the drop in classification.  This is yet another setting I’ve configured in my Global AIP policy.  This seems to be a standard feature in most data classification solutions, based on what I’ve observed of another major vendor’s product.

[Screenshot: Justification prompt when lowering the classification label]

After I successfully classify the document with the General label, protection is removed from the document.  At this point I can apply my custom permissions as seen below.

[Screenshot: Applying custom permissions to the document]

I repeated the process for another protected doc named jog_protected_for_Ash_Williams.docx with permissions restricted to ash.williams@geekintheweeds.com.  I packaged both files into an email and sent them to Ash Williams who is a user in the Geek In The Weeds tenant.  Keep in mind the users in the Geek In The Weeds tenant are synchronized from a Windows Active Directory domain and use federated authentication.

After I open Outlook, the email from Homer Simpson arrives in Ash Williams’ inbox.  At this point I copied the files to my desktop, closed Outlook, opened Microsoft Word, used the “Reset Settings” option of the AIP client, and signed out of my Office profile.

[Screenshot: The AIP client’s Reset Settings option]

At this point I started Fiddler and opened one of the Microsoft Word documents.  Microsoft Word pops up a login prompt where I type in my username of ash.williams@geekintheweeds.com, and I’m authenticated to Office 365 through the standard federated authentication flow.  The document then pops open.

[Screenshot: Sign-in prompt presented by Microsoft Word]

Examining the Fiddler capture we see a lot of chatter. Let’s take a look at this in chunks, first addressing the initial calls to the AIP endpoint.

[Screenshot: Fiddler capture of the initial calls to the AIP endpoints]

If you have previous experience with the MSIPC client in the AD RMS world you’ll recall that it makes its calls in the following order:

  1. Searches HKLM registry hive
  2. Searches HKCU registry hive
  3. Web request to the RMS licensing pipeline for the RMS endpoint listed in the metadata attached to the protected document

In my previous deep dives into AD RMS we observed this behavior in action.  In the AIP world, it looks like the MSIPC client behaves similarly.  The endpoint we see it contacting first is the Journey of the Geek tenant’s endpoint, which starts with 196d8e.

The client first sends an unauthenticated HTTP GET to the Server endpoint in the licensing pipeline. The response the server gives is a list of available SOAP functions which include GetLicensorCertificate and GetServerInfo as seen below.

[Screenshot: SOAP functions exposed by the licensing endpoint]

The client then follows up with the actions below:

  1. Now that the client knows the endpoint supports the GetServerInfo SOAP function, it sends an unauthenticated HTTP POST which includes the SOAP action of GetServerInfo.  The AIP endpoint returns a response which includes the capabilities of the AIP service and the relevant endpoints for certification and the like.
  2. It uses the information received from the previous request to send an unauthenticated HTTP POST which includes the SOAP action of ServiceDiscoveryForUser.  The service returns a 401.

At this point the client needs to obtain a bearer access token to proceed.  This process is actually pretty interesting and warrants a closer look.

[Screenshot: Fiddler capture of the token acquisition flow]

Let’s step through the conversation:

  1. We first see a connection opened to odc.officeapps.live.com and an unauthenticated HTTP GET to the /odc/emailhrd/getfederationprovider URI with a query string of geekintheweeds.com.  This is a home realm discovery process trying to determine the provider for the user’s email domain.

    My guess is this is MSAL in action, allowing support for multiple IdPs like Azure AD, Microsoft Live, Google, and the like.  I’ll be testing this theory in a later post where I test consumption by a Google user.

    The server responds with a number of headers containing information about the token endpoints for Azure AD (since this is a domain associated with an Azure AD tenant).

    [Screenshot: Response headers describing the Azure AD token endpoints]

  2. A connection is then opened to odc.officeapps.live.com and an unauthenticated HTTP GET is made to the /odc/emailhrd/getidp URI with the email address of my user, ash.williams@geekintheweeds.com.  The response is interesting in that I would have thought it would return the user’s tenant ID.  Instead it returns a JSON response of OrgId.

    [Screenshot: getidp response returning OrgId]

    Since I’m a nosey geek, I decided to unlock the session for editing.  First I put in an email address associated with a Microsoft Live account.  Instead of OrgId it returned MSA, which indicates it detects it as being a Microsoft Live account.  I then plugged in a @gmail.com account to see if I would get back Google, but instead I received neither.  OrgId seems to indicate that it’s an account associated with an Azure AD tenant.  Maybe it performs alternative steps later in the flow depending on whether it’s an MSA or Azure AD account?  No clue.

  3. Next, a connection is made to the oauth2 endpoint for the journeyofthegeek.com tenant.  The machine makes an unauthenticated request for an access token for https://api.aadrm.com/ in order to impersonate Ash Williams.  Now if you know your OAuth, you know the user needs to authenticate and approve the access before the access token can be issued.  The response from the oauth2 endpoint is a redirect over to the AD FS server so the user can authenticate.

    [Screenshot: Redirect to the AD FS server for authentication]

  4. After the user successfully authenticates, he receives a security token and is redirected back to login.microsoftonline.com, where the assertion is posted, the user is successfully authenticated, and an authorization code is returned.

    [Screenshot: Assertion posted to login.microsoftonline.com and the authorization code returned]

  5. The machine then takes that authorization code and posts it to the oauth2 endpoint for my journeyofthegeek.com tenant.  It receives back an OpenID Connect ID token for ash.williams, a bearer access token, and a refresh token for the Azure RMS API.

    [Screenshot: Tokens returned from the oauth2 endpoint]

    Decoding the bearer access token, we come across some interesting information.  We can see the audience for the token is the Azure RMS API, the issuer of the token is the tenant ID associated with journeyofthegeek.com (interesting, right?), and the identity provider for the user is the tenant ID for geekintheweeds.com.  (A minimal decoding sketch follows this list.)

    [Screenshot: Decoded claims of the bearer access token]

  6. After the access token is obtained the machine closes out the session with login.microsoftonline.com and of course dumps a bunch of telemetry (can you see the trend here?).

    [Screenshot: Session teardown and telemetry traffic]

  7. A connection is again made to odc.officeapps.live.com and the /odc/emailhrd/getfederationprovider URI with an unauthenticated request which includes a query string of geekintheweeds.com. The same process as before takes place.
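
If you want to poke at the token claims yourself rather than trusting a decoder website, the payload of a JWT is just base64url-encoded JSON.  Here’s a minimal inspection sketch (it deliberately skips signature validation, so use it for looking, not for trusting):

import base64
import json

def decode_jwt_payload(token):
    """Decode the middle (payload) segment of a JWT without validating the signature."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)   # restore the base64 padding JWTs strip off
    return json.loads(base64.urlsafe_b64decode(payload))

# claims = decode_jwt_payload(bearer_token)   # paste in the token captured with Fiddler
# print(claims["aud"], claims["iss"], claims["idp"])   # audience, issuer, and identity provider claims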

Exhausted yet?  Well it’s about to get even more interesting if you’re an RMS nerd like myself.

[Screenshot: Fiddler capture of the certification and licensing pipeline calls]

Let’s talk through the sessions above.

  1. A connection is opened to the geekintheweeds.com /wmcs/certification/server.asmx AIP endpoint with an unauthenticated HTTP POST and a SOAP action of GetServerInfo.  The endpoint responds as we’ve observed previously with information about the AIP instance including features and endpoints for the various pipelines.
  2. A connection is opened to the geekintheweeds.com /wmcs/oauth2/servicediscovery/servicediscovery.asmx AIP endpoint with an unauthenticated HTTP POST and a SOAP action of ServiceDiscoveryForUser.  We know from the bootstrapping process I covered in my last post that this action requires authentication, so we see the service return a 401.
  3. A connection is opened to the geekintheweeds.com /wmcs/oauth2/certification/server.asmx AIP endpoint with an unauthenticated HTTP POST and a SOAP action of GetLicensorCertificate.  The SLC and its chain are returned to the machine in the response.
  4. A connection is opened to the geekintheweeds.com /wmcs/oauth2/certification/certification.asmx AIP endpoint with an unauthenticated HTTP POST and SOAP action of Certify.  Again, we remember from my last post that this requires authentication, so the service again responds with a 401.

What we learned from the above is that the bearer access token the client obtained earlier isn’t intended for the geekintheweeds.com AIP endpoint, because we never see it used.  So how will the machine complete its bootstrap process?  Well, let’s see.

  1. A connection is opened to the journeyofthegeek.com /wmcs/oauth2/servicediscovery/servicediscovery.asmx AIP endpoint with an unauthenticated HTTP POST and a SOAP action of ServiceDiscoveryForUser.  The service returns a 401, after which the client makes the same connection and HTTP POST again, but this time including the bearer access token it retrieved earlier.  The service provides a response with the relevant pipelines for the journeyofthegeek.com AIP instance.
  2. A connection is opened to the journeyofthegeek.com /wmcs/oauth2/certification/server.asmx AIP endpoint with an authenticated (bearer access token) HTTP POST and SOAP action of GetLicensorCertificate.  The service returns the SLC and its chain.
  3. A connection is opened to the journeyofthegeek.com /wmcs/oauth2/certification/certification.asmx AIP endpoint with an authenticated (bearer access token) HTTP POST and a SOAP action of Certify.  The service returns a RAC for ash.williams@geekintheweeds.com along with the relevant SLC and chain.  Wait, what?  A RAC from the journeyofthegeek.com AIP instance for a user in geekintheweeds.com?  Well folks, this is possible through RMS’s support for federation.  Since all Azure AD tenants in a given offering (commercial, gov, etc.) come pre-federated, this use case is supported.
  4. A connection is opened to the journeyofthegeek.com /wmcs/licensing/server.asmx AIP endpoint with an unauthenticated HTTP POST and a SOAP action of GetServerInfo.  We’ve covered this enough to know what’s returned.
  5. A connection is opened to the journeyofthegeek.com /wmcs/licensing/publish.asmx AIP endpoint with an authenticated (bearer access token) HTTP POST and SOAP action of GetClientLicensorandUserCertificates.  The server returns the CLC and EUL to the user.

After this our protected document opens in Microsoft Word.

[Screenshot: The protected document opened in Microsoft Word]

Pretty neat, right?  Smart move by Microsoft to take advantage of and build upon the federation capabilities built into AD RMS.  This is another example showing just how far ahead of the game the AD RMS product team was.  Heck, there are SaaS vendors that still don’t support SAML, let alone on-premises products from 10 years ago.

In the next few posts (can you tell I find RMS fascinating yet?) of this series I’ll explore how Microsoft has integrated AIP into OneDrive, SharePoint Online, and Exchange Online.

Have a great week!