Capturing Azure Management Group Activity Logs Using Azure Automation – Part 1

Hello again fellow geeks!

Over the past few months I’ve been working with a customer who is just beginning their journey into the cloud.  We’ve had a ton of great conversations around security, governance, and operationalizing Microsoft Azure.  We recently finalized the RACI and identified the controls required by both their internal security policy and their industry compliance requirements.  With those two items complete, we put together our Azure RBAC model and narrowed down the Azure Policies we needed to put in place to satisfy our compliance controls.

After a lot of discussion about the customer’s organization, its geographical locations, its business unit makeup, and how its developers and central IT operate, we came up with a subscription model.  The customer settled on a design where each workload exists in its own subscription, and each workload’s production and non-production environments are further segmented into separate subscriptions.  Keeping each workload in its own subscription ensures no workload competes with another for resources or runs into subscription limits.  It also makes it easy to track the costs associated with each workload.

Now why did we use separate production and non-production subscriptions for each workload?  One reason is to address the same risk as above, where a non-production workload could consume all of the resources within a subscription and impact a production workload.  The other, more critical reason is that it makes it easier to apply different governance and access controls to production workloads versus non-production workloads.  The way we do this is through the use of Azure Management Groups.

Management Groups were introduced into general availability back in late 2018 to help address the challenges organizations were having operating subscriptions at scale.  They provide a hierarchical method to apply governance and access controls across a collection of subscriptions.  For those of you familiar with AWS, Management Groups are somewhat similar to AWS Organizations and Organizational Units.  For my fellow Windows AD peeps, you can think of Management Groups like the container and organizational unit hierarchy in an Active Directory domain, where access control entries and group policy applied at higher levels of the OU hierarchy are enforced and inherited down to the children.  Management Groups work in a similar manner: the Azure RBAC definitions and assignments and the Azure Policy you assign to a parent Management Group are inherited down into its children.

Every Azure AD tenant starts with a top-level management group called the tenant root group.  Additional management groups created within the tenant are children of the tenant root group, up to a maximum of 10,000 management groups and up to six levels of depth.  Any RBAC assignment or Azure Policy assigned to the tenant root group applies to all child management groups in the tenant.  It’s important to understand that Management Groups are a resource within the Azure AD tenant, not a resource of an Azure subscription.  This will matter for reasons we’ll see later.

The tenant root management group can only be administered by a Global Admin by default, and even that requires a configuration change in the tenant.  The method is described here; it places the Global Administrator performing the action into the User Access Administrator RBAC role at root scope.  Once that is complete, the name of the root management group can be changed, role assignments created, and policy assigned.

[Screenshot: Administering Tenant Root Group]
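
If you’d rather script the elevation than click through the portal, the same configuration change can be made through the REST API.  Below is a minimal sketch, assuming the documented elevateAccess endpoint; the token placeholder stands in for an ARM access token belonging to the Global Administrator performing the call.

```python
import requests

# Elevate a Global Administrator to User Access Administrator at root
# scope (/). TOKEN is a placeholder for an ARM access token.
TOKEN = "<ARM access token>"

url = ("https://management.azure.com/providers/"
       "Microsoft.Authorization/elevateAccess?api-version=2015-07-01")
response = requests.post(url, headers={"Authorization": f"Bearer {TOKEN}"})

# A 200 response indicates the role assignment at root scope was created.
print(response.status_code)
```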

Now there is one aspect of Management Groups that is a bit funky.  If you’re very observant you probably noticed the menu option below.

[Screenshot: the Management Group menu showing an Activity Log option]

That’s right folks, Management Groups have their own Activity Log.  Every action you perform at the management group scope, such as creating an Azure RBAC role assignment or assigning or un-assigning an Azure Policy, is captured in this Activity Log.  As of today, the only way to access these logs is through the portal or the Azure REST API.  Unlike the Activity Logs associated with a subscription, there isn’t native integration with Event Hubs or Azure Storage.  Don’t be fooled by the Export To Event Hub link seen in the screenshot below; it simply sends you to the standard menu where you would configure subscription Activity Logs to be exported.

[Screenshot: the Management Group Activity Log with the Export To Event Hub link]
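
To give you a sense of what pulling these logs via the REST API looks like, here’s a rough Python sketch.  It assumes the management group scope supports the same eventtypes/management/values pattern as the subscription-scoped Activity Log API; the management group ID and token below are placeholders.

```python
import datetime
import requests

MG_ID = "my-management-group"  # placeholder management group ID
TOKEN = "<ARM access token>"   # placeholder bearer token

# Filter for events from the last 24 hours.
start = (datetime.datetime.utcnow()
         - datetime.timedelta(days=1)).strftime("%Y-%m-%dT%H:%M:%SZ")

url = ("https://management.azure.com/providers/Microsoft.Management/"
       f"managementGroups/{MG_ID}/providers/microsoft.insights/"
       "eventtypes/management/values")
params = {
    "api-version": "2017-03-01-preview",
    "$filter": f"eventTimestamp ge '{start}'",
}
response = requests.get(url, params=params,
                        headers={"Authorization": f"Bearer {TOKEN}"})
response.raise_for_status()

for event in response.json().get("value", []):
    print(event["eventTimestamp"], event["operationName"]["value"])
```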

Now you could log into the portal every day and export the logs to a CSV (yes, that does work with Management Groups), but that simply isn’t scalable and it prevents you from proactively monitoring the logs.  So how do we deal with this gap while the product team works on incorporating the feature?  That’s the challenge we’ll address in this series.

Over the next few posts I’ll walk through the solution I put together using Azure Automation Runbooks to capture these Activity Logs and send them to Azure Storage for retention and an Azure Log Analytics Workspace for analysis and monitoring using Azure Monitor.

Continue the series in my second post.

Tips and Tricks for Writing Azure Policy

Hello geeks!

Over the past few weeks I’ve been working with a customer who has adopted the CIS (Center for Internet Security) controls framework.  CIS publishes sets of best practices and configurations, called benchmarks, for commonly used systems, and as you would expect there is a set of benchmarks for Microsoft Azure.  Implementing, enforcing, and auditing for compliance with the benchmarks can be a challenge.  Thankfully, this is where Azure Policy comes to the rescue.

Azure Policy works by evaluating the properties of resources (management plane only right now, minus a few exceptions) either during deployment or after the resources have already been deployed.  This means you can stop a user from deploying a non-compliant resource rather than addressing it after the fact.  This capability is especially valuable for organizations that haven’t reached the mature level of DevOps where all infrastructure is codified and pushed through a CI/CD pipeline that performs validation tests before deployment.

Policies are created in JSON format and contain five elements.  For the purposes of this blog post, I’ll be focusing on the policy rule element; the other elements are straightforward and described fully in the official documentation.  The policy rule contains two sub-elements: a logical evaluation and an effect.  The logical evaluation uses simple if-then logic, where the if block contains one or more conditions with optional logical operators.  The if block is where you’ll spend much of your time (and, more than likely, frustration).
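
To make that structure concrete, here’s a minimal policy rule sketched as a Python dictionary; serializing it gives you the JSON you’d feed to something like az policy definition create --rules.  The alias used here is the common storage account HTTPS-only alias, but treat it as illustrative and verify it against your own alias listing.

```python
import json

# A minimal policy rule: deny storage accounts that don't enforce
# HTTPS-only traffic. The alias is illustrative; confirm it against
# the alias listing for your target namespace.
policy_rule = {
    "if": {
        "allOf": [
            {
                "field": "type",
                "equals": "Microsoft.Storage/storageAccounts",
            },
            {
                "field": "Microsoft.Storage/storageAccounts/supportsHttpsTrafficOnly",
                "equals": "false",
            },
        ]
    },
    "then": {"effect": "deny"},
}

# Serialize for use with, e.g., az policy definition create --rules
print(json.dumps(policy_rule, indent=2))
```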

I would liken the challenge of learning how to construct a working Azure Policy to the challenge of writing good AWS IAM policies.  The initial learning curve is high, but once you get the hang of it, you can craft works of art.  Unfortunately, unlike AWS IAM policy, there are some odd quirks with Azure Policy right now that are either under-documented or not documented at all.  Additionally, given how much newer Azure Policy is, there aren’t a ton of examples online to draw from for more complicated policies.

This brings us to the purpose of this blog post.  While being very, very far from an expert (more like barely passable) on Azure Policy, I have learned some valuable lessons over the past few weeks of struggling through writing custom policies.  These are the lessons I want to pass on in hopes they’ll make your journey a bit easier.

    • Just because a resource alias exists, it doesn’t mean you can use it in a policy
      When you are crafting your conditions you’ll use fields which map to properties of Azure resources and describe their state.  There is a selection of supported fields, but the one you’ll probably use most often is the property alias.  You can pull a listing of property aliases using PowerShell, the CLI, or the REST API.  Be prepared to format the output, because some namespaces have a ton of properties.  I threw together a Python solution to pull the namespaces into a more consumable format.
      If you are using an alias that is listed but your policy fails to do what you want it to do, it could be that while the alias exists, it’s not accessible by Policy during an evaluation.  If the alias belongs to a namespace that contains a sensitive property (like a secret), it more than likely won’t be accessible by Policy and hence won’t be caught.  The general rule I follow is that if the namespace’s properties aren’t accessible with the Reader Azure RBAC role, policy evaluations won’t pick them up.
      A good example of this is the authsettings resource under Microsoft.Web/sites/config.  Say you wanted to check whether a Web App was using Facebook as an identity provider; you wouldn’t be able to use Policy to check whether facebookAppId was populated.
    • Resource Explorer, Azure ARM Template Reference, and Azure REST API Reference are your friends, use them
      When you’re putting together a new policy, make sure to use Azure Resource Explorer, the Azure ARM Template Reference, and the Azure REST API Reference.  The ARM Template Reference is a great tool when crafting a new policy because it gives you an idea of the schema of the resource you’ll be evaluating.  The Azure REST API Reference is useful when the description of a property is less than stellar in the ARM Template Reference (happens a lot).  Finally, Azure Resource Explorer is an absolute must when troubleshooting a policy.
      A peer and I ran into a quirk when authoring a policy to evaluate the runtime of an Azure Web App.  Azure Web Apps running PHP on Windows populate the PHP runtime in the phpVersion property, while those on Linux populate it in the linuxFxVersion property.  This meant we had to include additional logic in the policy to detect the runtime based on the OS.  Without Resource Explorer we would never have figured that out.
    • Use on-demand evaluations when building new policies
      Azure Policy evaluations are triggered based upon the set of events described in this link.  The short of it is that unless you want to wait 30 minutes after modifying or assigning a new policy, you’ll want to trigger an on-demand evaluation.  At this time this can only be done with a call to an Azure REST API endpoint; I’m unaware of a built-in method to do this with the Azure CLI or PowerShell.  Since I have a lot of love for my fellow geeks, I put together a Python solution you can use to trigger an evaluation (see the sketch after this list).  Evaluations take anywhere between 5 and 10 minutes.  It seems to take longer the more policies you have, but that could simply be in my head.
    • RTFM.
      Seriously, read the public documentation.  Don’t jump into this service without spending an hour reading the documentation.  You’ll waste hours and hours of time smashing your head against the keyboard.  Specifically, read through this page to understand how processing across arrays works.  When you first start playing with Azure Policy, you’ll come across policies with double-negatives that will confuse the hell out of you.  Read that link and walk through policies like this one.  You can thank me later.
    • Explore the samples and experiment with them.
      Microsoft has published a fair number of sample policies in the Azure Policy repo, the built-in policies and initiatives included in the Azure Portal, and the policy samples in the documentation.  I’ve thrown together a few myself and am working on others, so feel free to use them as you please.
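
As promised in the on-demand evaluation tip above, here’s a rough sketch of triggering an evaluation through the Microsoft.PolicyInsights REST endpoint.  The subscription ID and token are placeholders, and the api-version reflects what was current when I wrote this, so check the Policy Insights documentation for the latest.

```python
import requests

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
TOKEN = "<ARM access token>"                              # placeholder

# Kick off an on-demand policy compliance evaluation for the subscription.
url = (f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
       "/providers/Microsoft.PolicyInsights/policyStates/latest/"
       "triggerEvaluation?api-version=2019-10-01")
response = requests.post(url, headers={"Authorization": f"Bearer {TOKEN}"})

# A 202 Accepted means the evaluation was queued; poll the Location
# header to watch for completion.
print(response.status_code, response.headers.get("Location"))
```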

Hope the above helps some of you on your journey to learning Azure Policy.  It’s a tool with a ton of potential and will no doubt improve over time.  One of the best ways to help it evolve is to contribute.  If you have some kick ass policies, submit them to get them published to the Azure Policy repo and to give back to the wider community.

Have a great week folks!

The Evolution of AD RMS to Azure Information Protection – Part 5 – Client-Side Migration and Testing

Welcome to the fifth entry in my series on the evolution of Microsoft’s Active Directory Rights Management Service (AD RMS) to Azure Information Protection (AIP).  We’ve covered a lot of material over this series.  It started with an overview of the service, examined the different architectures, went over key planning decisions for the migration from AD RMS to AIP, and left off with performing the server-side migration steps.  In this post we’re going to round out the migration process by performing a staged migration of our client machines.

Before we jump into this post, I’d encourage you to refresh yourself on my lab setup, the users and groups I’ve created, and the choices I made in the server-side migration steps.  For a quick reference, here is the down and dirty:

  • Windows Server 2016 Active Directory forest named GEEKINTHEWEEDS.COM with servers running Active Directory Domain Services (AD DS), Active Directory Domain Name System (AD DNS), Active Directory Certificate Services (AD CS), Active Directory Federation Services (AD FS), Active Directory Rights Management Services (AD RMS), Azure Active Directory Connect, and Microsoft SQL Server Express.
  • Forest is configured to synchronize to Azure AD using Azure AD Connect and uses federated authentication via AD FS
  • Users Jason Voorhies and Ash Williams will be using a Windows 10 client machine with Microsoft Office 2016 named GIWCLIENT1
  • Users Theodore Logan and Michael Myers will be using a Windows 10 client machine with Microsoft Office 2016 named GIWCLIENT2
  • Users Jason Voorhies and Theodore Logan are in the Information Technology Windows Active Directory (AD) group
  • Users Ash Williams and Michael Myers are in the Accounting Windows AD group
  • Onboarding controls have been configured for a Windows AD group named GIW AIP Users of which Jason Voorhies and Ash Williams are members

Prepare The Client Machine

To take advantage of the new features AIP brings to the table we’ll need to install the AIP client.  I’ll be installing the AIP client on GIWCLIENT1 and leaving the RMS client installed by Office 2016 on GIWCLIENT2.  Keep in mind the AIP client includes the RMS client (sometimes referred to as MSIPC) as well.

If you recall from my last post, I skipped a preparation step that Microsoft recommends for client machines.  The step has you download a ZIP file containing some batch scripts that are used for performing a staged migration of client machines and users.  The preparation script Microsoft recommends running prior to any server-side configuration is Prepare-Client.cmd.  In an enterprise environment it makes sense, but for this very controlled lab environment it wasn’t needed prior to server-side configuration.  It’s a simple script that modifies the client registry to force the RMS client on the machine to go to the on-premises AD RMS cluster even if it receives content that’s been protected using an AIP subscription.  If you’re unfamiliar with the order in which the MSIPC client discovers an AD RMS cluster, I did an exhaustive series on it a few years back.  In short, hardcoding the information in the registry prevents the client from reaching out to AIP and potentially causing issues.

As a reminder, I’ll be running the script on GIWCLIENT1 and not on GIWCLIENT2.  After the ZIP file is downloaded and the script is unpackaged, open it with a text editor and set the OnPremRMSFQDN and CloudRMS variables to your on-premises AD RMS cluster and AIP tenant endpoint.  Once the values are set, run the script.

Install the Azure Information Protection Client

Now that the preparation step is out of the way, let’s get the AIP client installed. The AIP client can be downloaded directly from Microsoft. After starting the installation you’ll first be prompted as to whether you want to send telemetry to Microsoft and use a demo policy.  I’ll be opting out of both (sorry Microsoft).

After a minute or two the installation will complete successfully.

At this point I log out of the administrator account and log in as Jason Voorhies.  Opening Windows Explorer and right-clicking a text file shows we now have the Classify and protect option, which lets us classify and protect files outside of Microsoft Office.

Testing the Client Machine Behavior Prior to Client-Side Configuration

I thought it would be fun to see what the client machine’s behavior would be after the AIP client was installed but before I finished Microsoft’s recommended client-side configuration steps.  Recall that GIWCLIENT1 has previously been bootstrapped against the on-premises AD RMS cluster, so let’s reset the client after observing the current state of both machines.

Notice that on GIWCLIENT1 the DefaultServer and DefaultServerUrl values in HKCU\Software\Microsoft\Office\16.0\Common\DRM do not exist even though the client was previously bootstrapped for the on-premises AD RMS instance.  GIWCLIENT2, which has also been bootstrapped, has the entries defined.
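
If you’d rather script this check than click through regedit, a quick sketch using Python’s built-in winreg module (run on each client) might look like the following.

```python
import winreg

# Inspect the Office 2016 RMS discovery defaults discussed above.
DRM_KEY = r"Software\Microsoft\Office\16.0\Common\DRM"

try:
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, DRM_KEY) as key:
        for value_name in ("DefaultServer", "DefaultServerUrl"):
            try:
                value, _ = winreg.QueryValueEx(key, value_name)
                print(f"{value_name} = {value}")
            except FileNotFoundError:
                print(f"{value_name} is not set")
except FileNotFoundError:
    print("DRM key does not exist")
```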

I’m fairly certain AIP cleared these out when it tried to activate as I started up Microsoft Word prior to performing these steps.

Navigating to HKCU\Software\Classes\Local Settings\Software\Microsoft\MSIPC shows a few slight differences as well.  On GIWCLIENT1 there are two additional entries: one for the discovery point for Azure RMS and one for JOG.LOCAL’s AD RMS cluster.  The JOG.LOCAL entry exists on GIWCLIENT1 and not on GIWCLIENT2 because of the baseline testing I did previously.

Let’s take a look at the location where the RMS client stores its certificates, which is %LOCALAPPDATA%\Microsoft\MSIPC.  On both machines we see the expected copy of the public-key CLC certificate, the machine certificate, the RAC, and use licenses for documents that have been opened.  Notice that even though the AD RMS cluster is running in Cryptographic Mode 1, the machine still generates a 2048-bit key as well.
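
If you’re comparing the two machines, a few lines of Python will dump the contents of the certificate store for a quick diff; a trivial sketch, but handy.

```python
import os
from pathlib import Path

# List the RMS client certificate store described above.
msipc = Path(os.environ["LOCALAPPDATA"]) / "Microsoft" / "MSIPC"
for item in sorted(msipc.iterdir()):
    print(item.name)
```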

Now that the RMS client is reset on GIWCLIENT1, let’s go ahead and see what happens when the RMS client tries to do a fresh activation after AIP has been installed but the client-side configuration has not yet been completed.

After opening Microsoft Word I create a new document.  Notice that the labels displayed in the AIP bar include a custom label I had previously defined in the AIP blade.

I then go back to the File tab on the ribbon and attempt to use the classic way of protecting a document via the Restrict Access option.

After selecting the Connect to Rights Management Servers and get templates option, the client successfully bootstraps back to the on-premises AD RMS cluster, as can be seen from the certificates available to the client and the fact that all the necessary certificates were re-created in the MSIPC directory.

That’s Microsoft Office, but what about the scenario where I attempt to use the AIP client add-in for Windows Explorer?

To test this behavior I created a PDF file named testfile.pdf.  Right-clicking and selecting the Classify and protect option opens the AIP client, which displays the default set of labels as well as a new GIW Accounting Confidential label.

If I select that label and hit Apply I receive the error below.

[Screenshot: error applying the label because the template can’t be found]

The template can’t be found because the client is trying to pull it from my on-premises AD RMS cluster.  Since I haven’t run the scripts to prepare the client for AIP, the client can’t reach the AIP endpoints to find the template associated with the label.

The results of these tests tell us two things:

  1. Installing the AIP Client on a client machine that already has Microsoft Office installed and configured for an on-premises AD RMS cluster won’t break the client’s integration with that on-premises cluster.
  2. The AIP client at some point authenticated to the Geek In The Weeds Azure AD tenant and pulled down the classification labels configured for my tenant.

In my next post I’ll examine these findings more closely with a deep dive of the client behavior, using a combination of procmon, Fiddler, and Wireshark to analyze the AIP client.

Performing Client-Side Configuration

Now that the client has been successfully installed, we need to override the behavior that was put in place with the Prepare-Client batch file earlier.  If we wanted to redirect all clients across the organization that are using Office 2016, we could use the DNS SRV record option listed in the migration article.  This option takes advantage of new behavior Microsoft added to the RMS client installed with Office 2016, where the client performs a DNS lookup of the SRV record to see whether a migration has occurred.

For the purposes of this lab I’ll be using the Microsoft batch scripts I referenced earlier.  To override the behavior we put in place earlier with the Prepare-Client.cmd batch script, we’ll need to run both the Migrate-Client and Migrate-User scripts.  I created a group policy object (GPO) that uses security filtering to apply only to GIWCLIENT1, which runs the Migrate-Client script as a Startup script, and a GPO that uses security filtering to apply only to the GIW AIP Users group, which runs the Migrate-User script as a Logon script.  This ensures only GIWCLIENT1 and the users Jason Voorhies and Ash Williams are affected by the changes.

You may be asking what the scripts actually do.  The goal of the two scripts is to ensure the client machines the users log into point them to Azure RMS rather than the on-premises AD RMS cluster.  The scripts do this by adding and modifying registry keys used by the RMS client, which are consulted before the client searches for a service connection point (SCP).  The users will be redirected to Azure RMS both when protecting new files and when consuming files that were previously protected by the on-premises AD RMS cluster.  This means you had better have performed the necessary server-side migration I went over previously, or else your users are going to be unable to consume previously protected content.
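
I won’t reproduce Microsoft’s scripts here, but you can verify the redirection they put in place by inspecting the RMS client’s service location override keys.  The sketch below assumes the EnterprisePublishing and LicensingRedirection keys under HKLM\Software\Microsoft\MSIPC\ServiceLocation that the migration documentation describes; treat the exact key names and values as environment-specific.

```python
import winreg

# Dump the MSIPC service location overrides managed by the migration
# scripts. Key names are per the AIP migration documentation; the
# values will vary by environment.
BASE_KEY = r"Software\Microsoft\MSIPC\ServiceLocation"

for subkey_name in ("EnterprisePublishing", "LicensingRedirection"):
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                            rf"{BASE_KEY}\{subkey_name}") as key:
            index = 0
            while True:
                try:
                    name, value, _ = winreg.EnumValue(key, index)
                    print(f"{subkey_name}: {name or '(default)'} -> {value}")
                    index += 1
                except OSError:
                    break  # no more values
    except FileNotFoundError:
        print(f"{subkey_name} key not present")
```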

We’ll dig more into the AIP/Office 2016 RMS client discovery process in the next post.

Preparing Azure Information Protection Policies

Prior to testing the whole package, I thought it would be fun to create some AIP policies.  By default, Microsoft provides a default AIP policy called the Global Policy.  It comes complete with a reasonably standard set of labels, with a few of the labels having sublabels that apply protection in some circumstances.  Due to the migration path I undertook as part of the demo, I had to enable protection for the All Employees sublabels of both the Confidential and Highly Confidential labels.

In addition to the Global Policy, I also created two scoped policies.  One scoped policy applies to users within the GIW Accounting group and the other applies to users within the GIW Information Technology group.  Each policy introduces another label and sublabel.

Both of the sublabels include protection restricting members of the relevant groups to the Viewer role only. We’ll see these policies in action in the next section.

Testing the Client

Preparation is done, the server-side migration is complete, and our test clients and users have completed the documented migration process.  The migration scripts performed the RMS client reset, so there’s no need to repeat that process.

For the first test, let’s try applying protection to a text file named testfile.txt.  Selecting the Classify and protect option opens up the AIP client and shows me the labels configured in my tenant that support classification and protection.  Recall from the AIP client limitations that different file types have different limitations.  You can’t exactly append metadata to the content of a text file, now can you?

Selecting the IT Staff Only sublabel of the GIW IT Staff label and hitting the Apply button successfully protects the text file, and we see the icon and file type for the file change.  Opening the file in Notepad now displays a notice that the file is protected, and the data contained in the original file has been encrypted.

We can also open the file with the AIP Viewer which will decrypt the document and display the content of the text file.

Next we test in Microsoft Word 2016 by creating a new document named AIP_GIW_ALLEMP and classifying it with the Highly Confidential All Employees sublabel.  The sublabel adds protection such that all users in the GIW Employees group have Viewer rights.

Opening the AIP_GIW_ALLEMP Word document that was protected by Jason Voorhies is successful and it shows Ash Williams has viewer rights for the file.

Last but not least, let’s open a document we previously protected with AD RMS named GIW_GIWALL_ADRMS.DOCX.  We’re able to successfully open this file because we migrated the TPD used by AD RMS up to AIP.

At this point we’ve performed all the necessary steps of the migration.  What’s left now is cleanup and planning for how you’ll complete the rollout to the rest of your user base.  Not bad, right?

Over the next few posts I’ll be doing a deep dive into the RMS client’s behavior when interacting with Azure Information Protection.  We’ll do some procmon captures of the client’s behavior as it performs its discovery process as well as examine the web calls it makes using Fiddler.  I’ll also spend some time examining the AIP blade and my favorite feature of AIP, Tracking and Revocation.

See you next time!

AWS and Microsoft’s Cloud App Security

It seems like it’s become a weekly occurrence to have sensitive data exposed due to poorly managed cloud services.  Due to Amazon’s large market share with Amazon Web Services (AWS), many of these instances involve publicly accessible Simple Storage Service (S3) buckets.  In the last six months alone there were highly publicized incidents with FedEx and Verizon.  While the cloud can be empowering, it can also be very dangerous when there is a lack of governance, visibility, and acceptance of the different security mindset the cloud requires.

Organizations that have been in operation for many years have grown very reliant on the network boundary acting as the primary security boundary.  As these organizations begin to move to a software-defined data center model, this traditional boundary quickly becomes less and less effective.  Unfortunately, this, in combination with an insufficient understanding of the cloud, gives rise to mistakes like sensitive data being exposed.

One way in which an organization can protect itself is to leverage technologies such as cloud access security brokers (cloud security gateways, if you’re a Forrester reader) to help monitor and control data as it travels between on-premises and the cloud.  If you’re unfamiliar with the concept of a CASB, I covered it in a previous entry and included a link to an article which provides a great overview.

Microsoft has its own CASB offering called Microsoft Cloud App Security (CAS).  It’s offered as part of Microsoft’s Enterprise Mobility and Security (EMS) E5/A5 subscription.  Over the past several months multiple connectors to third-party software-as-a-service (SaaS) providers have been introduced, including one for AWS.  The capabilities with AWS are limited at this point to pulling administrative logs and user information, but it shows promise.

As per usual, Microsoft provides an integration guide which is detailed in button pushing, but not so much in the concepts and technical details of what is happening behind the scenes.  Since the Azure AD and AWS blog series has attracted so many views, I thought it would be fun and informative to do an entry on how Cloud App Security can be used with AWS.

I’m not in the habit of re-creating documentation so I’ll be referencing the Microsoft integration guide throughout the post.

The first thing that needs doing is the creation of a security principal in AWS Identity and Access Management (AWS IAM) that will be used by your tenant’s instance of CAS to connect to resources in your AWS account.  The first four steps are straightforward, but step 5 could use a bit of an explanation.

[Screenshot: custom IAM policy from step 5 of the integration guide]

Here we’re creating a custom IAM policy for the security principal, granting it a number of permissions within the AWS account.  IAM policies are sets of permissions which are attached to a human or non-human identity or an AWS resource and are evaluated when a call to the resource is made.  In the traditional on-premises world, you can think of them as somewhat similar to a set of NTFS file permissions.  When the policy pictured above is created, the security principal is granted a set of permissions across all instances of CloudTrail, CloudWatch, and IAM within the account.

If you’re unfamiliar with AWS services, CloudTrail is a service which audits the API calls made to AWS resources.  The information included in the events covers the action taken, the resource the action was taken upon, the security principal that performed the action, the date and time, and the source IP address of the security principal.  The CloudWatch service allows for monitoring of metrics and optionally triggering events based upon metrics reaching specific thresholds.  The IAM service is AWS’s identity store for the cloud management layer.

Now that we have a basic understanding of the services, let’s look at the permissions Microsoft requires for CAS to do its thing.  The CloudTrail permissions of DescribeTrails, LookupEvents, and GetTrailStatus allow CAS to query for all trails enabled on an AWS account (CloudTrail is enabled by default on all AWS accounts), look up events in a trail, and get information about a trail such as start and stop logging times.  The CloudWatch permissions of Describe* and Get* are fancy ways of asking for READ permissions on CloudWatch resources; these include describe-alarm-history, describe-alarms, describe-alarms-for-metric, get-dashboard, and get-metric-statistics.  The IAM permissions are similar to what’s being asked for in CloudWatch, basically full read.
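
If you’d rather script the policy creation from step 5 than click through the console, here’s a sketch using boto3.  The action list mirrors the permissions discussed above, the policy name is a placeholder, and you should verify the list against the current integration guide before using it.

```python
import json

import boto3

# Custom IAM policy granting CAS read access to CloudTrail, CloudWatch,
# and IAM, mirroring the permissions discussed above.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cloudtrail:DescribeTrails",
                "cloudtrail:LookupEvents",
                "cloudtrail:GetTrailStatus",
                "cloudwatch:Describe*",
                "cloudwatch:Get*",
                "iam:List*",
                "iam:Get*",
            ],
            "Resource": "*",
        }
    ],
}

iam = boto3.client("iam")
response = iam.create_policy(
    PolicyName="CloudAppSecurityReadOnly",  # placeholder policy name
    PolicyDocument=json.dumps(policy_document),
)
print(response["Policy"]["Arn"])
```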

Step number 11 instructs us to create a new CloudTrail trail.  AWS by default audits all events across all resources and stores them for 90 days.  Trails enable you to direct events captured by CloudTrail to an S3 bucket for archiving, analysis, and responding to events.
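
Creating the trail can be scripted as well.  A minimal boto3 sketch follows; the trail and bucket names are placeholders, and the S3 bucket needs a bucket policy that permits CloudTrail to write to it.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Create a trail that delivers CloudTrail events to S3, then start logging.
cloudtrail.create_trail(
    Name="cas-trail",                     # placeholder trail name
    S3BucketName="my-cloudtrail-bucket",  # placeholder bucket name
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="cas-trail")
```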

The trail created is consumed by CAS to read the information captured via CloudTrail.  The permissions requested above become a bit clearer now that we see CAS is requesting read access to all trails across an account for monitoring goodness.  I’m unclear as to why CAS is asking for read on CloudWatch alarms, unless it has some integration where it monitors and reports on alarms configured for an AWS account.  The IAM read permissions are required so it can pull user information to use for the User Groups capability.

After the security principal is created and a sample trail is set up, it’s time to configure the connector for CAS.  Steps 12 through 15 walk through the process.  When complete, AWS shows as a connected app.

After a few hours data will start to trickle in from AWS.  Navigating to the Users and Accounts section shows all of the accounts found in the IAM instance for my AWS account.  Envision this as your one-stop shop for identifying all of the user accounts across your many cloud services; a single pane of glass for identity across SaaS.

On the Activity Log I see all of the API activities captured by CloudTrail.  If I wanted to capture more audit information, I could enable CloudTrail for the relevant resource and point it to the trail I configured for CAS.  I haven’t tested what CAS does with multiple trails, but based upon the permissions we configured when we set up the security principal, it should technically be able to pull from any trail we create.

Since the CAS and AWS integration is limited to pulling logging information, let’s walk through an example of how we could use the data.  Take an example where an organization has a policy that the AWS root user should not be used for administrative activities due to the level of access the account gets by default.  The organization creates AWS IAM user accounts for each of its administrators who administer the cloud management layer.  In this scenario we can create a new policy in CAS to detect and alert on instances where the AWS root user is used.

First we navigate to the Policies page under the Control section of CAS.

On the Policies page we’ll create a new activity policy.  We’ll designate this as a high severity privileged account alert.  Since we’re interested in knowing any time the account is used, we choose the Single Activity option.

We’ll pretend we were smart users of CAS and let it collect data for a few weeks to get a sampling of the types of events which are captured and to give us some data to analyze.  We also went the extra mile and leveraged the ability of CAS to pull in user information from AWS IAM such that we can choose the appropriate users from the drop-down menus.

Since this is a demo and my AWS lab has a lot of activity by the root account, we’re going to limit our alerts to the creation of new AWS IAM users.  To do that we set our filter to look for an Activity type equal to Create user.  Since our main goal is to capture usage of the root account, we add another filter rule that searches for a User with the name equal to aws root user where it is the actor in the event.

Finally we configure the alert to send an email to the administrator when the event occurs.  The governance capabilities don’t come into play in this use case.

Next we jump back to AWS and create a new AWS IAM user named testuser1.  A few minutes after the user is created we see the event appearing in CloudTrail.

After a few minutes, CAS generates an alert and I receive an email.  I’m given information as to the activity, the app, the date and time it was performed, and the client’s IP address.

If I bounce back to CAS I see one new Alert.  Navigating to the alert I’m able to dismiss it, adjust the policy that generated it, or resolve it and add some notes to the resolution.

I also have the option to dig deeper to see some of the patterns of the user’s behavior or the pattern of behaviors from a specific IP address.

All this information is great, but what can we do with it?  In this example, it delivers visibility into the administrative activities occurring at the AWS cloud management layer by centralizing the data into a single repository, to which I can then send other data such as O365 activity, Box, Salesforce, etc.  By centralizing the information I can begin doing some behavioral analytics to develop standard patterns of behavior for my user base.  Understanding standard behavior patterns is key to staying ahead of the bad guys, whether they be insiders or outsiders.  I can search for deviations from standard patterns to detect a threat before it becomes too widespread.  I can also be proactive about putting alerts and enforcement in place (available for other app connectors in CAS, but not AWS at this time) to stop the behavior before the threat is realized.  If I supplemented this data with log information from my on-premises proxy via Cloud App Discovery, I’d get an even larger sampling, improving the quality of the data as well as giving me insight into shadow IT.  Pulling those “shadow” cloud solutions into the light allows me to ensure the usage of the services complies with organizational policies and opens up the opportunity to reduce costs by eliminating redundant services.

Microsoft categorizes the capabilities that help realize these benefits as the Discover and Investigate capabilities of CAS.  The solution also offers a growing number of enforcement mechanisms (Microsoft categorizes these as Control) which add a whole other layer of power to the solution.  Due to the limited integration with AWS, I can’t demo those capabilities in this post; I’ll cover them in a future one.

I hope this post helped you better understand the value that CASBs/CSGs like Microsoft’s Cloud App Security can bring to the table.  While the product is still very new and a bit sparse on support for third-party applications, the list is growing every day.  I predict the capabilities provided by technology such as Microsoft’s Cloud App Security will be as standard to IT as the firewall in the years to come.  If you’re already in Office 365, you should be integrating these capabilities into your arsenal to understand the value they can bring to your business.

Thanks and have a wonderful week!