The Evolution of AD RMS to Azure Information Protection – Part 1

Collaboration.  It's a term I hear at least a few times a day when speaking to my user base.  The ability to seamlessly collaborate with team members, across the organization, with trusted partners, and with customers is a must.  It's a driving force behind much of the evolution of software-as-a-service collaboration offerings such as Office 365.  While the industry is evolving to make collaboration easier than ever, it's also introducing significant challenges for organizations trying to protect and control their data.

In a recent post I talked about Microsoft’s entry into the cloud access security broker (CASB) market with Cloud App Security (CAS) and its capability to provide auditing and alerting on activities performed in Amazon Web Services (AWS).  Microsoft refers to this collection of features as the Investigate capability of CAS.  Before I cover an example of the Control features in action, I want to talk about the product that works behind the scenes to provide CAS with many of the Control features.

That product is Azure Information Protection (AIP) and it provides the capability to classify, label, and protect files and email.  The protection piece is provided by another Microsoft product, Azure Rights Management (Azure RMS).  Beyond just encrypting a file or email, Azure RMS can control what a user can do with a file, such as preventing a user from printing a document or forwarding an email.  The best part?  The protection goes with the data even when it leaves your security boundary.

For those of you who have read my blog, you can see that I am a huge fanboy of the predecessor to Azure RMS, Active Directory Rights Management Services (AD RMS, previously Rights Management Services or RMS for you super nerds).  AD RMS has been a role available in Microsoft Windows Server since Windows Server 2003.  It was a product well ahead of its time that unfortunately never really caught on.  Given my love for AD RMS, I thought it would be really fun to do a series looking at how AIP has evolved from AD RMS.  It's a dramatic shift from a rather unknown product to one that provides capabilities that will be as standard and as necessary as antivirus was in the on-premises world.

I built a pretty robust lab environment (two actually) so that I could demonstrate the different ways the solutions work as well as what it looks like to migrate from AD RMS to AIP.  Given the complexity of the lab environment, I'm going to take this post to cover what I put together.

The layout looks like this:

1AIP1.png

On the modern end I have an Azure AD tenant with the custom domain geekintheweeds.com assigned.  Attached to the tenant I have some Office 365 E5 and Enterprise Mobility + Security E5 trial licenses.  For the legacy end I have two separate labs set up in Azure, each within its own resource group.  Lab number one contains three virtual machines (VMs) that run a series of services including Active Directory Domain Services (AD DS), Active Directory Certificate Services (AD CS), AD RMS, and Microsoft SQL Server Express.  Lab number two contains four VMs that run the same set of services as Lab 1, in addition to Active Directory Federation Services (AD FS) and Azure Active Directory Connect (AADC).  The virtual networks (vnets) in the two resource groups have been peered, and both resource groups contain a virtual network gateway configured with a site-to-site virtual private network (VPN) back to my home Hyper-V environment.  In the Hyper-V environment I have two workstations.

Lab 1 is my "legacy" environment and consists of servers running Windows Server 2008 R2 and Windows Server 2012 R2 (AD RMS hasn't changed in any meaningful manner since 2008 R2) and a Windows 7 Pro client running Office 2013.  The DNS namespace for its Active Directory forest is JOG.LOCAL.  Lab 2 is my "modern" environment and consists of servers running Windows Server 2016 and a Windows 10 client running Office 2016.  It uses a DNS namespace of GEEKINTHEWEEDS.COM for its Active Directory forest and is synchronized with the Azure AD tenant I mentioned above.  AD FS provides single sign-on (SSO) to Office 365 for Geek in the Weeds users.

For AD RMS configuration, both environments will initially use Cryptographic Mode 1 and will have a trusted user domain (TUD).  SQL Server Express will host the AD RMS database and I will store the cluster key locally within the database.  The use of a TUD will make the configuration a bit more interesting for reasons you’ll see in a future post.

Got all that?

In my next post I’ll cover how the architecture changes when migrating from AD RMS to Azure Information Protection.

AWS and Microsoft’s Cloud App Security

It seems like it's become a weekly occurrence to have sensitive data exposed due to poorly managed cloud services.  Due to Amazon's large market share with Amazon Web Services (AWS), many of these instances involve publicly accessible Simple Storage Service (S3) buckets.  In the last six months alone there were highly publicized incidents involving FedEx and Verizon.  While the cloud can be empowering, it can also be very dangerous when there is a lack of governance, visibility, and acceptance of the different security mindset the cloud requires.

Organizations that have been in operation for many years have grown very reliant on the network boundary acting as the primary security boundary.  As these organizations begin to move to a software-defined data center model, this traditional boundary quickly becomes less and less effective.  Unfortunately for these organizations, this, in combination with a lack of sufficient understanding of the cloud, gives rise to mistakes like sensitive data being exposed.

One way in which an organization can protect itself is to leverage technologies such as cloud access security brokers (cloud security gateways if you're a Forrester reader) to help monitor and control data as it travels between on-premises and the cloud.  If you're unfamiliar with the concept of a CASB, I covered it in a previous entry and included a link to an article which provides a great overview.

Microsoft has its own CASB offering called Microsoft Cloud App Security (CAS).  It's offered as part of Microsoft's Enterprise Mobility + Security (EMS) E5/A5 subscription.  Over the past several months multiple connectors to third-party software-as-a-service (SaaS) providers have been introduced, including one for AWS.  The capabilities with AWS are limited at this point to pulling administrative logs and user information, but it shows promise.

As per usual, Microsoft provides an integration guide which is detailed when it comes to button pushing, but not so much on the concepts and technical details of what is happening behind the scenes.  Since the Azure AD and AWS blog series has attracted so many views, I thought it would be fun and informative to do an entry on how Cloud App Security can be used with AWS.

I’m not in the habit of re-creating documentation so I’ll be referencing the Microsoft integration guide throughout the post.

The first thing that needs doing is the creation of a security principal in AWS Identity and Access Management (AWS IAM) that will be used by your tenant's instance of CAS to connect to resources in your AWS account.  The first four steps are straightforward, but step 5 could use a bit of an explanation.

awscas1.png

Here we're creating a custom IAM policy for the security principal granting it a number of permissions within the AWS account.  IAM policies are sets of permissions which are attached to a human or non-human identity or AWS resource and are evaluated when a call to the resource is made.  In the traditional on-premises world, you can think of an IAM policy as somewhat similar to a set of NTFS file permissions.  When the policy pictured above is created, the security principal is granted a set of permissions across all instances of CloudTrail, CloudWatch, and IAM within the account.

If you're unfamiliar with AWS services, CloudTrail is a service which audits the API calls made to AWS resources.  The information included in the events includes the action taken, the resource the action was taken upon, the security principal that performed the action, the date and time, and the source IP address of the security principal who performed the action.  The CloudWatch service allows for monitoring of metrics and optionally triggering events based upon metrics reaching specific thresholds.  The IAM service is AWS's identity store for the cloud management layer.
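To make that concrete, here is a heavily trimmed sketch of roughly what a CloudTrail record looks like.  The account ID, IP address, and timestamp are made up for illustration, and real records contain quite a few more fields.

{
  "eventTime": "2018-03-25T14:02:11Z",
  "eventSource": "iam.amazonaws.com",
  "eventName": "CreateUser",
  "userIdentity": {
    "type": "Root",
    "arn": "arn:aws:iam::111111111111:root",
    "accountId": "111111111111"
  },
  "sourceIPAddress": "203.0.113.10",
  "awsRegion": "us-east-1",
  "requestParameters": { "userName": "testuser1" }
}

You can see how a record like this gives CAS everything it needs for its activity log: who did what, when, and from where.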

Now that we have a basic understanding of the services, let's look at the permissions Microsoft is requiring for CAS to do its thing.  The CloudTrail permissions of DescribeTrails, LookupEvents, and GetTrailStatus allow CAS to query for all trails enabled on an AWS account (CloudTrail is enabled by default on all AWS resources), look up events in a trail, and get information about the trail such as start and stop logging times.  The CloudWatch permissions of Describe* and Get* are fancy ways of asking for READ permissions on CloudWatch resources.  These permissions include describe-alarm-history, describe-alarms, describe-alarms-for-metric, get-dashboard, and get-metric-statistics.  The IAM permissions are similar to what's being asked for in CloudWatch, basically asking for full read.
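Putting those together, the custom policy ends up looking roughly like the sketch below.  Treat this as an approximation rather than the exact policy from the integration guide; in particular, the iam:List* and iam:Get* actions are my assumption of what "full read" on IAM translates to, and the Sid is just a label I made up.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CloudAppSecurityReadOnly",
      "Effect": "Allow",
      "Action": [
        "cloudtrail:DescribeTrails",
        "cloudtrail:LookupEvents",
        "cloudtrail:GetTrailStatus",
        "cloudwatch:Describe*",
        "cloudwatch:Get*",
        "iam:List*",
        "iam:Get*"
      ],
      "Resource": "*"
    }
  ]
}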

Step number 11 instructs us to create a new CloudTrail trail.  AWS by default audits all events across all resources and stores them for 90 days.  Trails enable you to direct events captured by CloudTrail to an S3 bucket for archiving, analysis, and responding to events.
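If you prefer the command line over the console for step 11, creating and starting a trail looks something like the commands below.  The trail and bucket names are placeholders, and the S3 bucket has to exist already with a bucket policy that allows CloudTrail to write to it.

# create a trail that delivers CloudTrail events to an existing S3 bucket (names are examples)
aws cloudtrail create-trail --name cas-trail --s3-bucket-name my-cloudtrail-bucket
# turn on logging for the new trail and confirm it is recording
aws cloudtrail start-logging --name cas-trail
aws cloudtrail get-trail-status --name cas-trail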

awscas2.png

The trail created is consumed by CAS to read the information captured via CloudTrail.  The permissions requested above become a bit more clear now that we see CAS is requesting read access to all trails across an account for monitoring goodness.  I'm unclear as to why CAS is asking for read on CloudWatch alarms unless it has some integration where it monitors and reports on alarms configured for an AWS account.  The IAM read permissions are required so it can pull user information to use for the User Groups capability.

After the security principal is created and a sample trail is set up, it's time to configure the connector for CAS.  Steps 12 – 15 walk through the process.  When it is complete, AWS shows as a connected app.

awscas3.png

After a few hours data will start to trickle in from AWS.  Navigating to the Users and Accounts section shows all of the accounts found in the IAM instance for my AWS account.  Envision this as your one-stop shop for identifying all of the user accounts across your many cloud services.  A single pane of glass for identity across SaaS.

awscas4.png

On the Activity Log I see all of the API activities captured by CloudTrail.  If I wanted to capture more audit information, I could enable CloudTrail for the relevant resource and point it to the trail I configured for CAS.  I haven't tested what CAS does with multiple trails, but based upon the permissions we configured when we set up the security principal, it should technically be able to pull from any trail we create.

awscas7.png

Since the CAS and AWS integration is limited to pulling logging information, let's walk through an example of how we could use the data.  Take an example where an organization has a policy that the AWS root user should not be used for administrative activities due to the level of access the account gets by default.  The organization creates AWS IAM user accounts for each of its administrators who administer the cloud management layer.  In this scenario we can create a new policy in CAS to detect and alert on instances where the AWS root user is used.

First we navigate to the Policies page under the Control section of CAS.

awscas6.png

On the Policies page we're going to create a new policy with the settings in the image below.  We'll designate this as a high severity privileged account alert.  We're interested in knowing any time the account is used, so we choose the Single Activity option.

awscas7

We’ll pretend we were smart users of CAS and let it collect data for a few weeks to get a sampling of the types of events which are captured and to give us some data to analyze.  We also went the extra mile and leveraged the ability of CAS to pull in user information from AWS IAM such that we can choose the appropriate users from the drop-down menus.

Since this is a demo and my AWS lab has a lot of activity by the root account, we're going to limit our alerts to the creation of new AWS IAM users.  To do that we set our filter to look for an Activity type equal to Create user.  Our main goal is to capture usage of the root account, so we add another filter rule that searches for a User with the name equal to aws root user where it is the actor in the event.
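If you want to see the raw CloudTrail side of what this CAS filter is matching on, you can query the same events directly.  A sketch with the AWS CLI is below; note that lookup-events only accepts a single lookup attribute per call, so filtering on the event name and on the root user are two separate queries.

# find recent CreateUser events (the CAS Activity type filter above maps to this event name)
aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=CreateUser --max-results 10
# or find recent activity performed by the root user
aws cloudtrail lookup-events --lookup-attributes AttributeKey=Username,AttributeValue=root --max-results 10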

awscas8.png

Finally we configure the alert to send an email to the administrator when the event occurs.  The governance capabilities don’t come into play in this use case.

awscas9

Next we jump back to AWS and create a new AWS IAM user named testuser1.  A few minutes after the user is created we see the event appearing in CloudTrail.
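If you'd rather generate the same test event from the command line, the equivalent CLI call is simply the one below (same user name as in the walkthrough).

aws iam create-user --user-name testuser1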

awscas10

After a few minutes, CAS generates an alert and I receive the email seen in the image below.  I'm given information as to the activity, the app, the date and time it was performed, and the client's IP address.

awscas11.png

If I bounce back to CAS I see one new Alert.  Navigating to the alert I’m able to dismiss it, adjust the policy that generated it, or resolve it and add some notes to the resolution.

awscas12.png

I also have the option to dig deeper to see some of the patterns of the user’s behavior or the pattern of the behaviors from a specific IP address as seen below.

awscas13.png

awscas14.png

All this information is great, but what can we do with it?  In this example, it delivers visibility into the administrative activities occurring at the AWS cloud management layer by centralizing the data into a single repository, to which I can also send other data such as O365, Box, and Salesforce activity.  By centralizing the information I can begin doing some behavioral analytics to develop standard patterns of behavior for my user base.  Understanding standard behavior patterns is key to staying ahead of the bad guys, whether they be insiders or outsiders.  I can search for deviations from standard patterns to detect a threat before it becomes too widespread.  I can also be proactive about putting alerts and enforcement in place (available for other app connectors in CAS but not AWS at this time) to stop the behavior before the threat is realized.  If I supplement this data with log information from my on-premises proxy via Cloud App Discovery, I get an even larger sample, improving the quality of the data as well as giving me insight into shadow IT.  Pulling those "shadow" cloud solutions into the light allows me to ensure the usage of the services complies with organizational policies and opens up the opportunity of reducing costs by eliminating redundant services.

Microsoft categorizes the capabilities that help realize these benefits as the Discover and Investigate capabilities of CAS.  The solution also offers a growing number of enforcement mechanisms (Microsoft categorizes these as Control) which add a whole other layer of power behind the solution.  Due to the limited integration with AWS, I can't demo those capabilities in this post.  I'll cover those in a future post.

I hope this post helped you better understand the value that CASBs/CSGs like Microsoft's Cloud App Security can bring to the table.  While the product is still very new and a bit sparse on support for third-party applications, the list is growing every day.  I predict the capabilities provided by technology such as Microsoft's Cloud App Security will be as standard to IT as a firewall in the years to come.  If you're already in Office 365, you should be integrating these capabilities into your arsenal to understand the value they can bring to your business.

Thanks and have a wonderful week!

pfsense + squid + Kerberos

Hi everyone,

I hope all of you had an enjoyable holiday. I spent my week off from work spending time with the family and catching up on some reading. One area I decided to spend some time reading up on is Microsoft's Cloud App Security. For those unfamiliar with the solution, it's Microsoft's entry into the cloud access security broker (CASB) (or Cloud Security Gateway (CSG) if you're a Forrester reader) market. If you haven't heard of "CASB" or "CSG", don't worry too much. While the terminology is new, many of the technologies that make up a typical CASB or CSG are not new; they're simply used together in new and creative ways. For a quick intro, take a read through this article and follow up with some Forrester and Gartner research for a deeper dive.

Since I haven’t had much experience with a product specifically marketing itself as a CASB, I thought it would be a great opportunity to play around with Microsoft’s solution. A good first step for any organization to grasp the value of a CASB is to explore what’s happening within the organization outside the view of IT, or as the marketers love to call it, shadow IT. The ease of consuming cloud technologies such as software as a service (SaaS) applications has been both a blessing and a curse. The new technology has been wonderful in cutting IT costs, bringing the technology closer to the business, providing for shorter time to market for new features, and providing simpler integration paths for different applications and services. On the negative side, the ease of use of these solutions means an average employee is using far more of them than is officially sanctioned by IT. This can lead to issues like loss of critical data, non-compliance with policy, or multiple business groups within an organization subscribing to the same service resulting in redundant licensing costs.

Wouldn't it be great to get visibility into that shadow IT? Since a majority of cloud solutions work over standard HTTP(S), the services are readily accessible to the user without the user having to request additional ports be opened on the firewall. This means it's much more challenging to track who is using what and what they're doing with those services. Many organizations attempt to control these types of solutions with a traditional forward web proxy. However, too much focus is put on blocking the "bad" sites instead of analyzing the overall patterns of usage of services. Microsoft's Azure AD Cloud Discovery is a feature of Azure Active Directory that can be used in conjunction with Cloud App Security's catalog of apps to provide visibility into what's being accessed as well as information about the risks the services being accessed present to the organization.

To simulate a typical medium to large organization and get some good testing done with Cloud App Discovery, I'm going to add a forward web proxy to my home lab. As I've mentioned in previous blog entries, I have a small form factor computer running pfsense which I use as my lab networking security appliance. Out of the box, it supports a base install of Squid which can be added and configured to act as a forward web proxy with minimal effort. It gets a bit more challenging when you want to add authentication to the proxy because the built-in options for the pfsense implementation are limited to local, LDAP, and RADIUS authentication. I want authentication so I can identify users connecting to the proxy and associate the web connections with specific users, but I want to use Kerberos so I get that seamless single sign-on experience.

Like many open source products, the documentation on how to setup Squid running on pfsense and using Kerberos authentication is pretty terrible. Searching the all-powerful Google presents lots of forum posts with people asking how to do it, pieces of answers that don’t make much sense, and some Wikis on how to configure Squid to use Kerberos on a standard server. Given the lack of good documentation, I thought it would be fun to work my way through it and compile a walkthrough. I’m issuing the standard disclaimer that this is intended for lab purposes only. If you’re trying to deploy pfsense and Squid in a production environment, do more reading and spend time doing it safely and securely.

I won’t be covering the basic setup of pfsense as there are plenty of guides out there and the process is simple for anyone with any experience in the network appliance realm. For this demonstration I’ll be running a box with pfsense 2.4.2 installed.

On to the walkthrough!

The first step in the process is to add the Squid package through the pfsense package manager UI.

squid1

On the Package Manager screen, select the Available Packages section and install the Squid package.  After the installation is complete, you’ll see Squid shown in the Installed Packages section.

squid2

Notice the package installed is a branch of the 3.5 release while the latest release available directly from Squid is 4.0.  It's always fun to have the latest and greatest, but pfsense is an all-in-one solution so it comes with some sacrifices.  Let's get some of the basic configuration settings out of the way.  Go to the Services menu, select the Squid Proxy Server menu item, and go to the General section.  First up, choose the interface you want Squid to be available on and specify a port for it to listen on.

squid3.png

Now check off the Allow Users on Interface option unless you have a reason to limit the proxy to certain subnets attached to the interface.  Additionally, I'd recommend checking the Resolve DNS IPv4 First option.  I banged my head against the wall with a ton of issues with Squid when I turned on authentication and this option wasn't set.  You can thank me for saving you hours of Googling and trying other options.

squid4

Setup basic logging with the settings below.

squid5.png

Basic settings are complete and it’s a good time to test the proxy from a client machine to verify its base functionality.  You can do this by directing one of your client machines to use the proxy and attempting to access a website.

After you have verified functionality you'll need to add support for SSH to the pfsense box since we'll need to make some changes via the command shell.  For that you'll want to navigate to the System menu, select the Advanced menu item, and go to the Admin Access section.  Scroll down from there to the Secure Shell section, click the checkbox for Enable Secure Shell, and set the SSH port to the port of your choice.  I chose 50000.

squid6

The Secure Shell Server is active, but the firewall blocks access to it across all interfaces by default.  You now need to create the appropriate firewall rule to allow access from devices behind the interface you wish to use to SSH to the box.  For me this is the interface that my lab devices connect to.  For this you’ll select Firewall from the top menu, select the Rules menu item, and select the appropriate interface from the menu items.  Once there, click the Add button to create a firewall rule allowing devices within the subnet to hit the router interface over the port you configured earlier as seen below.  The SSH listener will now be running and will be accessible from the designated interface.

squid7.png

Now you must configure DNS such that the pfsense box can resolve the Active Directory DNS namespace to perform Kerberos-related activities.  You can go the easy route and make the Active Directory domain controller the primary DNS server for pfsense via the GUI.  However, I use pfsense as the primary DNS resolver for the lab environment and forward queries to Google's DNS servers at 8.8.8.8.

In order to continue using my preferred configuration, I needed to take a few additional steps.  First I needed to add a Domain Override to the DNS Resolver service on pfsense to ensure it doesn't pass the query along to the external DNS server.  I did this by selecting Services from the main menu, selecting the DNS Resolver menu item, and going to the General Settings section.  I then scrolled down to the Domain Overrides section and added the appropriate override for my Active Directory DNS namespace as seen below.  Take note that you can't go modifying resolv.conf as you would in a normal Linux distro since pfsense will scrub any changes you make to the file each time it restarts its services.  Get used to this behavior; we're going to see it a number of times throughout this blog entry and we'll have to learn to work around that limitation (feature?).

squid8.png

Next up you'll want to verify name resolution is working as intended, which can be tested by running a query from the pfsense box.  Go to Diagnostics on the main menu, select the DNS Lookup item, and type in the hostname representing the Active Directory DNS namespace.  It should resolve to the entries representing the domain controllers in your Active Directory domain.  Once the test succeeds, the DNS configuration is complete.

squid9.png

On to the guts of the configuration.  Pfsense comes with the krb5 package installed so all you need to do is configure it.  For that you are going to need to access the command shell.  Open up your favorite SSH client and connect to the pfsense box as an administrative user.  Upon successful login you’ll see the menu below.

squid10.png

You want to hit the command shell, so choose option 8 and you will be dropped into the shell.

squid11.png

The first step is to configure the krb5 package to integrate with the Active Directory domain.  For that you'll need to create a krb5.conf file.  Create a new krb5.conf file in the /etc/ directory and populate it with the appropriate information.  I've included the content of my krb5.conf file as an example.

[libdefaults]
default_realm = JOURNEYOFTHEGEEK.LOCAL
dns_lookup_realm = false
dns_lookup_kdc = true
default_tgs_enctypes = aes128-cts-hmac-sha1-96
default_tkt_enctypes = aes128-cts-hmac-sha1-96
permitted_enctypes = aes128-cts-hmac-sha1-96

[realms]
JOURNEYOFTHEGEEK.LOCAL = {
kdc = jog-dc.journeyofthegeek.local
}

[domain_realm]
.journeyofthegeek.local = JOURNEYOFTHEGEEK.LOCAL
journeyofthegeek.local = JOURNEYOFTHEGEEK.LOCAL

[logging]
kdc = FILE:/var/log/kdc.log
default = FILE:/var/log/krb5lib.log

Check out the MIT documentation on the options available to you in the krb5.conf. I made the choice to limit the encryption algorithms to AES128 for simplicity purposes, feel free to use something else if you wish. Once the settings are populated the file can be saved.

It's time to test the Kerberos configuration.  You do that by running kinit and authenticating as a valid user in the Active Directory domain.  If the configuration is correct, klist will display a valid Kerberos ticket-granting ticket (TGT) for the user.

squid12.png
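A quick sanity check from the pfsense shell looks something like the commands below; the username is just an example, so substitute any valid domain account.

# request a TGT for a domain user (you'll be prompted for the password)
kinit rick@JOURNEYOFTHEGEEK.LOCAL
# list cached tickets - a successful test shows a krbtgt/JOURNEYOFTHEGEEK.LOCAL entry
klist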

The system is now configured to interact with the Active Directory domain using Kerberos.  You now need to create a security principal in Active Directory to represent the Squid service.  Create a new user in Active Directory and name it whatever you wish; I used svc_squid for this lab.  Since I chose to use AES128, I had to select the account option on the user account in Active Directory Users and Computers (ADUC) indicating the account supports AES128 encryption.  You can ignore that step if you chose not to force an encryption level.  Now a service principal name (SPN) is needed to identify the service when a user attempts to authenticate to it.  For that you'll need to open an elevated command prompt and use the setspn command.

squid13.png

Wonderful, you have a security principal created and it includes the appropriate identifier.  You now need to create a keytab that the service can use to authenticate to Active Directory.  In comes ktpass.  From the same elevated command prompt, run the command as seen below.

squid14.png

Pay attention to case sensitivity because it matters when we're talking MIT Kerberos, which is the Kerberos implementation pfsense is using.  The link I included above will explain the options.  I set the crypto option to AES128 to ensure the keytab aligns with the other options I've configured around encryption.
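For reference, the two commands end up looking roughly like the ones below.  The proxy hostname (proxy.journeyofthegeek.local) and keytab file name are examples I've made up for illustration; use the FQDN your clients will actually use to reach the pfsense box, and keep the service class and realm in the case shown.

REM register the SPN for the proxy on the service account (hostname is an example)
setspn -S HTTP/proxy.journeyofthegeek.local svc_squid

REM generate the keytab, mapping the SPN to the service account and forcing AES128
ktpass -princ HTTP/proxy.journeyofthegeek.local@JOURNEYOFTHEGEEK.LOCAL -mapuser svc_squid@journeyofthegeek.local -crypto AES128-SHA1 -ptype KRB5_NT_PRINCIPAL -pass * -out squid.keytab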

Next up you need to transfer the keytab to the pfsense box.  I used WinSCP to copy it to the /usr/local/etc/squid/ directory.  The keytab is now on the pfsense box, but you need to tell Squid where it is.  In a typical Squid implementation you'd define a variable in the Squid startup script which would be consumed by the authentication helper.  However, this is another case where pfsense will overwrite any changes you make to the startup script.
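On a standard Squid box that variable would typically be the MIT Kerberos KRB5_KTNAME environment variable, along the lines of the single line below (path matching where I placed the keytab).  I'm including it only for contrast, since on pfsense this approach won't survive a service restart.

# what you'd normally export in the Squid startup script - not viable on pfsense
export KRB5_KTNAME=/usr/local/etc/squid/squid.keytab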

In addition to being unable to modify the startup script, pfsense also overwrites any changes you make directly to the squid.conf file.  To get around this you'll need to add the configuration options to the config file through the pfsense GUI.  From within the GUI go to the Services section of the main menu, select the Squid Proxy Server menu item, go to the General section, scroll down and hit the Advanced Options button, and scroll to the Advanced Features section.  In the Custom Options (Before Auth) field, you'll want to add the lines below.

squid15.png

The first four lines I've added here are called directives in Squid.  The first directive instructs Squid to use the negotiate_kerberos_auth authentication helper and sets a few options for it.  The -k option allows me to point Squid at the keytab file I added to the server, which I couldn't do with a variable in the startup script.  The -d option writes debug information for the helper to Squid's cache.log, and the -t option shuts off the replay cache for MIT Kerberos.  The second directive sets the number of child authentication processes to 1,000.  You'll want to do some research on this directive if you're moving this into a production environment; I simply chose 1,000 so I wouldn't run any risk of getting my authentication requests queued for the purposes of this lab.  The third directive is set to on by default and should only be set to off if you run into issues with PUT/POST requests.

The fourth directive starts enforcing access controls within Squid.  Access controls within Squid are a bit weird.  The Squid wiki does a decent job of explaining how they work.  The short of what I've done in the fourth directive is create an access control list (ACL) called auth which will contain all users who successfully authenticate against Squid.  The next line denies HTTP access to any user who doesn't belong to the auth ACL (blocking unauthenticated users), and the final line allows users who are in the auth ACL (allowing authenticated users).  A sketch of the full set of lines follows.
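Based on the directives described above, the Custom Options (Before Auth) field ends up looking roughly like this.  The helper path is the standard location for the FreeBSD Squid package and the keytab path matches where I copied the keytab earlier; treat both as assumptions to verify on your own box.

# negotiate_kerberos_auth helper with keytab (-k), debug logging (-d), and the MIT replay cache disabled (-t none)
auth_param negotiate program /usr/local/libexec/squid/negotiate_kerberos_auth -k /usr/local/etc/squid/squid.keytab -d -t none
# number of helper processes Squid can spawn for authentication
auth_param negotiate children 1000
# on by default; only set to off if PUT/POST requests misbehave
auth_param negotiate keep_alive on
# require successful proxy authentication and gate http_access on it
acl auth proxy_auth REQUIRED
http_access deny !auth
http_access allow auth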

With that last bit of configuration, you've gotten pfsense and Squid configured for Kerberos authentication.  I'll quickly demonstrate what a successful implementation looks like.  For that I'm going to bounce over to a Windows 10 domain-joined machine with Chrome installed and configured to use the proxy server.  Navigating to Amazon displays the webpage with no authentication prompts, and running klist from a command prompt shows I have a Kerberos ticket for the proxy.

squid16.png

Going back into the pfsense GUI (Services menu, Squid Proxy Server menu item, Real Time section), the access log shows Rick Sanchez accessing Amazon, and the Cache Log section shows successful consumption of the Kerberos ticket.

squid17.png

In a future post I’ll dig a bit deeper into Azure AD Cloud Discovery and setup automatic forwarding of logs using the Microsoft collector.

Have a happy New Year!