Passing the AZ-300

Hello all!

Over the past year I’ve been buried in Amazon Web Services (AWS), learning the platform, and working through the certification paths.  As part of my new role at Microsoft, I’ve been given the opportunity to pursue the Microsoft Certified: Azure Solutions Architect Expert.  In the world of multi-cloud, who doesn’t want to learn multiple platforms? 🙂

The Microsoft Certified: Azure Solutions Architect Expert certification is part of Microsoft’s new set of certifications.  If you’re already familiar with the AWS Certification track, the new Microsoft track is very similar in that it has three paths.  These paths are Developer, Administrator, and Architect.  Each path consists of two exams, again similar to AWS’s structure of Associate and Professional.

Even though the paths are similar, the focus and structure of Microsoft’s first tier of exams differ greatly from the AWS Associate exams.  The AWS exams are primarily multiple choice, while Microsoft’s first level of exams consists of multiple choice, drag and drop, fill in the blank, case studies, and emulated labs.  Another difference between the two is that the AWS exams focus greatly on how the products work and when and where to use each product.  The Microsoft first level exams cover those topics too, but additionally test your ability to implement the technologies.

When I started studying for the AZ-300 – Microsoft Azure Architect Technologies two weeks ago, I had a difficult time finding good study materials because the exam is so new and has changed a few times since Microsoft released it last year.  Google searches brought up a lot of illegitimate study materials (brain dumps) but not much in the way of helpful materials beyond the official Azure documentation.  After passing the exam this week, I wanted to give back to the community and provide some tips, links, and the study guide I put together to help prepare for the exam.

To prepare for an exam I have a standard routine.

  1. I start by reviewing the official exam requirements.
  2. From there, I take one or two on-demand training classes.  I watch each lesson in a module at 1.2x speed (1x always seems too slow, which I think is largely due to living in Boston where we tend to talk very quickly).  I then go back through each module at 1.5x to 2.0x, taking notes on paper.  I then type up the notes and organize them into topics.
  3. Once I’m done with the training I’ll usually dive deep into the official documentation on the subjects I’m weak on or that I find interesting.
  4. During the entirety of the learning process I will build out labs to get a feel for implementation and operation of the products.
  5. I wrap it up by adding the additional learnings from the public documentation and labs into my digital notes.  I then pull out the key concepts from the digital notes and write up flash cards to study.
  6. Practice makes perfect, and for that I leverage legitimate practice exams (brain dumps make the entire exercise a pointless waste of time and degrade the value of the certification) like those offered by MeasureUp.

Yes, I’m a bit nuts about my studying process, but I can assure you it works: you will really learn the content and not just memorize it.

From a baseline perspective, my experience with Microsoft’s cloud services was primarily in Azure Active Directory and Azure Information Protection.  For Azure I had built some virtual networks with virtual machines in the past, but nothing more than that.  I have a pretty solid foundation in AWS and cloud architectural patterns, which definitely came in handy since the base offerings of each of the cloud providers are fairly similar.

For on-demand training A Cloud Guru has always been my go-to.  Unfortunately, their Azure training options aren’t as robust as the AWS offerings, but Nick Coyler’s AZ-300 course is solid.  It CANNOT be your sole source of material, but as with most training from the site, it will give you the 10,000 ft view.  Once I finished with A Cloud Guru, I moved on to Udemy.  Scott Duffy’s AZ-300 course does not have close to the detail of Nick’s course, but it provides a lot more hands-on activities that will get you working with the platform via the GUI and the CLI.  Add both courses together and you’ll cover a good chunk of the exam.

The courses themselves are not sufficient to pass the exam.  They will give you the framework, but docs.microsoft.com is your best friend.  There is a risk you’ll dive deeper into a product than you need to, so reference back to the exam outline to keep yourself honest.  Hell, the worst case scenario is you learn more than you need to. 🙂  Gregor Suttie put together a wonderful course outline with links to the official documentation that will help you target key areas of the public documentation.

Perhaps most importantly, you need to lab.  Then lab again.  Lab once more, and then another time.  Run through the Quickstarts and Tutorials on docs.microsoft.com.  Get your hands dirty with the CLI, PowerShell, and the Portal.  You don’t have to be an expert, but you’ll want to understand the basics and the general syntax of both the CLI and PowerShell.  The exam includes fully interactive labs where you’ll need to implement the products given a set of requirements.

Finally, I’ve added the study guides I put together to my GitHub.  I make no guarantees that the data is up to date or even that there aren’t mistakes in some of the content.  Use it as an artifact to supplement your studies as you prepare your own study guide.

Summing it up, don’t just look at the exam as a piece of virtual paper.  Look at it as an opportunity to learn and grow your skill set.  Take the time to not just memorize, but to understand and apply what you learn.  Be thankful you work in an industry where things change, providing you with the opportunity to learn something new and exercise that big brain of yours.

I wish you the best of luck in your studies and if you have additional materials or a website you’ve found helpful, please comment below.

Thanks!

Some updates…

Hello folks!  Life has been busy with some wonderful work and some big career changes.  As some of you may know, I moved on from my role at the Federal Reserve last summer.  While I loved the job, the people, and the organization, I wanted to try something new and different.

I was lucky enough to have the opportunity to work for one of the big three cloud providers in a security-focused professional services role supporting public sector customers.  The role was amazing: I learned a TON about a cloud platform I had never worked with, interacted with some of the smartest people I’ve ever met, and had the chance to help architect and implement some really awesome environments for some stellar customers.

Unfortunately the travel started taking a toll on my personal life and family time.  I made the tough decision to move on and find something that was a bit more regional and involved less travel.  I struck the lottery once more and in April started as a Cloud Solution Architect at Microsoft focusing on Infrastructure and Security in Azure.  I’ve once again been drinking from multiple firehoses and learning my second cloud platform.  It’s been a ball so far and I’m extremely excited to learn and contribute to Microsoft’s mission to empower every person and every organization on the planet to achieve more!

Expect a lot more activity on this blog as I share my experiences and my learnings with the wider tech community.  It’s going to be a fun ride!

Capturing and Visualizing Office 365 Security Logs – Part 1

Welcome back again my fellow geeks!

I’ve been busy over the past month nerding out on some pet projects.  I thought it would be fun to share one of those pet projects with you.  If you had a chance to check out my last series, I walked through my first Python experiment, which was to write a reusable tool for pulling data from Microsoft’s Graph API (Microsoft Graph).

For those of you unfamiliar with Microsoft Graph, it’s the RESTful API (application programming interface) used to interact with Microsoft cloud offerings such as Office 365 and Azure.  You’ve probably been interacting with it without even knowing it through the many PowerShell modules Microsoft has released to programmatically interact with those services.

One of the many resources which can be accessed through Microsoft Graph is the set of Azure AD (Active Directory) security and audit reports.  If you’re using Office 365, Microsoft Azure, or simply Azure AD as an identity platform for SSO (single sign-on) to third-party applications like SalesForce, these reports provide critical security data.  You’re going to want to capture them, store them, and analyze them.  You’re also going to have to account for the limited window during which Microsoft makes these logs available.

The challenge is that they are not available via the means by which logs have traditionally been captured on-premises, such as using syslogd, installing a SIEM agent, or even Windows Event Log Forwarding.  Instead you’ll need to take a step forward in evolving the way you’re used to doing things.  This is what moving to the cloud is all about.

Microsoft allows you to download the logs manually via the Azure Portal GUI (graphical user interface) or capture them by programmatically interacting with Microsoft Graph.  While the former option may work for ad-hoc use cases, it doesn’t scale.  Instead we’ll explore the latter method.
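To make that concrete, below is a minimal sketch of what the programmatic route can look like in Python.  It assumes you already have an access token for an Azure AD app registration that has been granted permission to read the reports (such as AuditLog.Read.All); the helper name and filter are illustrative, not code from my repo.

```python
import requests

# Microsoft Graph v1.0 endpoint for the Azure AD sign-in report
GRAPH_SIGNINS_URL = "https://graph.microsoft.com/v1.0/auditLogs/signIns"


def get_signins(access_token, filter_clause=None):
    """Page through the Azure AD sign-in report via Microsoft Graph."""
    headers = {"Authorization": f"Bearer {access_token}"}
    params = {"$filter": filter_clause} if filter_clause else {}
    url = GRAPH_SIGNINS_URL
    events = []
    while url:
        response = requests.get(url, headers=headers, params=params)
        response.raise_for_status()
        payload = response.json()
        events.extend(payload.get("value", []))
        # Graph returns an @odata.nextLink when there are more pages to fetch
        url = payload.get("@odata.nextLink")
        params = {}  # the nextLink already carries the query parameters
    return events


# Example (token acquisition not shown here):
# signins = get_signins(token, "createdDateTime ge 2019-01-01T00:00:00Z")
```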

If you have an existing enterprise-class SIEM (Security Information and Event Management) solution such as Splunk, you’ll have an out-of-the-box integration.  However, what if you don’t have such a platform, your organization isn’t yet ready to let that platform reach out over the Internet, or you’re interested in doing this for a personal Office 365 subscription?  I fell into the last category and decided it would be an excellent use case to get some experience with Python and Microsoft Graph, and to take advantage of some of the data services offered by AWS (Amazon Web Services).  This is the use case and solution I’m going to cover in this post.

Last year I had a great opportunity to dig into operational and security logs to extract useful data to address some business problems.  It was my first real opportunity to examine large amounts of data and to create different visualizations of that data to extract useful trends about user and application behavior.  I enjoyed the hell out of it and thought it would be fun to experiment with my own data.

I decided that my first use case would be Office 365 security logs.  As I covered in my last series my wife’s Office 365 account was hacked.  The damage was minor as she doesn’t use the account for much beyond some crafting sites (she’s a master crocheter as you can see from the crazy awesome Pennywise The Clown she made me for Christmas).

[Image: img_4301]

The first step in the process was determining an architecture for the solution.  I gave myself a few requirements:

  1. The solution must not be dependent on my home lab infrastructure
  2. Storage for the logs must be cheap and readily available
  3. The credentials used in my Python code need to be properly secured
  4. The solution must be automated and notify me of failures
  5. The data needs to be available in a form that it can be examined with an analytics solution

Based upon the requirements I decided to go the serverless (don’t hate me for using that tech buzzword 🙂 ) route.  My decisions were:

  • AWS Lambda would run my code
  • Amazon CloudWatch Events would be used to trigger the Lambda once a day to download the last 24 hours of logs
  • Amazon S3 (Simple Storage Service) would store the logs
  • AWS Systems Manager Parameter Store would store the parameters my code used leveraging AWS KMS (Key Management Service) to encrypt the credentials used to interact with Microsoft Graph
  • Amazon Athena would hold the schema for the logs and make the data queryable via SQL
  • Amazon QuickSight would be used to visualize the data by querying Amazon Athena

The high level architecture is pictured below.

[Image: untitled – high-level architecture diagram]

I had never done a Lambda before so I spent a few days looking at some examples and doing the typical Hello World that we all do when we’re learning something new.  From there I took the framework of Python code I put together for general purpose queries to the Microsoft Graph and adapted it into two Lambdas.  One Lambda would pull Sign-In logs while the other would pull Audit Logs.  I also wanted a repeatable way to provision the Lambdas to share with others, get some CloudFormation practice, and brush up on my very dusty Bash scripting.  The results are located here in one of my GitHub repos.
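To give you a feel for the shape of one of those functions, here is a trimmed-down sketch (not the actual code from the repo) of the sign-in log Lambda.  The LOG_BUCKET environment variable and the get_signin_events helper are hypothetical stand-ins for the real plumbing.

```python
import json
import os
from datetime import datetime, timedelta

import boto3

s3 = boto3.client("s3")


def get_signin_events(start_time):
    # Hypothetical stand-in for the Microsoft Graph query logic; the real
    # implementation lives in the repo referenced above.
    return []


def lambda_handler(event, context):
    """Fired once a day by a CloudWatch Events rule; grabs the last 24 hours
    of Azure AD sign-in events and writes them to S3 as JSON."""
    bucket = os.environ["LOG_BUCKET"]  # assumed environment variable
    start = datetime.utcnow() - timedelta(days=1)

    events = get_signin_events(start)

    key = f"signins/{start:%Y/%m/%d}/signins.json"
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=json.dumps(events).encode("utf-8"),
        ContentType="application/json",
    )
    return {"records_written": len(events), "s3_key": key}
```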

I’m going to stop here for this post because we’ve covered a fair amount of material.  Hopefully after reading this post you understand that you have to take a new tack with getting logs for cloud-based services such as Azure AD.  Thankfully the cloud has brought us a whole new toolset we can use to automate the extraction and storage of those logs in a simple and secure manner.

In my next post I’ll walk through how I used Athena and QuickSight to put together some neat dashboards to satisfy my nerdy interests and get better insight into what’s happening on a daily basis with my Office 365 subscription.

See you next post and go Pats!

Using Python to Pull Data from MS Graph API – Part 1

Welcome to 2019 fellow geeks! I hope each of you had a wonderful holiday with friends and family.

It’s been a few months since my last post. As some of you may be aware I made a career move last September and took on a new role with a different organization. The first few months have been like drinking from multiple fire hoses at once and I’ve learned a ton. It’s been an amazing experience that I’m excited to continue in 2019.

One area I’ve been putting some focus in is learning the basics of Python. I’ve been a PowerShell guy (with a bit of C# thrown in there) for the past six years so diving into a new language was a welcome change. I picked up a few books on the language, watched a few videos, and it wasn’t clicking. At that point I decided it was time to jump into the deep end and come up with a use case to build out a script for. Thankfully I had one queued up that I had started in PowerShell.

Early last year my wife’s Office 365 account was hacked. Thankfully no real damage was done beyond some spam email that was sent out. I went through the wonderful process of changing her passwords across her accounts, improving the complexity and length, getting her on-boarded with a password management service, and enabling Azure MFA (Multi-factor Authentication) on her Office 365 account and any additional services she was using that supported MFA options.  It was not fun.

Curious of what the logs would have shown, I had begun putting together a PowerShell script that was going to pull down the logs from Azure AD (Active Directory), extract the relevant data, and export it to CSV (comma-separated values) where I could play around with it in whatever analytics tool I could get my hands on. Unfortunately life happened and I never had a chance to finish the script or play with the data. This would be my use case for my first Python script.

Azure AD offers a few different types of logs which Microsoft divides into a security pillar and an activity pillar. For my use case I was interested in looking at the reports in the Activity pillar, specifically the Sign-ins report. This report is available for tenants with an Azure AD Premium P1 or P2 subscription (I added P2 subscriptions to our family accounts last year).  The sign-in logs have a retention period of 30 days and are available either through the Azure Portal or programmatically through the MS Graph API (Application Programming Interface).

My primary goals were to create as much reusable code as possible and experiment with as many APIs/SDKs (Software Development Kits) as I could.  This was accomplished by breaking the code into various reusable modules and leveraging AWS (Amazon Web Services) services for secure storage of Azure AD application credentials and cloud-based storage of the exported data.  Going this route forced me to use the MS Graph API, Microsoft’s Azure Active Directory Library for Python (or ADAL for short), and Amazon’s Boto3 Python SDK.
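As a taste of the ADAL piece, here is a minimal sketch of acquiring a Microsoft Graph token with the client credentials grant.  The tenant value is a placeholder, and the client ID and secret are the values from your Azure AD application registration.

```python
import adal

TENANT = "yourtenant.onmicrosoft.com"  # placeholder: your tenant name or ID
AUTHORITY = f"https://login.microsoftonline.com/{TENANT}"
RESOURCE = "https://graph.microsoft.com"  # the MS Graph resource identifier


def get_graph_token(client_id, client_secret):
    """Use the client credentials grant to get a bearer token for MS Graph."""
    context = adal.AuthenticationContext(AUTHORITY)
    token = context.acquire_token_with_client_credentials(
        RESOURCE, client_id, client_secret
    )
    return token["accessToken"]
```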

On the AWS side I used AWS Systems Manager Parameter Store to store the Azure AD credentials as secure strings encrypted with an AWS KMS (Key Management Service) customer-managed customer master key (CMK).  For cloud storage of the log files I used Amazon S3.
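Here is a rough sketch of that AWS piece with Boto3.  The parameter names and bucket are made up for illustration, and WithDecryption=True is what has Parameter Store hand back the SecureString values decrypted with the KMS CMK.

```python
import boto3

ssm = boto3.client("ssm")
s3 = boto3.client("s3")


def get_azure_ad_credentials():
    """Pull the Azure AD application credentials from Parameter Store.
    The parameter names are examples, not the ones from my repo."""
    client_id = ssm.get_parameter(
        Name="/azuread/client_id", WithDecryption=True
    )["Parameter"]["Value"]
    client_secret = ssm.get_parameter(
        Name="/azuread/client_secret", WithDecryption=True
    )["Parameter"]["Value"]
    return client_id, client_secret


def upload_log_file(bucket, key, local_path):
    """Push an exported log file to S3 for cheap, durable storage."""
    s3.upload_file(local_path, bucket, key)
```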

Lastly I needed a development environment and source control.  For about a day I simply used Sublime Text on my Mac and saved the file to a personal cloud storage account.  This was obviously not a great idea so I decided to finally get my GitHub repository up and running.  Additionally I moved over to using AWS’s Cloud9 for my IDE (integrated development environment).  Cloud9 has the wonderful perk of being web-based and has the capability of creating temporary credentials that can do most of what my AWS IAM user can do.  This made it simple to handle permissions to the various resources I was using.

Once the instance of Cloud9 was spun up I needed to set the environment up for Python 3 and add the necessary libraries.  The AMI (Amazon Machine Image) used by the Cloud9 service to provision new instances includes both Python 2.7 and Python 3.6.  This fact matters when adding the ADAL and Boto3 modules via pip because if you simply run a pip install module_name it will be installed for Python 2.7.  Instead you’ll want to execute the command python3 -m pip install module_name which ensures that the two modules are installed in the appropriate location.

In my next post I’ll walk through and demonstrate the script.

Have a great week!

The Evolution of AD RMS to Azure Information Protection – Part 1

Collaboration.  It’s a term I hear at least a few times a day when speaking to my user base.  The ability to seamlessly collaborate with team members, across the organization, with trusted partners, and with customers is a must.  It’s a driving force behind much of the evolution of software-as-a-service collaboration offerings such as Office 365.  While the industry is evolving to make collaboration easier than ever, it’s also introducing significant challenges for organizations to protect and control their data.

In a recent post I talked about Microsoft’s entry into the cloud access security broker (CASB) market with Cloud App Security (CAS) and its capability to provide auditing and alerting on activities performed in Amazon Web Services (AWS).  Microsoft refers to this collection of features as the Investigate capability of CAS.  Before I cover an example of the Control features in action, I want to talk about the product that works behind the scenes to provide CAS with many of the Control features.

That product is Azure Information Protection (AIP) and it provides the capability to classify, label, and protect files and email.  The protection piece is provided by another Microsoft product, Azure Active Directory Rights Management Services (Azure RMS).  Beyond just encrypting a file or email, Azure RMS can control what a user can do with a file such as preventing a user from printing a document or forwarding an email.  The best part?  The protection goes with the data even when it leaves your security boundary.

For those of you that have read my blog you can see that I am a huge fanboy of the predecessor to Azure RMS, Active Directory Rights Management Services (AD RMS, previously Rights Management Service or RMS for you super nerds).  AD RMS has been a role available in Microsoft Windows Server since Windows Server 2003.  It was a product well ahead of its time that unfortunately never really caught on.  Given my love for AD RMS, I thought it would be really fun to do a series looking at how AIP has evolved from AD RMS.  It’s a dramatic shift from a rather unknown product to a product that provides capabilities that will be as standard and as necessary as Antivirus was to the on-premises world.

I built a pretty robust lab environment (two actually) such that I could demonstrate the different ways the solutions work as well as demonstrate what it looks like to migrate from AD RMS to AIP.  Given the complexity of the lab environment, I’m going to take this post to cover what I put together.

The layout looks like this:

[Image: 1AIP1.png – lab layout]

On the modern end I have an Azure AD tenant with the custom domain of geekintheweeds.com assigned.  Attached to the tenant I have some Office 365 E5 and Enterprise Mobility + Security E5 trial licenses.  For the legacy end I have two separate labs set up in Azure, each within its own resource group.  Lab number one contains three virtual machines (VMs) that run a series of services including Active Directory Domain Services (AD DS), Active Directory Certificate Services (AD CS), AD RMS, and Microsoft SQL Server Express.  Lab number two contains four VMs that run the same set of services as Lab 1 in addition to Active Directory Federation Services (AD FS) and Azure Active Directory Connect (AADC).  The virtual network (vnet) within each resource group has been peered and both resource groups contain a virtual gateway which has been configured with a site-to-site virtual private network (VPN) back to my home Hyper-V environment.  In the Hyper-V environment I have two workstations.

Lab 1 is my “legacy” environment and consists of servers running Windows Server 2008 R2 and Windows Server 2012 R2 (AD RMS hasn’t changed in any meaningful manner since 2008 R2) and a client running Windows 7 Pro with Office 2013.  The DNS namespace for its Active Directory forest is JOG.LOCAL.  Lab 2 is my “modern” environment and consists of servers running Windows Server 2016 and a Windows 10 client running Office 2016.  It uses a DNS namespace of GEEKINTHEWEEDS.COM for its Active Directory forest and is synchronized with the Azure AD tenant I mentioned above.  AD FS provides SSO to Office 365 for Geek in The Weeds users.

For AD RMS configuration, both environments will initially use Cryptographic Mode 1 and will have a trusted user domain (TUD).  SQL Server Express will host the AD RMS database and I will store the cluster key locally within the database.  The use of a TUD will make the configuration a bit more interesting for reasons you’ll see in a future post.

Got all that?

In my next post I’ll cover how the architecture changes when migrating from AD RMS to Azure Information Protection.

AWS and Microsoft’s Cloud App Security

It seems like it’s become a weekly occurrence to have sensitive data exposed due to poorly managed cloud services.  Due to Amazon’s large market share with Amazon Web Services (AWS) many of these instances involve publicly-accessible Simple Storage Service (S3) buckets.  In the last six months alone there were highly publicized incidents with FedEx and Verizon.  While the cloud can be empowering, it can also be very dangerous when there is a lack of governance, visibility, and acceptance of the different security mindset cloud requires.

Organizations that have been in operation for many years have grown to be very reliant on the network boundary acting as the primary security boundary.  As these organizations begin to move to a software defined data center model this traditional boundary quickly becomes less and less effective.  Unfortunately for these organizations this, in combination with a lack of sufficient understanding of cloud, gives rise to mistakes like sensitive data being exposed.

One way in which an organization can protect itself is to leverage technologies such as cloud access security brokers (cloud security gateways if you’re a Forrester reader) to help monitor and control data as it travels between on-premises and the cloud.  If you’re unfamiliar with the concept of a CASB, I covered it in a previous entry and included a link to an article which provides a great overview.

Microsoft has its own CASB offering called Microsoft Cloud App Security (CAS).  It’s offered as part of Microsoft’s Enterprise Mobility and Security (EMS) E5/A5 subscription.  Over the past several months multiple connectors to third party software-as-a-service (SaaS) providers have been introduced, including one for AWS.  The capabilities with AWS are limited at this point to pulling administrative logs and user information but it shows promise.

As per usual, Microsoft provides an integration guide which is detailed on the button pushing, but not so much on the concepts and technical details of what is happening behind the scenes.  Since the Azure AD and AWS blog series has attracted so many views, I thought it would be fun and informative to do an entry for how Cloud App Security can be used with AWS.

I’m not in the habit of re-creating documentation so I’ll be referencing the Microsoft integration guide throughout the post.

The first thing that needs doing is the creation of a security principal in AWS Identity and Access Management (AWS IAM) that will be used by your tenant’s instance of CAS to connect to resources in your AWS account.  The first four steps are straightforward, but step 5 could use a bit of an explanation.

[Image: awscas1.png]

Here we’re creating a custom IAM policy for the security principal granting it a number of permissions within the AWS account.  IAM policies are sets of permissions which are attached to a human or non-human identity or AWS resource and are evaluated when a call to the resource is made.  In the traditional on-premises world, you can think of it as something somewhat similar to a set of NTFS file permissions.  When the policy pictured above is created the security principal is granted a set of permissions across all instances of CloudTrail, CloudWatch, and IAM within the account.

If you’re unfamiliar with AWS services, CloudTrail is a service which audits the API calls made to AWS resources.  Some of the information included in the events includes the action taken, the resource the action was taken upon, the security principal that made the action, the date and time, and the source IP address of the security principal who performed the action.  The CloudWatch service allows for monitoring of metrics and optionally triggering events based upon metrics reaching specific thresholds.  The IAM service is AWS’s identity store for the cloud management layer.

Now that we have a basic understanding of the services, let’s look at the permissions Microsoft is requiring for CAS to do its thing.  The CloudTrail permissions of DescribeTrails, LookupEvents, and GetTrailStatus allow CAS to query for all trails enabled on an AWS account (CloudTrail is enabled by default on all AWS resources), look up events in a trail, and get information about the trail such as start and stop logging times.  The CloudWatch permissions of Describe* and Get* are fancy ways of asking for READ permissions on CloudWatch resources.  These permissions include describe-alarms-history, describe-alarms, describe-alarms-for-metric, get-dashboard, and get-metric-statistics.  The IAM permissions are similar to what’s being asked for in CloudWatch, basically asking for full read.
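If you’d rather script that policy than click through the console, a Boto3 sketch along these lines should get you close.  The policy name is arbitrary, and the action list approximates what the integration guide asks for (the IAM portion is expressed here as broad read via Get*/List*).

```python
import json

import boto3

iam = boto3.client("iam")

# Approximates the read-style permissions the CAS integration guide requests;
# the policy name is arbitrary.
cas_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cloudtrail:DescribeTrails",
                "cloudtrail:LookupEvents",
                "cloudtrail:GetTrailStatus",
                "cloudwatch:Describe*",
                "cloudwatch:Get*",
                "iam:Get*",
                "iam:List*",
            ],
            "Resource": "*",
        }
    ],
}

response = iam.create_policy(
    PolicyName="MicrosoftCloudAppSecurity",
    PolicyDocument=json.dumps(cas_policy),
)
print(response["Policy"]["Arn"])
```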

Step number 11 instructs us to create a new CloudTrail trail.  AWS by default audits all events across all resources and stores them for 90 days.  Trails enable you to direct events captured by CloudTrail to an S3 bucket for archiving, analysis, and responding to events.
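The trail can also be created from code rather than the console.  A minimal Boto3 sketch, assuming the destination S3 bucket already exists and already carries the bucket policy CloudTrail needs to write to it:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Assumes the bucket exists and has a bucket policy allowing CloudTrail writes.
trail = cloudtrail.create_trail(
    Name="cas-trail",
    S3BucketName="my-cas-trail-bucket",
    IsMultiRegionTrail=True,
)

# A trail does nothing until logging is started.
cloudtrail.start_logging(Name="cas-trail")
```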

[Image: awscas2.png]

The trail created is consumed by CAS to read the information captured via CloudTrail.  The permissions requested above become a bit clearer now that we see CAS is requesting read access for all trails across an account for monitoring goodness.  I’m unclear as to why CAS is asking for read on CloudWatch alarms unless it has some integration in that it monitors and reports on alarms configured for an AWS account.  The IAM read permissions are required so it can pull user information it can use for the User Groups capability.

After the security principal is created and a sample trail is set up, it’s time to configure the connector for CAS.  Steps 12 – 15 walk through the process.  When it is complete, AWS shows as a connected app.

[Image: awscas3.png]

After a few hours data will start to trickle in from AWS.  Navigating to the Users and Accounts section shows all of the accounts found in the IAM instance for my AWS account.  Envision this as your one-stop shop for identifying all of the user accounts across your many cloud services.  A single pane of glass for identity across SaaS.

[Image: awscas4.png]

On the Activity Log I see all of the API activities captured by CloudTrail.  If I wanted to capture more audit information, I can enable CloudTrail for the relevant resource and point it to the trail I configured for CAS.  I haven’t tested what CAS does with multiple trails, but based upon the permissions we configured when we set up the security principal, it should technically be able to pull from any trail we create.

[Image: awscas7.png]

Since the CAS and AWS integration is limited to pulling logging information, let’s walk through an example of how we could use the data.  Take an example where an organization has a policy that the AWS root user should not be used for administrative activities due to the level of access the account gets by default.  The organization creates AWS IAM user accounts for each of its administrators who administer the cloud management layer.  In this scenario we can create a new policy in CAS to detect and alert on instances where the AWS root user is used.

First we navigate to the Policies page under the Control section of CAS.

[Image: awscas6.png]

On the Policies page we’re going to create a new policy using the settings in the image below.  We’ll designate this as a high severity privileged account alert.  We’re interested in knowing anytime the account is used so we choose the Single Activity option.

[Image: awscas7]

We’ll pretend we were smart users of CAS and let it collect data for a few weeks to get a sampling of the types of events which are captured and to give us some data to analyze.  We also went the extra mile and leveraged the ability of CAS to pull in user information from AWS IAM such that we can choose the appropriate users from the drop-down menus.

Since this is a demo and my AWS lab has a lot of activity by the root account, we’re going to limit our alerts to the creation of new AWS IAM users.  To do that we set our filter to look for an Activity type equal to Create user.  Our main goal is to capture usage of the root account so we add another filter rule that searches for a User with the name equal to aws root user where it is the actor in an event.

[Image: awscas8.png]

Finally we configure the alert to send an email to the administrator when the event occurs.  The governance capabilities don’t come into play in this use case.

[Image: awscas9]

Next we jump back to AWS and create a new AWS IAM user named testuser1.  A few minutes after the user is created we see the event appearing in CloudTrail.

[Image: awscas10]
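As an aside, that console step could just as easily be driven from code; a quick Boto3 equivalent of creating the test user looks like this.

```python
import boto3

iam = boto3.client("iam")

# Equivalent of creating the test user through the console
iam.create_user(UserName="testuser1")
```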

After a few minutes, CAS generates an alert and I receive the email seen in the image below.  I’m given information as to the activity, the app, the date and time it was performed, and the client’s IP address.

[Image: awscas11.png]

If I bounce back to CAS I see one new Alert.  Navigating to the alert I’m able to dismiss it, adjust the policy that generated it, or resolve it and add some notes to the resolution.

[Image: awscas12.png]

I also have the option to dig deeper to see some of the patterns of the user’s behavior or the pattern of the behaviors from a specific IP address as seen below.

[Image: awscas13.png]

[Image: awscas14.png]

All this information is great, but what can we do with it?  In this example, it delivers visibility into the administrative activities occurring at the AWS cloud management layer by centralizing the data into a single repository to which I can then send other data such as O365 activity, Box, Salesforce, etc.  By centralizing the information I can begin doing some behavioral analytics to develop standard patterns of behavior of my user base.  Understanding standard behavior patterns is key to being ahead of the bad guys whether they be insiders or outsiders.  I can search for deviations from standard patterns to detect a threat before it becomes too widespread.  I can also be proactive about putting alerts and enforcement (available for other app connectors in CAS but not AWS at this time) in place to stop the behavior before the threat is realized.  If I supplemented this data with log information from my on-premises proxy via Cloud App Discovery, I get an even larger sampling, improving the quality of the data as well as giving me insight into shadow IT.  Pulling those “shadow” cloud solutions into the light allows me to ensure the usage of the services complies with organizational policies and opens up the opportunity of reducing costs by eliminating redundant services.

Microsoft categorizes the capabilities that help realize these benefits as the Discover and Investigate capabilities of CAS. The solution also offers a growing number of enforcement mechanisms (Microsoft categorizes these enforcement mechanisms as Control) which add a whole other layer of power behind the solution.  Due to the limited integration with AWS I can’t demo those capabilities with this post.  I’ll cover those in a future post.

I hope this post helped you better understand the value that CASBs/CSGs like Microsoft’s Cloud App Security can bring to the table.  While the product is still very new and a bit sparse on support for third-party applications, the list is growing every day. I predict the capabilities provided by technology such as Microsoft’s Cloud App Security will be as standard to IT as a firewall in the years to come.  If you’re already in Office 365, you should be integrating these capabilities into your arsenal to understand the value they can bring to your business.

Thanks and have a wonderful week!

Deep Dive into Azure AD Domain Services – Part 3

Well folks, it’s time to wrap up this series on Azure Active Directory Domain Services (AAD DS).  In my first post I covered the basic configurations of the managed domain and in my second post took a look at how well Microsoft did in applying security best practices and complying with NIST standards.  In this post I’m going to briefly cover the LDAPS over the Internet capability, summarize some key findings, and list out some improvements I’d like to see made to the service.

One of the odd features Microsoft provides with the AAD DS service is the ability to expose the managed domain over LDAPS to the Internet.  I really am lost as to the use case that drove the feature.  LDAP is very much a legacy on-premises protocol that has no place being exposed to the risks of the public Internet.  It’s the last thing the industry should be encouraging.  Just because you can, doesn’t mean you should.  Now let me step off the soap box and let’s take a look at the feature.

As I covered in my last post LDAPS is not natively enabled in the managed domain.  The feature must be configured and enabled through the Azure Portal.  The configuration consists of uploading the private key and certificate the service will use in the form of a PKCS12 file (*.PFX).  The certificate has a few requirements that are outlined in the instructions above.  After the certificate is validated, it takes about 10-15 minutes for the service to become available.  Beyond enabling the service within the VNet, you additionally have the option to expose the LDAPS endpoint to the Internet.

[Image: 3aads1.png]

Microsoft provides instructions on how to restrict access to the endpoint to trusted IPs via a network security group (NSG) because yeah, exposing an LDAP endpoint to the Internet is just a tad risky.  To lock it down you simply associate an NSG with the subnet AAD DS is serving.  Once that is done, enable the service via the option in the image above and wait about 10 minutes.  After the service is up, register an external DNS record for the service that points to the IP address noted under the properties section of the AAD DS blade and you’re good to go.

For my testing purposes, I locked the external LDAPS endpoint down to the public IP address my Azure VM was SNATed to.  After that I created an entry in the host file of the VM that matched the external DNS name I gave the service (whyldap.geekintheweeds.com) to the public IP address of the LDAPS endpoint in order to bypass the split-brain DNS challenge.  Initiating a connection from LDP.EXE was a success.

[Image: 3aads2.png]

Now that we know the service is running, let’s check out what the protocol support and cipher suite looks like.

[Image: 3aads3.png]

Again we see the use of deprecated cipher suites. Here the risk is that much greater since a small mistake with an NSG could expose this endpoint directly to the Internet.  If you’re going to use this feature, please just don’t.  If you’re really determined to, don’t screw up your NSGs.
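If you want to spot-check the negotiation yourself rather than rely on an external scanner, a quick TLS handshake from Python will show you what the endpoint settles on (just the single negotiated suite, not the full supported list).  The host name below is the test record from my lab, so swap in your own.

```python
import socket
import ssl

HOST = "whyldap.geekintheweeds.com"  # the external DNS record from my lab
PORT = 636                           # LDAPS

context = ssl.create_default_context()
# The managed domain presents the certificate you uploaded, which may not
# chain to a public CA, so verification is relaxed for this quick check.
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("Protocol:", tls.version())  # e.g. TLSv1.2
        print("Cipher:  ", tls.cipher())   # (name, protocol, secret bits)
```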

This series was probably one of the more enjoyable series I’ve done since I knew very little about the AAD DS offering. There were a few key takeaways that are worth sharing:

  • The more objects in the directory, the more expensive the service.
  • Users and groups can be created directly in the managed domain after a new organizational unit is created.
  • The password and lockout policy is insanely loose to the point where I can create an account with a three character password (it just needs to meet complexity requirements) and accounts never lock out.  The policy cannot be changed.
  • RC4 encryption ciphers are enabled and cannot be disabled.
  • NTLMv1 is enabled and cannot be disabled.
  • The service does not support smart-card enforced users.  Yes, that includes both the users synchronized from Azure AD as well as any users you create directly in the managed domain.  If I had to guess, it’s probably because you’re not a Domain Admin, so you can’t add to the NTAuth certificate store.
  • LDAPS is not enabled by default.
  • Schema extensions are not supported.
  • Account-Based Kerberos Delegation is not supported.
  • If you are syncing identities to Azure AD, you’ll also need to synchronize your passwords.
  • The managed domain is very much “out of the box” defaults.
  • Microsoft creates a “god” account which is a permanent member of every privileged group in the forest.
  • Recovery of deleted objects created directly in the managed domain is not possible.  The rights have not been delegated to the AADC Administrator.
  • The service does not allow for Active Directory trusts.
  • The SIDHistory attribute of users and groups sourced from Azure AD is populated with the Primary Group from the on-premises domain.

My verdict on AAD DS is that it’s not a very useful service in its current state.  Beyond small organizations, organizations that have little to no requirements on legacy infrastructure, organizations that don’t have strong security requirements, and dev/qa purposes, I don’t see much of a use for it right now.  It comes off as a service in its infancy that has a lot of room to grow and mature.  Microsoft has gone a bit too far in the standardization/simplicity direction and needs to shift a bit in the opposite direction by allowing for more customization, especially in regard to security.

I’d really like to see Microsoft introduce the capabilities below.  All of them should be exposed via the resource blade in the Azure Portal if at all possible.  It would provide a singular administration point (which seems to be the strategy given the move of Azure AD and Intune to the Azure Portal) and would allow Microsoft to control how the options are enabled in the managed domain.  This means no more administrators blowing up their Active Directory forest because they accidentally shut off all the supported cipher suites for Kerberos.

  • Expose Domain Controller Event Logs to Azure Portal/Graph API and add support for AAD DS Power BI Dashboards
  • Support for Active Directory trusts
  • Out of the box provide a Red Forest model (get rid of that “god” account)
  • Option to disable risky cipher suites for both Kerberos and LDAPS
  • Option to harden the password and lockout policy
  • Option to disable NTLMv1
  • Option to turn on LDAP Debug Logging
  • Option to direct Domain Controller event logs to a SIEM
  • Option to restore deleted users and groups that were created directly in the managed domain.  If you’re allowing creation, you need to allow for restoration.
  • Removal of the Internet-accessible LDAPS endpoint feature, or at least somehow incorporate the NSG lockdown feature directly into the AAD DS blade.

While the service has a lot of room for improvement, the direction of a managed Windows AD offering is spot on.  In the year 2018, there is no reason Windows AD shouldn’t be offered as a managed service.  The direction Microsoft has gone by sourcing the identities and credentials from Azure AD is especially creative.  It’s a solid step in the direction of creating a singular centralized identity service that provides both legacy and modern protocols.  I’ll be watching this service closely as Microsoft builds upon it over the next few months.

Thanks and see you next post!