Visualizing AWS Logging Data in Azure Monitor – Part 2


Welcome back folks!

In this post I’ll be continuing my series on how Azure Monitor can be used to visualize log data generated by other cloud services.  In my last post I covered the challenges that multicloud brings and what Azure can do to help with them.  I also gave an overview of Azure Monitor and covered the design of the demo I put together, which I’ll be walking through in this post.  Please take a read through that post if you haven’t already.  If you want to follow along, I’ve put the solution up on GitHub.

Let’s quickly review the design of the solution.

[Diagram: solution design]

This solution uses some simple Python code to pull information about the usage of AWS IAM user access key IDs and secret keys from an AWS account.  The code runs as a Lambda, with the Azure Log Analytics Workspace ID and key stored in Lambda environment variables encrypted with an AWS KMS key.  The data is pulled from the AWS API using the Boto3 SDK and transformed into JSON.  It’s then delivered to the HTTP Data Collector API, which places it into the Log Analytics Workspace.  From there, it becomes available for Azure Monitor to query and visualize.

Setting up an Azure environment for this integration is very simple.  You’ll need an active Azure subscription.  If you don’t have one, you can set up a free Azure account to play around with.  Once you’re set with the Azure subscription, you’ll need to create an Azure Log Analytics Workspace.  Instructions for that can be found in this Microsoft article.  After the workspace has been set up, you’ll need to get the workspace ID and key as referenced in the Obtain workspace ID and key section of this Microsoft article.  You’ll use this workspace ID and key to authenticate to the HTTP Data Collector API.

If you have a sandbox AWS account and would like to follow along, I’ve included a CloudFormation template that will set up the AWS environment.  You’ll need an AWS account with sufficient permissions to run the template and provision the resources.  Prior to running the template, you’ll need to zip up lambda_function.py and upload it to an S3 bucket you have permissions on.  When you run the template you’ll be prompted to provide the S3 bucket name, the name of the ZIP file, the Log Analytics Workspace ID and key, and the name you want the API to assign to the log in the workspace.
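
If you’d rather script that packaging step, a quick sketch along these lines works; the bucket name and object key are placeholders you’d swap for your own, and it assumes boto3 can find credentials with write access to the bucket.

import zipfile
import boto3

# Package the Lambda code and upload the archive to the S3 bucket the template will reference
with zipfile.ZipFile('lambda_function.zip', 'w', zipfile.ZIP_DEFLATED) as archive:
    archive.write('lambda_function.py')

s3 = boto3.client('s3')
s3.upload_file('lambda_function.zip', 'my-deployment-bucket', 'lambda_function.zip')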

The Python code backing the solution is pretty simple.  It uses standard Python modules apart from the boto3 and botocore modules used to interact with AWS (the requests module is pulled from botocore’s vendored copy).

import json
import logging
import re
import csv
import boto3
import os
import hmac
import base64
import hashlib

from io import StringIO
from datetime import datetime
from botocore.vendored import requests

The first function in the code parses the ARN (Amazon Resource Name) to extract the AWS account number.  This information is later included in the log data written to Azure.

# Parse the IAM User ARN to extract the AWS account number
def parse_arn(arn_string):
    acct_num = re.findall(r'(?<=:)[0-9]{12}',arn_string)
    return acct_num[0]

The second function uses the strftime method to transform the timestamp returned by the AWS API into a format the Azure Monitor API recognizes as a timestamp, so that the corresponding field for each record in the Log Analytics Workspace is typed as a datetime.

# Convert timestamp to one more compatible with Azure Monitor
def transform_datetime(awsdatetime):
    transf_time = awsdatetime.strftime("%Y-%m-%dT%H:%M:%S")
    return transf_time

The next function queries the AWS API for a listing of the AWS IAM users set up in the account and creates a dictionary object representing data about each user. Each object is appended to a list, with one entry per user.

# Query for a list of AWS IAM Users
def query_iam_users():
    
    todaydate = (datetime.now()).strftime("%Y-%m-%d")
    users = []
    client = boto3.client(
        'iam'
    )

    paginator = client.get_paginator('list_users')
    response_iterator = paginator.paginate()
    for page in response_iterator:
        for user in page['Users']:
            user_rec = {'loggedDate':todaydate,'username':user['UserName'],'account_number':(parse_arn(user['Arn']))}
            users.append(user_rec)
    return users

The query_access_keys function queries the AWS API for a listing of the access keys that have been provisioned for an AWS IAM user, along with the status of those keys and some metrics around their usage.  The resulting data is added to a dictionary object, and the object is appended to a list.  Each item in the list represents a record for an AWS access key ID.

# Query for a list of access keys and information on access keys for an AWS IAM User
def query_access_keys(user):
    keys = []
    client = boto3.client(
        'iam'
    )
    paginator = client.get_paginator('list_access_keys')
    response_iterator = paginator.paginate(
        UserName = user['username']
    )

    # Get information on access key usage
    for page in response_iterator:
        for key in page['AccessKeyMetadata']:
            response = client.get_access_key_last_used(
                AccessKeyId = key['AccessKeyId']
            )
            # Sanitize the key before sending it along for export
            sanitizedacctkey = key['AccessKeyId'][:4] + '...' + key['AccessKeyId'][-4:]

            # Create a new dictionary object with access key information
            if 'LastUsedDate' in response.get('AccessKeyLastUsed'):
                key_rec = {'loggedDate':user['loggedDate'],'user':user['username'],'account_number':user['account_number'],
                'AccessKeyId':sanitizedacctkey,'CreateDate':(transform_datetime(key['CreateDate'])),
                'LastUsedDate':(transform_datetime(response['AccessKeyLastUsed']['LastUsedDate'])),
                'Region':response['AccessKeyLastUsed']['Region'],'Status':key['Status'],
                'ServiceName':response['AccessKeyLastUsed']['ServiceName']}
                keys.append(key_rec)
            else:
                key_rec = {'loggedDate':user['loggedDate'],'user':user['username'],'account_number':user['account_number'],
                'AccessKeyId':sanitizedacctkey,'CreateDate':(transform_datetime(key['CreateDate'])),'Status':key['Status']}
                keys.append(key_rec)
    return keys

The next two functions contain the code that creates and submits the request to the Azure Monitor API.  The product team was awesome enough to provide some sample code in the public documentation for this part.  The code is intended for Python 2 but only required a few small changes to make it compatible with Python 3.

Let’s first talk about the build_signature function.  At this time the API authenticates requests with HTTP request signing using the Log Analytics Workspace ID and key.  In short, this means you have two sets of shared keys per workspace, so treat the workspace as your authorization boundary and prioritize proper key management (i.e., use a different workspace for each workload, track key usage, and rotate keys as your internal policies require).

Breaking down the code below, the string to be signed includes the HTTP method, the length of the request content, a custom x-ms-date header, and the REST resource endpoint.  The string is converted to a bytes object, an HMAC is computed over it with SHA256 using the base64-decoded workspace key, and the digest is then base64 encoded.  The result is the authorization header value returned by the function.

def build_signature(customer_id, shared_key, date, content_length, method, content_type, resource):
    x_headers = 'x-ms-date:' + date
    string_to_hash = method + "\n" + str(content_length) + "\n" + content_type + "\n" + x_headers + "\n" + resource
    bytes_to_hash = bytes(string_to_hash, encoding="utf-8")  
    decoded_key = base64.b64decode(shared_key)
    encoded_hash = base64.b64encode(
        hmac.new(decoded_key, bytes_to_hash, digestmod=hashlib.sha256).digest()).decode()
    authorization = "SharedKey {}:{}".format(customer_id,encoded_hash)
    return authorization

Not much needs to be said about the post_data function beyond that it uses the Python requests module to post the log content to the API.  Take note of the limits on the data that can be included in the body of the request.  The key takeaway is that if you plan on pushing a lot of data to the API, you’ll need to chunk your data to fit within those limits; I’ve included a rough sketch of one way to do that after the function below.

def post_data(customer_id, shared_key, body, log_type):
    method = 'POST'
    content_type = 'application/json'
    resource = '/api/logs'
    rfc1123date = datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S GMT')
    content_length = len(body)
    signature = build_signature(customer_id, shared_key, rfc1123date, content_length, method, content_type, resource)
    uri = 'https://' + customer_id + '.ods.opinsights.azure.com' + resource + '?api-version=2016-04-01'

    headers = {
        'content-type': content_type,
        'Authorization': signature,
        'Log-Type': log_type,
        'x-ms-date': rfc1123date
    }

    response = requests.post(uri,data=body, headers=headers)
    if (response.status_code >= 200 and response.status_code <= 299):
        print("Accepted")
    else:
        print("Response code: {}".format(response.status_code))

Last but not least we have the lambda_handler function which brings everything together. It first gets a listing of users, loops through each user to gather information about access key usage, creates a log record for each key, converts the list of records to a JSON string, and writes it to the API. If the content is successfully delivered, the log for the Lambda will note that it was accepted.

def lambda_handler(event, context):

    # Enable logging to console
    logging.basicConfig(level=logging.INFO,format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')

    try:

        # Initialize an empty list to hold the access key records
        key_records = []
        
        # Retrieve list of IAM Users
        logging.info("Retrieving a list of IAM Users...")
        users = query_iam_users()

        # Retrieve list of access keys for each IAM User and add to record
        logging.info("Retrieving a listing of access keys for each IAM User...")
        for user in users:
            key_records.extend(query_access_keys(user))
        # Prepare data for sending to Azure Monitor HTTP Data Collector API
        body = json.dumps(key_records)
        post_data(os.environ['WorkspaceId'], os.environ['WorkspaceKey'], body, os.environ['LogName'])

    except Exception as e:
        logging.error("Execution error",exc_info=True)

Once the data is delivered, it will take a few minutes for it to be processed and appear in the Log Analytics Workspace. In my tests it only took around 2-5 minutes, but I wasn’t writing much data to the API.  After the data is processed you’ll see a new entry under the listing of Custom Logs in the Log Analytics Workspace.  The entry will be the log name you picked with _CL appended to the end.  Expanding the entry will display the columns that were created based upon the log entries.  Note that the columns created from the data you passed will end with an underscore and a character denoting the data type.

[Screenshot: the new custom log entry in the Log Analytics Workspace]

Now that the data is in the workspace, I can start querying it and creating some visualizations.  Azure Monitor uses the Kusto Query Language (KQL).  If you’ve ever created queries in Splunk, the language will feel familiar.

The log I created in AWS and pushed to the API has the following schema.  Note the addition of the underscore followed by a character denoting the column data type.

  • logged_Date (string) – The date the Lambda ran
  • user_s (string) – The AWS IAM User the key belongs to
  • account_number_s (string) – The AWS account number the IAM user belongs to
  • AccessKeyId_s (string) – The ID of the access key associated with the user, sanitized to show just the first 4 and last 4 characters
  • CreateDate_t (timestamp) – The date and time when the access key was created
  • LastUsedDate_t (timestamp) – The date and time the key was last used
  • Region_s (string) – The region where the access key was last used
  • Status_s (string) – Whether the key is enabled or disabled
  • ServiceName_s (string) – The AWS service where the access key was last used

In addition to what I’ve pushed, Azure Monitor adds a TimeGenerated field to each record which is the time the log entry was sent to Azure Monitor.  You can override this behavior and provide a field for Azure Monitor to use instead if you like (see here); I’ve included a sketch of that below.  There are also some miscellaneous fields inherited from whatever schema the API draws from, such as TenantId and SourceSystem, which in this case is populated with RestAPI.
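
For the override, the Data Collector API accepts an optional time-generated-field header naming a field in your payload whose value should populate TimeGenerated.  Below is a sketch of a variant of the post_data function from earlier in the post that passes the header; CreateDate is just one of the fields this solution already sends, and the header name should be confirmed against the current documentation.

def post_data_with_time_field(customer_id, shared_key, body, log_type, time_field='CreateDate'):
    method = 'POST'
    content_type = 'application/json'
    resource = '/api/logs'
    rfc1123date = datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S GMT')
    content_length = len(body)
    signature = build_signature(customer_id, shared_key, rfc1123date, content_length, method, content_type, resource)
    uri = 'https://' + customer_id + '.ods.opinsights.azure.com' + resource + '?api-version=2016-04-01'

    headers = {
        'content-type': content_type,
        'Authorization': signature,
        'Log-Type': log_type,
        'x-ms-date': rfc1123date,
        # Name of a field in the JSON payload whose value should become TimeGenerated
        'time-generated-field': time_field
    }

    return requests.post(uri, data=body, headers=headers)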

Since my personal AWS environment is quite small and the usage of its AWS IAM users is very limited, my data sets aren’t huge.  To address this I created a number of IAM users with access keys for the purposes of this blog.  I’m getting that out of the way so my AWS friends don’t hate on me. 🙂

One of the core best practices in key management with shared keys is to ensure you rotate them.  The first data point I wanted to extract was which keys in my AWS account were over 90 days old.  To do that I put together the following query:

AWS_Access_Key_Report_CL
| extend key_age = datetime_diff('day',now(),CreateDate_t)
| project Age=key_age,AccessKey=AccessKeyId_s, User=user_s
| where Age > 90
| sort by Age

Let’s walk through the query.  The first line tells the query engine to run this query against the AWS_Access_Key_Report_CL log.  The next line creates a new field that contains the age of the key by determining the amount of time that has passed between the creation date of the key and today’s date.  The line after that instructs the engine to pull back only the key_age field I just created (renamed Age) along with the AccessKeyId_s and user_s fields.  The results are then culled down to records where the key age is greater than 90 days, and finally the results are sorted by the age of the key.

[Screenshot: query results listing access keys older than 90 days]

Looks like it’s time to rotate that access key in use by Azure AD. 🙂

I can then pin this query to a new shared dashboard for other users to consume.  Cool and easy right?  How about we create something visual?

Looking at the trends in access key creation can provide some valuable insights into what is the norm and what is not.  Let’s take a look at the metrics for key creation (for the keys that still exist in an enabled or disabled state).  For that I’m going to use the following query:

AWS_Access_Key_Report_CL
| make-series AccessKeys=count() default=0 on CreateDate_t from datetime(2019-01-01) to datetime(2020-01-01) step 1d

In this query I’m using the make-series operator to count the number of access keys created each day and assigning a default value of 0 if there are no keys created on that date.  The result of the query isn’t very useful when looking at it in tabular form.

[Screenshot: make-series query results in tabular form]

By selecting the Line drop down box, I can transform the data into a line graph which shows me spikes in key creation.  If this was real data, investigation into the spike of key creations on 6/30 may be warranted.

[Screenshot: line chart of access key creation over time]

I put together a few other visuals and tables and created a custom dashboard like the one below.  Creating the dashboard took about an hour or so, with much of the time invested in figuring out the query language.

[Screenshot: custom dashboard]

What you’ve seen here is a demonstration of the power and simplicity of Azure Monitor.  By adding a simple-to-use API, Microsoft has exponentially increased the agility of the tool by allowing it to become a single pane of glass for monitoring across clouds.   It’s also worth noting that Microsoft’s BI (business intelligence) tool Power BI has direct integration with Azure Log Analytics.  This allows you to pull that log data into Power BI to perform more in-depth analysis and create even richer visualizations.

Well folks, I hope you’ve found this series of value.  I really enjoyed creating it and already have a few additional use cases in mind.  Make sure to follow me on GitHub as I’ll be posting all of the code and solutions I put together there for your general consumption.

Have a great day!


Visualizing AWS Logging Data in Azure Monitor – Part 1


Hi folks!

2019 is more than halfway over and it feels like it has happened in a flash.  It’s been an awesome year with tons of change and even more learning.  I started the year neck deep in AWS and began transitioning into Azure back in April when I joined on with Microsoft.  Having the opportunity to explore both clouds and learn the capabilities of each offering has been an amazing experience that I’m incredibly thankful for.  As I’ve tried to do for the past 8 years, I’m going to share some of those learnings with you.  Today we’re going to explore one of the capabilities that differentiates Azure from its competition.

One of the key takeaways I’ve had from my experiences with AWS and Microsoft is that enterprises have become multicloud.  Workloads are quickly being spread out among public and private clouds.  While the business benefits greatly from a multicloud approach, where workloads can go to the environment whose cost, risk, and timetables best suit them, it presents a major challenge to the technical orchestration behind the scenes.  With different APIs (application programmatic interfaces), varying levels of compliance, great and not-so-great capabilities around monitoring and alerting, and a major industry gap in multicloud skill sets, it can become quite a headache to successfully execute this approach.

One area where Microsoft Azure differentiates itself is its ability to ease the challenge of monitoring and alerting in a multicloud environment.  Azure Monitor is one of the key products behind this capability.  With this post I’m going to demonstrate Azure Monitor’s capabilities in this realm by walking you through a pattern of delivering, visualizing, and analyzing log data collected from AWS.  The pattern I’ll be demonstrating is reusable for most any cloud (and potentially on-premises) offering.  Now sit back, put your geek hat on, and let’s dive in.

First I want to briefly talk about what Azure Monitor is.  Azure Monitor is a solution which brings together a collection of tools that can be used to collect and analyze the large abundance of telemetry available today.  This telemetry could be metrics in regards to a virtual machine’s performance or audit logs for Azure Active Directory.  The product team has put together the excellent diagram below which explains the architecture of the solution.

As you can see from the inputs on the left, Azure Monitor is capable of collecting and analyzing data from a variety of sources.  You’ll find plenty of documentation that the product team has made publicly available on the five gray items, so I’m going to instead focus on custom sources.

For those of you who have been playing in the AWS pool, you can think of Azure Monitor as something similar to (but much more robust than) CloudWatch Metrics and CloudWatch Logs.  I know, I know, you’re thinking I’ve drunk the Microsoft Kool-Aid.

[Image: Kool-Aid]

While I do love to reminisce about cold glasses of Kool-Aid on hot summers in the 1980s, I’ll opt to instead demonstrate it in action and let you decide for yourself.  To do this I’ll be leveraging the new API Microsoft introduced.  The Azure Monitor HTTP Data Collector API was introduced a few months back and provides the capability of delivering log data to Azure where it can be analyzed by Azure Monitor.

With Azure Monitor, logs are stored in an Azure resource called a Log Analytics Workspace.  For you AWS folk, you can think of a Log Analytics Workspace as something similar to a CloudWatch Log Group: a logical boundary in which the data shares retention settings and an authorization boundary.  Logs are sent to the API in JSON format and are placed in the Log Analytics Workspace you specify.  A high level diagram of the flow can be seen below.

So now that you have a high level understanding of what Azure Monitor is, what it can do, and how the new API works, let’s talk about the demonstration.

If you’ve used AWS you’re very familiar with the capabilities of CloudWatch Metrics Dashboards and the basic query language available to analyze CloudWatch Logs.  To perform more complex queries and to create deeper visualizations, third-party solutions are often used, such as ElasticSearch and Kibana.  While these solutions work, they can be complex to implement and can create more operational overhead.

When a peer informed me about the new API a few weeks back, I was excited to try it out.  I had just started to use Azure Monitor to put together some dashboards for my personal Office 365 and Azure subscriptions and was loving the power and simplicity of the analytics component of the solution.  The new API opened up some neat opportunities to pipe logging data from AWS into Azure to create a single dashboard I could reference for both clouds.  This became my use case and demonstration of the pattern of delivering logs from a third party to Azure Monitor with some simple Python code.

The logs I chose to deliver to the API contain information surrounding the usage of AWS access key IDs and secret keys.  I had previously put together some code to pull this data and write it to an S3 bucket.

Let’s take a look at the design of the solution.  I had a few goals I wanted to make sure to hit if possible.  My first goal was to keep the code simple.  That meant limiting the usage of third-party modules and avoiding overcomplicating the implementation.

My second goal was to limit the usage of static credentials.  If I ran the code in Azure, I’d need to set up an AWS IAM user and provision an access key ID and secret key.  While I’m aware of the workaround to use SAML authentication, I’m not a fan because, in my personal opinion, it’s using SAML in a way that amounts to hammering a square peg into a round hole.  Sure you can do it, but you really shouldn’t unless you’re out of options.  Additionally, the solution requires some fairly sensitive permissions in AWS such as iam:ListAccessKeys, so the risk of the credentials being compromised could be significant.  Given the risks and constraints of authentication methods to the AWS API, I opted to run my code as a Lambda and follow AWS best practices by assigning the Lambda an IAM role.

On the Azure side, the Azure Monitor API for log delivery requires authentication using the Workspace ID and Workspace key.   Ideally these would be encrypted and stored in AWS Secrets Manager or as a secure parameter in Parameter Store, but I decided to go the easy route and store them as environment variables for the Lambda and to encrypt them with AWS KMS.  This cut back on the code and made the CloudFormation templates easier to put together.
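
One note on that choice: with the default at-rest encryption, Lambda hands the function the decrypted values, which is why the handler shown in Part 2 reads os.environ directly.  If you instead enable the console’s encryption-in-transit helpers, the function has to call KMS itself.  A rough sketch of that path is below; the encryption context and function name are assumptions based on how the console helpers typically encrypt, so verify them against your own configuration.

import base64
import os
import boto3

kms = boto3.client('kms')

def get_decrypted_env(name, function_name):
    # With the console's encryption helpers, the variable arrives as base64-encoded KMS ciphertext
    ciphertext = base64.b64decode(os.environ[name])
    response = kms.decrypt(
        CiphertextBlob=ciphertext,
        EncryptionContext={'LambdaFunctionName': function_name}  # assumption: matches the helper's context
    )
    return response['Plaintext'].decode('utf-8')

workspace_id = get_decrypted_env('WorkspaceId', 'aws-key-report')   # hypothetical function name
workspace_key = get_decrypted_env('WorkspaceKey', 'aws-key-report')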

With those decisions made, the resulting design is pictured below.

[Diagram: solution design]

I’m going to end the post here and save the dive into implementation and code for the next post.  In the meantime, take a read through the Azure Monitor documentation and familiarize yourself with the basics.  I’ve also put the whole solution up on GitHub if you’d like to follow along with the next post.

See you next post!


Capturing and Visualizing Office 365 Security Logs – Part 1

Welcome back again my fellow geeks!

I’ve been busy over the past month nerding out on some pet projects.  I thought it would be fun to share one of those pet projects with you.  If you had a chance to check out my last series, I walked through my first Python experiment which was to write a re-usable tool that could be used to pull data from Microsoft’s Graph API (Microsoft Graph).

For those of you unfamiliar with Microsoft Graph, it’s the RESTful API (application programming interface) that is used to interact with Microsoft cloud offerings such as Office 365 and Azure.  You’ve probably been interacting with it without even knowing it through the many PowerShell modules Microsoft has released to programmatically interact with those services.

One of the many resources which can be accessed through Microsoft Graph are Azure AD (Active Directory) security and audit reports.  If you’re using Office 365, Microsoft Azure, or simply Azure AD as an identity platform for SSO (single sign-on) to third-party applications like SalesForce, these reports provide critical security data.  You’re going to want to capture them, store them, and analyze them.  You’re also going to have to account for the window that Microsoft makes these logs available.

The challenge is they are not available via the means logs have traditionally been captured on-premises by using syslogd, installing an SIEM agent, or even Windows Event Log Forwarding.  Instead you’ll need to take a step forward in evolving the way you’re used to doing things. This is what moving to the cloud is all about.

Microsoft allows you to download the logs manually via the Azure Portal GUI (graphical user interface) or capture them by programmatically interacting with Microsoft Graph.  While the former option may work for ad-hoc use cases, it doesn’t scale.  Instead we’ll explore the latter method.

If you have an existing enterprise-class SIEM (Security Information and Event Management) solution such as Splunk, you’ll have an out-of-the-box integration.  However, what if you don’t have such a platform, your organization isn’t yet ready to let that platform reach out over the Internet, or you’re interested in doing this for a personal Office 365 subscription?  I fell into the last category and decided it would be an excellent use case to get some experience with Python and Microsoft Graph and to take advantage of some of the data services offered by AWS (Amazon Web Services).   This is the use case and solution I’m going to cover in this post.

Last year I had a great opportunity to dig into operational and security logs to extract useful data to address some business problems.  It was my first real opportunity to examine large amounts of data and to create different visualizations of that data to extract useful trends about user and application behavior.  I enjoyed the hell out of it and thought it would be fun to experiment with my own data.

I decided that my first use case would be Office 365 security logs.  As I covered in my last series my wife’s Office 365 account was hacked.  The damage was minor as she doesn’t use the account for much beyond some crafting sites (she’s a master crocheter as you can see from the crazy awesome Pennywise The Clown she made me for Christmas).

[Photo: crocheted Pennywise the Clown]

The first step in the process was determining an architecture for the solution.  I gave myself a few requirements:

  1. The solution must not be dependent on my home lab infrastructure
  2. Storage for the logs must be cheap and readily available
  3. The credentials used in my Python code need to be properly secured
  4. The solution must be automated and notify me of failures
  5. The data needs to be available in a form that it can be examined with an analytics solution

Based upon the requirements I decided to go the serverless (don’t hate me for using that tech buzzword 🙂 ) route.  My decisions were:

  • AWS Lambda would run my code
  • Amazon CloudWatch Events would be used to trigger the Lambda once a day to download the last 24 hours of logs
  • Amazon S3 (Simple Storage Service) would store the logs
  • AWS Systems Manager Parameter Store would store the parameters my code used leveraging AWS KMS (Key Management Service) to encrypt the credentials used to interact with Microsoft Graph
  • Amazon Athena would hold the schema for the logs and make the data queryable via SQL
  • Amazon QuickSight would be used to visualize the data by querying Amazon Athena

The high level architecture is pictured below.

[Diagram: high-level architecture]

I had never done a Lambda before so I spent a few days looking at some examples and doing the typical Hello World that we all do when we’re learning something new.  From there I took the framework of Python code I put together for general purpose queries to the Microsoft Graph, and adapted it into two Lambdas.  One Lambda would pull Sign-In logs while the other would pull Audit Logs.  I also wanted a repeatable way to provision the Lambdas to share with others and get some CloudFormation practice and brush up on my very dusty Bash scripting.   The results are located here in one of my GitHub repos.
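
Since the parameters live in Parameter Store encrypted with KMS, each Lambda starts by pulling them down.  A minimal sketch of that retrieval is below; the parameter name is hypothetical, and the call assumes the Lambda’s IAM role is allowed ssm:GetParameter plus decrypt rights on the KMS key.

import boto3

ssm = boto3.client('ssm')

def get_secure_parameter(name):
    # SecureString parameters are decrypted by Parameter Store when WithDecryption is set
    response = ssm.get_parameter(Name=name, WithDecryption=True)
    return response['Parameter']['Value']

client_secret = get_secure_parameter('/o365-logs/graph-client-secret')  # hypothetical parameter name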

I’m going to stop here for this post because we’ve covered a fair amount of material.  Hopefully after reading this post you understand that you have to take a new tack in getting logs for cloud-based services such as Azure AD.  Thankfully the cloud has brought us a whole new toolset we can use to automate the extraction and storage of those logs in a simple and secure manner.

In my next post I’ll walk through how I used Athena and QuickSight to put together some neat dashboards to satisfy my nerdy interests and get better insight into what’s happening on a daily basis with my Office 365 subscription.

See you next post and go Pats!

A Comparison – AWS Managed Microsoft AD and Azure Active Directory Domain Services


Over the past year I’ve done deep dives into both Amazon’s AWS Managed Microsoft Active Directory and Microsoft’s Azure Active Directory Domain Services.  These services represent each vendor’s offering of a managed Windows Active Directory (AD) service.  I extensively covered the benefits of a managed service over the course of those posts, so today I’m going to cover the key features of each service.  I’m also going to include two tables.  One table will outline the differences in general features while the other outlines the differences in security-related features.

Let’s hit on the key points first.

  • Amazon provides a legacy (Windows AD is legacy folks) managed service while Microsoft provides a modernized service (Azure AD) which has been integrated with a legacy service.
  • Microsoft synchronizes users, password hashes, and groups from Azure AD to a managed instance of Windows Active Directory.  The reliance on this synchronization means the customer has to be comfortable synchronizing both directory data and password hashes to Azure AD.  Amazon does not require any data to be synchronized.
  • Amazon provides the capability to use the identities in the managed instance of Windows AD, or in a forest that has a trust with the managed instance, to manage AWS resources.  In this instance Amazon is taking a legacy service and enabling it for management of the modern cloud management plane.
  • The pricing model for the services differs: Amazon bills on a per domain controller basis while Microsoft bills based on the number of objects in the directory.
  • Amazon’s service is eligible to be used in solutions that require PCI DSS Level 1 or HIPAA.
  • Both services use a delegated model where the customer has full control over an OU rather than the directory itself.  Highly privileged roles such as Schema Admins, Enterprise Admins, and Domain Admins are retained by the cloud provider.
  • Both services provide LDAP for legacy applications customers may be trying to lift and shift.  Microsoft limits LDAP to read operations while Amazon supports both read and write operations.
  • Both services support LDAPS.  At this time Amazon requires an instance of Active Directory Certificate Services be deployed to act as a Certificate Authority and provide certificates to the managed domain controllers.
  • Both services do not allow the customer to modify the Default Domain Policy or Default Domain Controller Policies.  This means the customer cannot modify the password or lockout policy applied to the domain.  Amazon provides a method of enforcing custom password and lockout policies through Fine Grained Password Policies.  Additionally, the customer does not have the ability to harden the OS of the domain controllers for either service so it is important to review the vendor documentation.
  • Amazon’s service supports Active Directory forest trusts and external trusts.  Microsoft’s service doesn’t support trusts at this time.

Here is a table showing the comparison of general features:

Features | AWS Managed Microsoft AD | Azure Active Directory Domain Services
Cost Basis | Number of Domain Controllers | Number of Directory Objects
Schema Extensions | Yes, with limitations | No
Trusts | Yes, with limitations | No
Domain Controller Log Access | Security and DNS Server Event Logs | No
DNS Management | Yes | Yes
Snapshots | Yes | No
Limit of Managed Forests | 10 per account | 1 per Azure AD tenant
Supports Being Used On-Premises | Yes, with Direct Connect or VPN | No, within VNet only
Scaled by Customer | Yes | No
Max Number of Domain Controllers | 20 per directory | Unknown how service is scaled

Here is a table of security capabilities:

Features | AWS Managed Microsoft AD | Azure Active Directory Domain Services
Requires Directory Synchronization | No | Yes, including password hashes
Fine-Grained Password Policies | Yes, limited to seven | No
Smart Card Authentication | Not native, requires RADIUS | No
LDAPS | Yes, with special requirements | Yes, but LDAP operations are limited to read
LDAPS Protocols | SSLv3, TLS 1.0, TLS 1.2 | TLS 1.0, TLS 1.2
LDAPS Cipher Suites | RC4, 3DES, AES128, AES256 | RC4, 3DES, AES128, AES256
Kerberos Delegation | Account-Based and Resource-Based | Resource-Based
Kerberos Encryption | RC4, AES128, AES256 | RC4, AES128, AES256
NTLM Support | NTLMv1, NTLMv2 | NTLMv1, NTLMv2

Well folks that sums it up.  As you can see from both of the series as well as this summary post both vendors have taken very different approaches in providing the service.  It will be interesting to see how these offerings evolve over the next few years.  As much as we’d love to see Windows Active Directory go away, it will still be here for years to come.

Until next time my fellow geeks!

AWS Managed Microsoft AD Deep Dive Part 7 – Trusts and Domain Controller Event Logs


Welcome back fellow geek.  Today I’m continuing my deep dive series into AWS Managed Microsoft AD.  This will represent the seventh post in the series and I’ve covered some great content over the series including:

  1. An overview of the service
  2. How to setup the service
  3. The directory structure, pre-configured security principals, group policies and the delegated security model
  4. How to configure LDAPS and the requirements that pop up due to Amazon’s delegation model
  5. Security of the service including supported secure transport protocols, ciphers, and authentication protocols
  6. How do schema extensions work and what are the limitations

Today I’m going to cover three additional capabilities of AWS Managed Microsoft AD: the creation of trusts, access to the Domain Controller event logs, and scalability.

I’ll first cover the capabilities around Active Directory trusts.  Providing this capability opens up a number of scenarios that aren’t possible in managed Windows Active Directory (Windows AD) services that don’t support trusts, such as Microsoft’s Azure Active Directory Domain Services.  Some of the scenarios that pop into my head are resource forests, trusts with trusted partners to maintain collaboration for legacy applications (applications dependent on legacy protocols such as Kerberos/NTLM/LDAP), trusts between development, QA, and production forests, and the usage of features such as selective authentication to mitigate the risk to on-premises infrastructure.

For many organizations, modernization of an entire application catalog isn’t feasible, but those organizations still want to take advantage of the cost and security benefits of cloud services.  This is where AWS Managed Microsoft AD can really shine.  Its capability to support Active Directory forest trusts opens up the opportunity for those organizations to extend their identity boundary to the cloud while supporting legacy infrastructure.  Existing on-premises core infrastructure services such as PKI and SIEM can continue to be used and even extended to monitor the infrastructure using the managed Windows AD.

As you can see this is an extremely powerful capability and makes the service a good fit for almost every Windows AD scenario.  So that’s all well and good, but if you wanted marketing material you’d be reading the official documentation, right?  You came here for the deep dive, so let’s get into it.

The first thing that popped into my mind was the question of how Amazon would provide this capability in a managed service model.  Creating a forest trust typically requires membership in privileged groups such as Enterprise Admins and Domain Admins, which obviously isn’t possible in a managed service.  I’m sure it’s possible to delegate the creation of Active Directory trusts and DNS conditional forwarders with modifications of directory permissions and possibly user rights, but there’s a better way.  What is this better way you may be asking yourself?  Perhaps serving it up via the Directory Services console in the same way schema modifications are served up?

Let’s walk through the process of setting up an Active Directory forest trust between a customer-managed traditional implementation of Windows Active Directory and an instance of AWS Managed Microsoft AD.  For this I’ll be leveraging my home Hyper-V lab.  I’m actually in the process of rebuilding it so there isn’t much there right now.  The home lab consists of two virtual machines: one named JOG-DC, which runs Windows Server 2016 and functions as a domain controller (AD DS) and certificate authority (AD CS) for the journeyofthegeek.com Active Directory forest, and another named JOG-CLIENT, which runs Windows 10 and is joined to the journeyofthegeek.com domain.  I’ve connected my VPC with my home lab using AWS’s Managed VPN to set up a site-to-site IPSec VPN connection with my local pfSense box.

[Diagram: lab topology with the VPN connection between the VPC and the home lab]

Prior to setting up the trusts there are a few preparatory steps that need to be completed.  The steps will be familiar to those of you who have established forest trusts across firewalled network segments.  At a high level, you’ll want to perform the following tasks:

  1. Ensure the appropriate ports are opened between the two forests.
  2. Ensure DNS resolution between the two forests is established

For the first step I played it lazy since this is a temporary configuration (please don’t do this in production).   I allowed all traffic from the VPC address range to my lab environment by modifying the firewall rules on my pfSense box.  On the AWS side I needed to adjust the traffic rules for the security group SERVER01 is in as well as the security group for the managed domain controllers.

[Screenshot: security group rule changes]

To establish DNS resolution between the two forests I’ll be using conditional forwarders set up within each forest.  Setting the conditional forwarders up in the journeyofthegeek.com forest means I have to locate the IP addresses of the managed domain controllers in AWS.  There are a few ways you could do it, but I went to the AWS Directory Services Console and selected the geekintheweeds.com directory.

[Screenshot: AWS Directory Services Console with the geekintheweeds.com directory selected]

In the Directory details section of the console, the DNS address field lists the IP addresses the domain controllers are using.

[Screenshot: Directory details showing the DNS addresses of the managed domain controllers]

After creating the conditional forwarder in the DNS Management MMC in the journeyofthegeek.com forest, DNS resolution of a domain controller from geekintheweeds.com was successful.

[Screenshot: successful DNS resolution of a geekintheweeds.com domain controller]

I next created the trust in the journeyofthegeek.com domain using the Active Directory Domains and Trusts MMC, making sure to select the option to create the trust in this domain only and recording the trust password.  We can’t create the trust in both domains since we don’t have an account with the appropriate privileges in the AWS managed domain.

Next up I bounced back over to the Directory Services console and selected the geekintheweeds.com directory.  From there I selected the Network & security tab to open the menu needed to create the trust.

[Screenshot: Network & security tab in the Directory Services console]

From here I clicked the Add trust relationship button which brings up the Add a trust relationship menu.  Here I filled in the name of the domain I want to establish the trust with, entered the trust password I set up in the journeyofthegeek.com domain, selected a two-way trust, and added an IP address to be used in the conditional forwarder the managed service sets up.

[Screenshot: Add a trust relationship menu]

After clicking the Add button the status of the trust is updated to Creating.

[Screenshot: trust status showing Creating]

The process takes a few minutes after which the status reports as verified.

[Screenshot: trust status showing Verified]

Opening up the Active Directory Users and Computers (ADUC) MMC in the journeyofthegeek.com domain and selecting the geekintheweeds.com domain successfully displays the directory structure.  Trying the opposite in the geekintheweeds.com domain works correctly as well.  So our two-way trust has been created successfully.  We would now have the ability to setup any of the scenarios I talked about earlier in the post including a resource forest or leveraging the managed domain as a primary Windows AD service for on-premises infrastructure.
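
As an aside, the AWS side of the trust doesn’t have to be created through the console; the Directory Service API exposes the same operation, which boto3 surfaces as create_trust.  A sketch is below with placeholder values for the directory ID, trust password, and forwarder IP; check the parameters against the current boto3 documentation before relying on it.

import boto3

ds = boto3.client('ds')

# Create the AWS side of the forest trust, mirroring the console's Add trust relationship menu
response = ds.create_trust(
    DirectoryId='d-xxxxxxxxxx',                      # placeholder directory ID
    RemoteDomainName='journeyofthegeek.com',
    TrustPassword='the-trust-password',              # must match the password recorded on the other side
    TrustDirection='Two-Way',
    TrustType='Forest',
    ConditionalForwarderIpAddrs=['192.168.100.10']   # placeholder on-premises DNS server
)
print(response['TrustId'])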

The second capability I want to briefly touch on is the ability to view the Security Event Log and DNS Server logs on the managed domain controllers.  Unlike Microsoft’s managed Windows AD service, Amazon provides ongoing access to the Security Event Log and DNS Server Log.  The logs can be viewed using the Event Log MMC from a domain-joined machine or programmatically with PowerShell.  The group policy assigned to the Domain Controllers OU enforces a maximum event log size of 256MB but Amazon also archives a year’s worth of logs which can be requested in the event of an incident.  The lack of this capability was a big sore spot for me when I looked at Azure Active Directory Domain Services.  It’s great to see Amazon has identified this critical use case.
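
Since this series leans on Python, here’s a rough sketch of reading that Security Event Log programmatically with the pywin32 package from a domain-joined Windows machine; the server name is a placeholder and the account running it needs the event log read rights Amazon delegates.

import win32evtlog  # provided by the pywin32 package

server = 'dc1.geekintheweeds.com'   # placeholder name of a managed domain controller
log_handle = win32evtlog.OpenEventLog(server, 'Security')
flags = win32evtlog.EVENTLOG_BACKWARDS_READ | win32evtlog.EVENTLOG_SEQUENTIAL_READ

# Read the most recent batch of events and print a few basic fields
for event in win32evtlog.ReadEventLog(log_handle, flags, 0):
    print(event.TimeGenerated, event.EventID, event.SourceName)

win32evtlog.CloseEventLog(log_handle)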

Last but definitely not least, let’s quickly cover the scalability of the service.  Follow Microsoft best practices and you can take full advantage of scaling horizontally with the click of a single button.  Be aware that the service only scales horizontally and not vertically.  If you have applications that don’t follow best practices and point to specific domain controllers or perform extremely inefficient LDAP queries (yes I’m talking to you developers who perform searches using front and rear-facing wildcards and use LDAP_MATCHING_RULE_IN_CHAIN filters) horizontal scaling isn’t going to help you.

Well folks that rounds out this entry into the series.  As we saw in the post Amazon has added key capabilities that Microsoft’s managed service is missing right now.  This makes AWS Managed Microsoft AD the more versatile of the two services and more than likely a better fit in almost any scenario where there is a reliance on Windows AD.

In my final posts of the series I’ll provide a comparison chart showing the differing capabilities of both AWS and Microsoft’s services.

See you next post!


AWS Managed Microsoft AD Deep Dive Part 6 – Schema Modifications


Yes folks, we’re at the sixth post in the series on AWS Managed Microsoft AD (AWS Managed AD).  I’ve covered a lot of material over the series including an overview, how to set up the service, the directory structure, pre-configured security principals, group policies, and the delegated security model, how to configure LDAPS in the service and the implications of Amazon’s design, and just a few days ago a look at the configuration of the security of the service in regards to protocols and cipher suites.  As per usual, I’d highly suggest you take a read through the prior posts in the series before starting on this one.

Today I’m going to look at the capabilities within AWS Managed AD for handling Active Directory schema modifications.  If you’ve read my series on Microsoft’s Azure Active Directory Domain Services (AAD DS) you know that the service doesn’t support schema modifications.  This makes Amazon’s service the better offering in an environment where schema modifications to the standard Windows AD schema are a requirement.  However, like many capabilities in a managed Windows Active Directory (Windows AD) service, limitations are introduced when compared to a customer-run Windows Active Directory infrastructure.

If you’ve administered an Active Directory environment in a complex enterprise (managing users, groups, and group policies doesn’t count) you’re familiar with the butterflies that accompany the mention of a schema change.  Modifying the schema of Active Directory is similar to modifying the DNA of a living being.  Sure, you might have wonderful intentions but you may just end up causing the zombie apocalypse.  Modifications typically mean lots of application testing of the schema changes in a lower environment and a well-documented disaster recovery plan (you really don’t want to try to recover from a failed schema change or have to back one out).

Given the above, you can see the logic of why a service provider offering a managed Windows AD service wouldn’t want to allow schema changes.  However, there are very legitimate business justifications for extending the schema (outside your standard AD/Exchange/Skype upgrades), such as applications that need to store additional data about a security principal or a business process that would be better facilitated with some additional metadata attached to an employee’s AD user account.  This is the market share Amazon is looking to capture.

So how does Amazon provide for this capability in a managed Windows AD forest?  Amazon accomplishes it through a very intelligent method of performing such a critical activity: submitting an LDIF through the AWS Directory Service console.  That’s right folks, you don’t have to hold membership in a highly privileged group such as Schema Admins (which probably relieves Amazon as much as you), and there’s far less risk of absolutely butchering a schema change by modifying something you didn’t intend to modify.

Amazon describes three steps to modifying the schema:

  1. Create the LDIF file
  2. Import the LDIF file
  3. Verify the schema extension was successful

Let’s review each of the steps.

In the first step we have to create an LDAP Data Interchange Format (LDIF) file.  Think of the LDIF file as a set of instructions to the directory, which in this case would be an add or modify of an object class or attribute.  I’ll be using a sample LDIF file I grabbed from an Oracle knowledge base article.  This schema file will add the unixUserName and unixGroupName attributes and the unixNameInfo auxiliary class to the default Active Directory schema.

To complete step one I dumped the contents below into an LDIF file and saved it as schemamod.ldif.

dn: CN=unixUserName, CN=Schema, CN=Configuration, DC=example, DC=com
changetype: add
attributeID: 1.3.6.1.4.1.42.2.27.5.1.60
attributeSyntax: 2.5.5.3
isSingleValued: TRUE
searchFlags: 1
lDAPDisplayName: unixUserName
adminDescription: This attribute contains the object's UNIX username
objectClass: attributeSchema
oMSyntax: 27

dn: CN=unixGroupName, CN=Schema, CN=Configuration, DC=example, DC=com
changetype: add
attributeID: 1.3.6.1.4.1.42.2.27.5.1.61
attributeSyntax: 2.5.5.3
isSingleValued: TRUE
searchFlags: 1
lDAPDisplayName: unixGroupName
adminDescription: This attribute contains the object's UNIX groupname
objectClass: attributeSchema
oMSyntax: 27

dn:
changetype: modify
add: schemaUpdateNow
schemaUpdateNow: 1
-

dn: CN=unixNameInfo, CN=Schema, CN=Configuration, DC=example, DC=com
changetype: add
governsID: 1.3.6.1.4.1.42.2.27.5.2.15
lDAPDisplayName: unixNameInfo
adminDescription: Auxiliary class to store UNIX name info in AD
mayContain: unixUserName
mayContain: unixGroupName
objectClass: classSchema
objectClassCategory: 3
subClassOf: top

For step two I logged into the AWS Management Console and navigated to the Directory Service Console.  Here we can see my instance of AWS Managed AD with the domain name of geekintheweeds.com.

[Screenshot: Directory Service Console showing the geekintheweeds.com directory]

I then clicked the hyperlink on my Directory ID which takes me into the console for the geekintheweeds.com instance.  Scrolling down shows a menu where a number of operations can be performed.  For the purposes of this blog post, we’re going to focus on the Maintenance menu item.  Here we have the ability to leverage AWS Simple Notification Service (AWS SNS) to create notifications for directory changes such as health changes where a managed Domain Controller goes down.  The second section is a pretty neat feature where we can snapshot the Windows AD environment to create a point-in-time copy of the directory we can restore.  We’ll see this in action in a few minutes.  Lastly, we have the schema extensions section.

[Screenshot: Maintenance menu showing the SNS notification, snapshot, and schema extension sections]

Here I clicked the Upload and update schema button, selected the LDIF file, and added a short description.  I then clicked the Update Schema button.

[Screenshot: Upload and update schema dialog]

If you know me you know I love to try to break stuff.  If you look closely at the LDIF contents I pasted above you’ll notice I didn’t update the file with my domain name.  Here the error in the LDIF has been detected and the schema modification was cancelled.

[Screenshot: schema modification cancelled due to the error in the LDIF]

I went through and made the necessary modifications to the file and tried again.  The LDIF processes through and the console updates to show the schema change has been initialized.

[Screenshot: schema change initialized]

Hitting refresh on the browser window updates the status to show Creating Snapshot.  Yes folks, Amazon has baked into the schema update process a snapshot of the directory to provide a fallback mechanism in the event of your zombie apocalypse.  The snapshot creation process will take a while.

[Screenshot: status showing Creating Snapshot]

While the snapshot processes, let’s discuss what Amazon is doing behind the scenes to process the LDIF file.  We first saw that it performs some light validation on the LDIF file; it then takes a snapshot of the directory, then applies the changes to a single domain controller by selecting one as the schema master, removing it from directory replication, and applying the LDIF file using our favorite old school tool LDIFDE.EXE.  Lastly, the domain controller is added back into replication to replicate the changes to the other domain controller and complete the process.  If you’ve been administering Windows AD you’ll recognize this as the recommended best practice for schema updates over the years.

Once the process is complete the console updates to show completion of the schema installation and the creation of the snapshot.

[Screenshot: completed schema installation and the new snapshot]
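
The same flow can also be driven programmatically; the Directory Service API exposes a StartSchemaExtension operation that takes the LDIF content and can snapshot the directory first, which boto3 surfaces as start_schema_extension.  A sketch with a placeholder directory ID is below; verify the parameters against the current boto3 documentation.

import boto3

ds = boto3.client('ds')

with open('schemamod.ldif') as ldif_file:
    ldif_content = ldif_file.read()

# Submit the schema extension and have the service snapshot the directory first
response = ds.start_schema_extension(
    DirectoryId='d-xxxxxxxxxx',                     # placeholder directory ID
    CreateSnapshotBeforeSchemaExtension=True,
    LdifContent=ldif_content,
    Description='Add UNIX name attributes'
)
print(response['SchemaExtensionId'])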


AWS Managed Microsoft AD Deep Dive Part 5 – Security


You didn’t think I was done with AWS Managed Microsoft AD yet, did you?  In this post I’m going to perform some tests to evaluate the protocols and cipher suites available for LDAPS as well as check the managed domain controllers’ support for NTLMv1 and the cipher suites supported for Kerberos.  I’ll be using the same testing mechanisms I used for my series on Microsoft Azure Active Directory Domain Services.

For those of you who are new to the series, I’ve been performing a deep dive review of AWS Managed Microsoft AD which is Amazon’s answer to a managed Windows Active Directory service.  In the first post I provided a high level overview of the service, in the second post I covered the setup of the service, the third post reviewed the directory structure, pre-configured security principals and group policies, and the delegated security model, and in the fourth entry I delved into how Amazon has managed to delegate configuration of LDAPS and the requirements that pop up due to their design choices.  I highly recommend you review those posts as well as my series on Microsoft Azure AD Domain Services if you’d like to compare the two services.

I’ve made a modification to my lab and have added another server named SERVER02 which will be running Linux.  The updated Visio looks like this.

[Diagram: updated lab topology with SERVER02]

Server01 has been configured with the Windows Remote Server Administration Tools (RSAT) for Active Directory as well as holding the Active Directory Certificate Services (AD CS) role and being configured as a root Enterprise CA.  I’ve also done all the necessary configuration to distribute the certificates to the managed domain controllers and have successfully tested LDAPS.  Server02 will be used to test SSLv3 and NTLM.  I’ve modified the instance to use the domain controllers as DNS servers by overriding DHCP settings as outlined in this article.

The first thing I’m going to do is test whether SSLv3 has been disabled on the managed domain controllers.  Recall that the managed domain controllers are running Windows Server 2012 R2, which has SSLv3 enabled by default.  It can be disabled by modifying the registry as documented here.  Believe it or not, you can connect to the managed domain controllers’ registry via a remote registry connection.  Checking the registry location shows that the SSLv3 node hasn’t been created, which is indicative of SSLv3 still being enabled.

[Screenshot: remote registry view showing no SSLv3 node]

To be sure I checked it using the same method I used in my Azure AD Domain Services post, which is essentially compiling a version of OpenSSL that still supports SSLv3.  After the customized version was installed, I queried the domain controller over port 636; as you can see in the screenshot below, SSLv3 is still enabled.  Suffice to say this surprised me considering what I had seen so far in regards to the security of the service.  This will be a show stopper for some organizations in adopting the service, especially since, from what I observed, it isn’t configurable by the customer.

[Screenshot: SSLv3 connection to port 636 succeeding]

So SSLv3 is enabled and presents a risk.  Have the cipher suites been hardened?  For this I’ll again use a tool developed by Thomas Pornin.   The options I’m using perform an exhaustive search across the typically offered cipher suites, space the connections out by 1 second, and start with a minimum of sslv3.

[Screenshot: cipher suite enumeration results]

The results are what I expected and mimic the results I saw when testing Azure AD Domain Services, minus the support for SSLv3 which Microsoft has disabled in their managed offering.  The supported cipher suites look to be the out of the box defaults for Server 2012 R2 and include RC4 and 3DES which are ciphers with known vulnerabilities.  The inability to disallow the ciphers might again be a show stopper for organizations with strict security requirements.

The Kerberos protocol is a critical component of Windows Active Directory, providing the glue that holds the service together, including (but in no way limited to) being behind a user’s authentication to a domain-joined machine, the single sign-on experience, and the ability to form trusts with other forests.  Given the importance of the protocol, it’s important to ensure it’s backed by strong ciphers.  The ciphers supported by a Windows Active Directory are configurable and can be checked by looking at the msDS-SupportedEncryptionTypes attribute of a domain controller object.

I next pulled up a domain controller object in ADUC and reviewed the attribute.  The attribute on the managed domain controllers has a value of 28, which is the default for Windows Server 2012 R2.  The value translates to support of the following cipher suites:

  • RC4_HMAC_MD5
  • AES128_CTS_HMAC_SHA1_96
  • AES256_CTS_HMAC_SHA1_96

These are the same cipher suites supported by Microsoft’s Azure AD Domain Services service.  In this case both vendors have left the configuration to the defaults.
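
For reference, that value of 28 is just a bitmask over the documented encryption type flags, which is easy to decode with a few lines of Python:

# Flag values for msDS-SupportedEncryptionTypes as documented by Microsoft
ENCRYPTION_TYPES = {
    0x1: 'DES_CBC_CRC',
    0x2: 'DES_CBC_MD5',
    0x4: 'RC4_HMAC_MD5',
    0x8: 'AES128_CTS_HMAC_SHA1_96',
    0x10: 'AES256_CTS_HMAC_SHA1_96',
}

def decode_encryption_types(value):
    # Return the name of every flag set in the attribute value
    return [name for flag, name in ENCRYPTION_TYPES.items() if value & flag]

print(decode_encryption_types(28))
# ['RC4_HMAC_MD5', 'AES128_CTS_HMAC_SHA1_96', 'AES256_CTS_HMAC_SHA1_96']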

Lastly, to emulate my testing of Azure AD Domain Services, I tested support for NTLMv1.  By default Windows Server 2012 R2 supports NTLMv1 due to requirements for backwards compatibility. Microsoft has long recommended disabling NTLMv1 due to the documented issues with the security of the protocol. Sadly there are a large number of applications and devices in use in enterprises which still require NTLMv1.

To test the AWS managed domain controllers I’m going to use Samba’s smbclient package on SERVER02.  I’ll use the client to connect to the domain controller’s share from SERVER02 using NTLM.  I first installed the smbclient package by running:

yum install samba-client

The client enforces the use of NTLMv2 by default, so I needed to make a modification to the global section of the smb.conf file by adding client ntlmv2 auth = no. This option disables NTLMv2 in smbclient and will force it to use NTLMv1.

[Screenshot: smb.conf modification]

In order to see whether or not the client was using NTLMv1 when connecting to the domain controllers, I started a packet capture using tcpdump before initiating a connection with the smbclient.

[Screenshot: tcpdump capture running during the smbclient connection]

I then transferred the packet capture over to my Windows box with WinSCP, opened the capture with Wireshark, and navigated to the packet containing the Session Setup Request.  In the parsed capture we don’t see an NTLMv2 response, which means NTLMv1 was used to authenticate to the domain controller, indicating NTLMv1 is supported by the managed domain controllers.

[Screenshot: Wireshark capture of the Session Setup Request without an NTLMv2 response]

So what can we take from the findings of this analysis?

  1. Amazon has left the secure transport protocols to the defaults which means SSLv3 is supported.
  2. Amazon has left the cipher suites to the defaults which means both RC4 and 3DES cipher suites are supported for both LDAPS and Kerberos.

I’d really like to see Amazon address the support for SSLv3 as soon as possible.  There is no reason I can see why that shouldn’t be shut off by default.  Similar to my requests to Microsoft, I’d like to see Amazon allow the supported cipher suites to be configurable via the AWS Management Console.  These two changes would allow organizations with strict security requirements, such as those in the public sector, to utilize the service without introducing significant risk (and audit headaches).

In my next post I’ll demonstrate how the service can be leveraged to provide Windows Active Directory service to on-premises machines or machines in another public cloud as well as exploring how to create a forest trust with the service.

See you next post!