Capturing and Visualizing Office 365 Security Logs – Part 2

Hello again my fellow geeks.

Welcome to part two of my series on visualizing Office 365 security logs.  In my last post I walked through the process of getting the sign-in and audit logs and provided a link to some Lambdas I put together to automate pulling them down from Microsoft Graph.  Recall that the Lambda stores the files in raw format (with a small bit of transformation on the time stamps) in Amazon S3 (Simple Storage Service).  For this demonstration I modified the Lambda's parameters to download the last 30 days of sign-in logs and store them in an S3 bucket I use for blog demos.

When the logs are pulled from Microsoft Graph they come down in JSON (JavaScript Object Notation) format.  Love it or hate it, JSON is the common standard for exchanging information these days.  The schema for the JSON representation of the sign-in logs is fairly complex and deeply nested because there is a ton of great information in there.  Thankfully Microsoft has done a wonderful job of documenting the schema.  Now that we have the logs and the schema we can start working with the data.

When I first started this effort I put together a Python function which transformed the files into pipe-delimited CSVs.  As soon as I finished the function I wondered if there was an alternative way to handle it.  In comes Amazon Athena to the rescue with its OpenX JSON SerDe library.  After reading through a few blogs (great AWS blog here), StackOverflow posts, and the official AWS documentation I was ready to put something together myself.  After some trial and error I had a working DDL (Data Definition Language) statement for the data structure.  I've made the DDLs available on GitHub.

Once I had the schema defined, I created the table in Athena.  The official AWS documentation does a fine job explaining the few clicks required to create a table, so I won't re-create that here.  The DDLs I've provided above will make it a quick and painless process.
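If you're curious what one of those DDLs looks like without jumping over to GitHub, below is a heavily trimmed sketch pushed through boto3.  Treat it as illustrative only: the column list is abbreviated, and the bucket path, database, table name, and results location are placeholders you would swap for your own.  The full DDLs in the repo map the complete nested schema.

import boto3

athena = boto3.client('athena', region_name='us-east-1')

# Heavily trimmed illustration - the real DDL on GitHub maps the full nested sign-in schema
ddl = """
CREATE EXTERNAL TABLE IF NOT EXISTS azuread.tbl_signin_demo (
  value array<struct<
    id:string,
    createddatetime:string,
    userprincipalname:string,
    ipaddress:string,
    status:struct<errorcode:int,failurereason:string>
  >>
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://my-demo-bucket/signinlogs/'
"""

athena.start_query_execution(
    QueryString=ddl,
    QueryExecutionContext={'Database': 'azuread'},
    ResultConfiguration={'OutputLocation': 's3://my-athena-results/'}
)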

Let’s review what we’ve done so far.  We’ve set up a recurring job that pulls the sign-in and audit logs via the API and dumps all that juicy data into cheap object storage on which we can enforce lifecycle policies.  We’ve then defined the schema for the data and made it available via standard SQL queries.  All without provisioning a server and for pennies on the dollar.  Not too shabby!

At this point you can use your analytics tool of choice, whether it be QuickSight, Tableau, Power BI, or one of the many other tools that have flooded the market over the past few years.  Since I don’t make any revenue from these blog posts, I like to go the cheap and easy route of using Amazon QuickSight.

After completing the initial setup of QuickSight I was ready to go.  The next step was to create a new data set.  For that I clicked the Manage Data button and selected New Data Set.

[Screenshot: QuickSight Manage Data screen with the New Data Set option]

On the Create a Data Set screen I selected the Athena option and created a name for the data source.

[Screenshot: Create a Data Set screen with Athena selected]

From there I selected the database in Athena, which for me was named azuread.  The tables within the database are then populated and I chose tbl_signin_demo, which points to the test S3 bucket I mentioned previously.

[Screenshot: selecting the azuread database and tbl_signin_demo table]

Due to the complexity of the data structure I opted to use a custom SQL query.  There is no reason why you couldn’t turn the query I’m about to use into its own table or view in Athena and connect to that instead to make it more consumable for a wider array of users.  It’s really up to you, and I honestly don’t know what the appropriate “big data” way of doing it is.  Either way, those of you with real SQL skills may want to look away from this query lest you experience a Raiders of the Lost Ark moment.

[Image: Raiders of the Lost Ark face-melting scene]

You were warned.

SELECT
    records.id,
    records.createddatetime,
    records.userprincipalname,
    records.userDisplayName,
    records.userid,
    records.appid,
    records.appdisplayname,
    records.ipaddress,
    records.clientappused,
    records.mfadetail.authdetail AS mfadetail_authdetail,
    records.mfadetail.authmethod AS mfadetail_authmethod,
    records.correlationid,
    records.conditionalaccessstatus,
    records.appliedconditionalaccesspolicy.displayname AS cap_displayname,
    array_join(records.appliedconditionalaccesspolicy.enforcedgrantcontrols,' ') AS cap_enforcedgrantcontrols,
    array_join(records.appliedconditionalaccesspolicy.enforcedsessioncontrols,' ') AS cap_enforcedsessioncontrols,
    records.appliedconditionalaccesspolicy.id AS cap_id,
    records.appliedconditionalaccesspolicy.result AS cap_result,
    records.originalrequestid,
    records.isinteractive,
    records.tokenissuername,
    records.tokenissuertype,
    records.devicedetail.browser AS device_browser,
    records.devicedetail.deviceid AS device_id,
    records.devicedetail.iscompliant AS device_iscompliant,
    records.devicedetail.ismanaged AS device_ismanaged,
    records.devicedetail.operatingsystem AS device_os,
    records.devicedetail.trusttype AS device_trusttype,
    records.location.city AS location_city,
    records.location.countryorregion AS location_countryorregion,
    records.location.geocoordinates.altitude,
    records.location.geocoordinates.latitude,
    records.location.geocoordinates.longitude,
    records.location.state AS location_state,
    records.riskdetail,
    records.risklevelaggregated,
    records.risklevelduringsignin,
    records.riskstate,
    records.riskeventtypes,
    records.resourcedisplayname,
    records.resourceid,
    records.authenticationmethodsused,
    records.status.additionaldetails,
    records.status.errorcode,
    records.status.failurereason
FROM "azuread"."tbl_signin_demo"
CROSS JOIN (UNNEST(value) as t(records))

This query will de-nest the data and give you a detailed (possibly extremely large depending on how much data you are storing) parsed table. I was now ready to create some data visualizations.
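As an aside, if you’d prefer the approach I mentioned earlier of landing this query in Athena itself rather than pasting it into QuickSight, wrapping it in a view is one way to do it.  Here’s a rough sketch using boto3; the view name and results location are placeholders and I’ve truncated the column list, so treat it as a starting point rather than a finished product.

import boto3

athena = boto3.client('athena', region_name='us-east-1')

# Hypothetical view wrapping the flattening query - extend the SELECT list to match the full query above
view_ddl = """
CREATE OR REPLACE VIEW azuread.vw_signin_flat AS
SELECT records.id,
       records.createddatetime,
       records.userprincipalname,
       records.ipaddress,
       records.status.errorcode AS status_errorcode
FROM "azuread"."tbl_signin_demo"
CROSS JOIN (UNNEST(value) as t(records))
"""

athena.start_query_execution(
    QueryString=view_ddl,
    QueryExecutionContext={'Database': 'azuread'},
    ResultConfiguration={'OutputLocation': 's3://my-athena-results/'}
)

QuickSight (or any other SQL-speaking tool) can then point at the view directly.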

The first visual I made was a geospatial visual using the location data included in the logs filtered to failed logins. Not surprisingly our friends in China have shown a real interest in my and my wife’s Office 365 accounts.

[Screenshot: geospatial map of failed logins by location]

Next up I was interested in seeing if there were any patterns in the frequency of the failed logins.  For that I created a simple line chart showing the number of failed logins per user account in my tenant.  Interestingly enough the new year meant back to work for more than just you and me.

[Screenshot: line chart of failed logins per user over time]

Like I mentioned earlier, Microsoft provides a ton of great detail in the sign-in logs.  Beyond just location, they also provide reasons for login failures.  I next created a stacked bar chart to show the different reasons for failed logins by user.  I found the blocked sign-ins by malicious IPs interesting.  It’s nice to know that is being tracked and taken care of.

[Screenshot: stacked bar chart of failed login reasons by user]

Failed logins are great, but the other thing I was interested in was successful logins and user behavior.  For this I created a vertical stacked bar chart that displayed the successful logins by user by device operating system (yet more great data captured in the logs).  You can tell from the bar on the right my wife is a fan of her Mac!

[Screenshot: stacked bar chart of successful logins by user and operating system]

As I gather more data I plan on creating some more visuals, but this was a great start.  The geospatial one is my favorite.  If you have access to a larger data set with a diverse set of users, your data should prove fascinating.  Definitely share any graphs or interesting data points you end up putting together if you opt to do some of this analysis yourself.  I’d love some new ideas!

That will wrap up this series.  As you’ve seen, the modern tool sets available to you now can do some amazing things for cheap without forcing you to maintain the infrastructure behind them.  Vendors are also doing a wonderful job providing a metric ton of data in their logs.  If you take the initiative to understand the product and the data, you can glean some powerful information that has both security and business value.  Even better, you can create some simple visuals to communicate that data to a wide variety of audiences, making it that much more valuable.

Have a great weekend!

 

Capturing and Visualizing Office 365 Security Logs – Part 1

Welcome back again my fellow geeks!

I’ve been busy over the past month nerding out on some pet projects.  I thought it would be fun to share one of those pet projects with you.  If you had a chance to check out my last series, I walked through my first Python experiment which was to write a re-usable tool that could be used to pull data from Microsoft’s Graph API (Microsoft Graph).

For those of you unfamiliar with Microsoft Graph, it’s the RESTful API (application programming interface) used to interact with Microsoft cloud offerings such as Office 365 and Azure.  You’ve probably been interacting with it without even knowing it through the many PowerShell modules Microsoft has released to programmatically interact with those services.

One of the many resources which can be accessed through Microsoft Graph are the Azure AD (Active Directory) security and audit reports.  If you’re using Office 365, Microsoft Azure, or simply Azure AD as an identity platform for SSO (single sign-on) to third-party applications like Salesforce, these reports provide critical security data.  You’re going to want to capture them, store them, and analyze them.  You’re also going to have to account for the limited window during which Microsoft makes these logs available.

The challenge is they are not available via the means by which logs have traditionally been captured on-premises, whether that’s using syslogd, installing a SIEM agent, or even Windows Event Log Forwarding.  Instead you’ll need to take a step forward in evolving the way you’re used to doing things.  This is what moving to the cloud is all about.

Microsoft allows you to download the logs manually via the Azure Portal GUI (graphical user interface) or capture them by programmatically interacting with Microsoft Graph.  While the former option may work for ad-hoc use cases, it doesn’t scale.  Instead we’ll explore the latter method.

If you have an existing enterprise-class SIEM (Security Information and Event Management) solution such as Splunk, you’ll have an out-of-the-box integration.  However, what if you don’t have such a platform, your organization isn’t yet ready to let that platform reach out over the Internet, or you’re interested in doing this for a personal Office 365 subscription?  I fell into the last category and decided it would be an excellent use case to get some experience with Python and Microsoft Graph, and take advantage of some of the data services offered by AWS (Amazon Web Services).  This is the use case and solution I’m going to cover in this post.

Last year I had a great opportunity to dig into operational and security logs to extract useful data to address some business problems.  It was my first real opportunity to examine large amounts of data and to create different visualizations of that data to extract useful trends about user and application behavior.  I enjoyed the hell out of it and thought it would be fun to experiment with my own data.

I decided that my first use case would be Office 365 security logs.  As I covered in my last series my wife’s Office 365 account was hacked.  The damage was minor as she doesn’t use the account for much beyond some crafting sites (she’s a master crocheter as you can see from the crazy awesome Pennywise The Clown she made me for Christmas).

[Photo: crocheted Pennywise the Clown]

The first step in the process was determining an architecture for the solution.  I gave myself a few requirements:

  1. The solution must not be dependent on my home lab infrastructure
  2. Storage for the logs must be cheap and readily available
  3. The credentials used in my Python code need to be properly secured
  4. The solution must be automated and notify me of failures
  5. The data needs to be available in a form that it can be examined with an analytics solution

Based upon the requirements I decided to go the serverless (don’t hate me for using that tech buzzword 🙂 ) route.  My decisions were:

  • AWS Lambda would run my code
  • Amazon CloudWatch Events would be used to trigger the Lambda once a day to download the last 24 hours of logs
  • Amazon S3 (Simple Storage Service) would store the logs
  • AWS Systems Manager Parameter Store would store the parameters my code used leveraging AWS KMS (Key Management Service) to encrypt the credentials used to interact with Microsoft Graph
  • Amazon Athena would hold the schema for the logs and make the data queryable via SQL
  • Amazon QuickSight would be used to visualize the data by querying Amazon Athena

The high level architecture is pictured below.

[Diagram: high-level architecture of the solution]

I had never done a Lambda before so I spent a few days looking at some examples and doing the typical Hello World that we all do when we’re learning something new.  From there I took the framework of Python code I put together for general purpose queries to Microsoft Graph and adapted it into two Lambdas.  One Lambda would pull the sign-in logs while the other would pull the audit logs.  I also wanted a repeatable way to provision the Lambdas to share with others, get some CloudFormation practice, and brush up on my very dusty Bash scripting.  The results are located here in one of my GitHub repos.
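To give a flavor of how the once-a-day trigger fits together, here’s a rough sketch of wiring a CloudWatch Events rule to one of the Lambdas with boto3.  The function ARN, rule name, and account number are placeholders, and the repo linked above does this provisioning through CloudFormation rather than this way, so consider it illustrative only.

import boto3

events = boto3.client('events', region_name='us-east-1')
lambda_client = boto3.client('lambda', region_name='us-east-1')

# Placeholders - the real provisioning lives in the CloudFormation templates in the repo
RULE_NAME = 'pull-signin-logs-daily'
FUNCTION_ARN = 'arn:aws:lambda:us-east-1:111111111111:function:pull-signin-logs'

# Fire once a day; the Lambda itself asks Microsoft Graph for the last 24 hours of logs
rule = events.put_rule(Name=RULE_NAME, ScheduleExpression='rate(1 day)')

# Allow CloudWatch Events to invoke the function, then point the rule at it
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId='AllowCloudWatchEventsInvoke',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn']
)
events.put_targets(Rule=RULE_NAME, Targets=[{'Id': '1', 'Arn': FUNCTION_ARN}])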

I’m going to stop here for this post because we’ve covered a fair amount of material.  Hopefully after reading this post you understand that you have to take a new tack with getting logs for cloud-based services such as Azure AD.  Thankfully the cloud has brought us a whole new toolset we can use to automate the extraction and storage of those logs in a simple and secure manner.

In my next post I’ll walk through how I used Athena and QuickSight to put together some neat dashboards to satisfy my nerdy interests and get better insight into what’s happening on a daily basis with my Office 365 subscription.

See you next post and go Pats!

Using Python to Pull Data from MS Graph API – Part 2

Welcome back my fellow geeks!

In this series I’m walking through my experience putting together some code to integrate with the Microsoft Graph API (Application Programming Interface).  In the last post I covered the logic behind this pet project and the tools I used to get it done.  In this post I’ll be walking through the code and covering what’s happening behind the scenes.

The project consists of three files.  The awsintegration.py file contains functions for the integration with AWS Systems Manager Parameter Store and Amazon S3 using the Python boto3 SDK (Software Development Kit).  Graphapi.py contains two functions.  One function uses Microsoft’s Azure Active Directory Authentication Library for Python (ADAL) and the other uses Python’s Requests library to make calls to the MS Graph API.  Finally, the main.py file contains the code that brings everything together.  There are a few trends you’ll notice with all of the code.  First, it’s very simple since I’m a long way from being able to do any fancy tricks, and second, I tried to stay away from using too many third-party modules.

Let’s first dig into the awsintegration.py module.  In the first few lines below I import the required modules, which include AWS’s Boto3 library.

import json
import boto3
import logging

Python has a stellar standard logging module that makes logging to a centralized location across a package a breeze.  The line below configures modules called by the main package to inherit the logging configuration from the main package.  This way I was able to direct anything I wanted to log to the same log file.

log = logging.getLogger(__name__)

This next function uses Boto3 to call AWS Systems Manager Parameter Store to retrieve a secure string.  Be aware that if you’re using Parameter Store to store secure strings the security principal you’re using to make the call (in my case an IAM User via Cloud9) needs to have appropriate permissions to Parameter Store and the KMS CMK.  Notice I added a line here to log the call for the parameter to help debug any failures.  Using the parameter store with Boto3 is covered in detail here.

def get_parametersParameterStore(parameterName,region):
    log.info('Request %s from Parameter Store',parameterName)
    client = boto3.client('ssm', region_name=region)
    response = client.get_parameter(
        Name=parameterName,
        WithDecryption=True
    )
    return response['Parameter']['Value']

The last function in this module again uses Boto3 to upload the file to an Amazon S3 bucket with a specific prefix.  Using S3 is covered in detail here.

def put_s3(bucket,prefix,region,filename):
    s3 = boto3.client('s3', region_name=region)
    s3.upload_file(filename,bucket,prefix + "/" + filename)

Next up is the graphapi.py module.  In the first few lines I again import the necessary modules as well as the AuthenticationContext module from ADAL.  This module contains the AuthenticationContext class which is going to get the OAuth 2.0 access token needed to authenticate to the MS Graph API.

import json
import requests
import logging
from adal import AuthenticationContext

log = logging.getLogger(__name__)

In the function below an instance of the AuthenticationContext class is created and the acquire_token_with_client_credentials method is called.  It uses the OAuth 2.0 Client Credentials grant type which allows the script to access the MS Graph API without requiring a user context.  I’ve already gone ahead and provisioned and authorized the script with an identity in Azure AD and granted it the appropriate access scopes.

Behind the scenes Azure AD (authorization server in OAuth-speak) is contacted and the script (client in OAuth-speak) passes a unique client id and client secret.  The client id and client secret are used to authenticate the application to Azure AD which then looks within its directory to determine what resources the application is authorized to access (scope in OAuth-speak).  An access token is then returned from Azure AD which will be used in the next step.

def obtain_accesstoken(tenantname,clientid,clientsecret,resource):
    auth_context = AuthenticationContext('https://login.microsoftonline.com/' +
        tenantname)
    token = auth_context.acquire_token_with_client_credentials(
        resource=resource,client_id=clientid,
        client_secret=clientsecret)
    return token

A properly formatted header is created and the access token is included.  The function checks to see if the q_param parameter has a value and if it does it passes it as a dictionary object to the Python Requests library, which includes the key value pairs as query strings.  The request is then made to the appropriate endpoint.  If the response code is anything but 200 an exception is raised, written to the log, and the script terminates.  Assuming a 200 is received, the Python JSON library is used to parse the response.  The JSON content is searched for an @odata.nextLink attribute, which indicates the results have been paged.  The function handles this by looping until there are no longer any paged results.  It additionally combines the paged results into a single JSON array to make it easier to work with moving forward.

def makeapirequest(endpoint,token,q_param=None):
 
    headers = {'Content-Type':'application/json', \
    'Authorization':'Bearer {0}'.format(token['accessToken'])}

    log.info('Making request to %s...',endpoint)
        
    if q_param != None:
        response = requests.get(endpoint,headers=headers,params=q_param)
        print(response.url)
    else:
        response = requests.get(endpoint,headers=headers)    
    if response.status_code == 200:
        json_data = json.loads(response.text)
            
        if '@odata.nextLink' in json_data.keys():
            log.info('Paged result returned...')
            record = makeapirequest(json_data['@odata.nextLink'],token)
            entries = len(record['value'])
            count = 0
            while count < entries:
                json_data['value'].append(record['value'][count])
                count += 1
        return(json_data)
    else:
        raise Exception('Request failed with ',response.status_code,' - ',
            response.text)

Lastly there is main.py which stitches the script together.  The first section adds the modules we’ve already covered in addition to the argparse library which is used to handle arguments added to the execution of the script.

import json
import requests
import logging
import time
import graphapi
import awsintegration
from argparse import ArgumentParser

A simple configuration for the logging module is set up, instructing it to write to msapiquery.log at a level of INFO and apply a standard format.

logging.basicConfig(filename='msapiquery.log', level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

This chunk of code creates an instance of the ArgumentParser class and configures two arguments.  The sourcefile argument is used to designate the JSON parameters file which contains all the necessary information.

The parameters file is then opened and processed.  Note that the S3 parameters are only pulled in if the --s3 switch was used.

parser = ArgumentParser()
parser.add_argument('sourcefile', type=str, help='JSON file with parameters')
parser.add_argument('--s3', help='Write results to S3 bucket',action='store_true')
args = parser.parse_args()

try:
    with open(args.sourcefile) as json_data:
        d = json.load(json_data)
        tenantname = d['parameters']['tenantname']
        resource = d['parameters']['resource']
        endpoint = d['parameters']['endpoint']
        filename = d['parameters']['filename']
        aws_region = d['parameters']['aws_region']
        q_param = d['parameters']['q_param']
        clientid_param = d['parameters']['clientid_param']
        clientsecret_param = d['parameters']['clientsecret_param']
        if args.s3:
            bucket = d['parameters']['bucket']
            prefix = d['parameters']['prefix']

Next up the get_parametersParameterStore function from the awsintegration module is executed twice: once to get the client id and once to get the client secret.  Note that the get_parameters method of the Boto3 Systems Manager client could have been used to get both of the parameters in a single call, but I didn’t go that route.

    logging.info('Attempting to contact Parameter Store...')
    clientid = awsintegration.get_parametersParameterStore(clientid_param,aws_region)
    clientsecret = awsintegration.get_parametersParameterStore(clientsecret_param,aws_region)

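For what it’s worth, the batched alternative mentioned above would look roughly like the sketch below.  It’s an illustration only; the function name is made up and the actual code sticks with the two separate get_parameter calls.

import boto3

def get_parameters_batch(parameter_names, region):
    # Single-call variant using get_parameters instead of get_parameter
    client = boto3.client('ssm', region_name=region)
    response = client.get_parameters(
        Names=parameter_names,
        WithDecryption=True
    )
    # Return a dictionary of parameter name -> decrypted value
    return {p['Name']: p['Value'] for p in response['Parameters']}

# e.g. values = get_parameters_batch([clientid_param, clientsecret_param], aws_region)
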
In these next four lines the access token is obtained by calling the obtain_accesstoken function and the request to the MS Graph API is made using the makeapirequest function.

    logging.info('Attempting to obtain an access token...')
    token = graphapi.obtain_accesstoken(tenantname,clientid,clientsecret,resource)

    logging.info('Attempting to query %s ...',endpoint)
    data = graphapi.makeapirequest(endpoint,token,q_param)

This section creates a string representing the current day, month, and year and prepends it to the filename that was supplied in the parameters file.  The file is then opened using the with statement.  If you’re familiar with the using statement from C#, the with statement is similar in that it ensures resources are cleaned up after being used.

Before the data is written to file, I remove the @odata.nextLink key if it’s present.  This is totally optional and just something I did to pretty up the results.  The data is then written to the file as raw text by using the Python JSON encoder/decoder.

    logging.info('Attempting to write results to a file...')
    timestr = time.strftime("%Y-%m-%d")
    filename = timestr + '-' + filename
    with open(filename,'w') as f:
        
        ## If the data was paged remove the @odata.nextLink key
        ## to clean up the data before writing it to a file

        if '@odata.nextLink' in data.keys():
            del data['@odata.nextLink']
        f.write(json.dumps(data))

Finally, if the s3 argument was passed when the script was run, the put_s3 method from the awsintegration module is run and the file is uploaded to S3.

    if args.s3:
        logging.info('Attempting to write results to %s S3 bucket...',bucket)
        awsintegration.put_s3(bucket,prefix,aws_region,filename)

Exceptions thrown anywhere in the script are captured here and written to the log file.  I played around a lot with a few different ways of handling exceptions and everything was so interdependent that if there was a failure it was best for the script to stop altogether and inform the user.  Naftali Harris has an amazing blog that walks through the many different ways of handling exceptions in Python and the various advantages and disadvantages.  It’s a great read.

except Exception as e:
    logging.error('Exception thrown: %s',e)
    print('Error running script.  Review the log file for more details')

So that’s the code.  Let’s take a quick look at the parameters file below.  It’s very straightforward.  Keep in mind both the bucket and prefix parameters are only required when using the --s3 option.  Here are some details on the other options:

  • The tenantname attribute is the DNS name of the Azure AD tenant being queried.
  • The resource attribute specifies the resource the access token will be used for.  If you’re going to be hitting the MS Graph API, more than likely it will be https://graph.microsoft.com
  • The endpoint attribute specifies the endpoint the request is being made to, including any query strings you plan on using
  • The clientid_param and clientsecret_param attributes are the AWS Systems Manager Parameter Store parameter names that hold the client id and client secret the script was provisioned with from Azure AD
  • The q_param attribute is a set of key value pairs intended to store OData query strings
  • The aws_region attribute is the region the S3 bucket and Parameter Store data are stored in
  • The filename attribute is the name you want to set for the file the script will produce
{
    "parameters":{
        "tenantname": "mytenant.com",
        "resource": "https://graph.microsoft.com",
        "endpoint": "https://graph.microsoft.com/beta/auditLogs/signIns",
        "clientid_param":"myclient_id",
        "clientsecret_param":"myclient_secret",
        "q_param":{"$filter":"createdDateTime gt 2019-01-09"},
        "aws_region":"us-east-1",
        "filename":"sign_in_logs.json",
        "bucket":"mybucket",
        "prefix":"myprefix"
    }
}

Now that the script has been covered, let’s see it in action.  First I’m going to demonstrate how it handles paging by querying the MS Graph API endpoint to list out the users in the directory.  I’m going to append the $select query parameter and set it to return just the user’s id to make the output simpler, and set the $top query parameter to one to limit the results to one user per page.  The endpoint looks like this: https://graph.microsoft.com/beta/users?$top=1&$select=id.

I’ll be running the script from an instance of Cloud9.  The IAM user I’m using with AWS has appropriate permissions to the S3 bucket, KMS CMK, and parameters in the parameter store.  I’ve set each of the parameters in the parameters file to the appropriate values for the environment I’ve configured.  I’ll additionally be using the --s3 option.

 

[Screenshot: running the script from Cloud9]

Once the script is complete it’s time to look at the log file that was created.  As seen below, each step in the script is logged to aid with debugging if something were to fail.  The log also indicates the results were paged.

[Screenshot: log file showing each step and the paged results]

The output is nicely formatted JSON that could be further transformed or fed into something like Amazon Athena for further analysis (future post maybe?).

[Screenshot: formatted JSON output]

Cool right?  My original use case was sign-in logs so let’s take a glance at that.  Here I’m going to use an endpoint of https://graph.microsoft.com/beta/auditLogs/signIns with an OData filter of createdDateTime gt 2019-01-08 which will limit the data returned to today’s sign-ins.

In the logs we see the script was successfully executed and included the filter specified.

[Screenshot: log file showing the sign-in query with the filter applied]

The output is the raw JSON of the sign-ins over the past 24 hours.  For your entertainment purposes I’ve included one of the malicious sign-ins that was captured.  I SO can’t wait to examine this stuff in a BI tool.

[Screenshot: raw JSON of a captured malicious sign-in]

Well that’s it folks.  It may be ugly, but it works!  This was a fun activity to undertake as a first stab at making something useful in Python.  I especially enjoyed the lack of documentation available on this integration.  It really made me dive deep and learn things I probably wouldn’t have if there were a billion examples out there.

I’ve pushed the code to GitHub so feel free to muck around with it to your heart’s content.

AWS Managed Microsoft AD Deep Dive Part 6 – Schema Modifications

Yes folks, we’re at the sixth post in the series on AWS Managed Microsoft AD (AWS Managed AD).  I’ve covered a lot of material over the series including an overview, how to set up the service, the directory structure, pre-configured security principals, group policies, and the delegated security model, how to configure LDAPS in the service and the implications of Amazon’s design, and just a few days ago looked at the configuration of the security of the service in regards to protocols and cipher suites.  As per usual, I’d highly suggest you take a read through the prior posts in the series before starting on this one.

Today I’m going to look at the capabilities within AWS Managed AD for handling Active Directory schema modifications.  If you’ve read my series on Microsoft’s Azure Active Directory Domain Services (AAD DS) you know that the service doesn’t support schema modifications.  This makes Amazon’s service the better offering in an environment where schema modifications to the standard Windows AD schema are a requirement.  However, like many capabilities in a managed Windows Active Directory (Windows AD) service, limitations are introduced when compared to a customer-run Windows Active Directory infrastructure.

If you’ve administered an Active Directory environment in a complex enterprise (managing users, groups, and group policies doesn’t count) you’re familiar with the butterflies that accompany the mention of a schema change.  Modifying the schema of Active Directory is similar to modifying the DNA of a living being.  Sure, you might have wonderful intentions, but you may just end up causing the zombie apocalypse.  Modifications typically mean lots of application testing of the schema changes in a lower environment and a well-documented rollback and disaster recovery plan (you really don’t want to try to recover from a failed schema change or have to back one out).

Given the above, you can see the logic of why a service provider offering a managed Windows AD service wouldn’t want to allow schema changes.  However, there are very legitimate business justifications for expanding the schema (outside your standard AD/Exchange/Skype upgrades), such as applications that need to store additional data about a security principal or a business process that would be better facilitated with some additional metadata attached to an employee’s AD user account.  This is the market share Amazon is looking to capture.

So how does Amazon provide this capability in a managed Windows AD forest?  Amazon accomplishes it through a very intelligent method of performing such a critical activity: submitting an LDIF through the AWS Directory Service console.  That’s right folks, you (and probably more so Amazon) don’t have to worry about the customer having to hold membership in a highly privileged group such as Schema Admins or absolutely butchering a schema change by modifying something you didn’t intend to modify.

Amazon describes three steps to modifying the schema:

  1. Create the LDIF file
  2. Import the LDIF file
  3. Verify the schema extension was successful

Let’s review each of the steps.

In the first step we have to create an LDAP Data Interchange Format (LDIF) file.  Think of the LDIF file as a set of instructions to the directory, which in this case would be an add or modify to an object class or attribute.  I’ll be using a sample LDIF file I grabbed from an Oracle knowledge base article.  This schema file will add the attributes of unixUserName, unixGroupName, and unixNameInfo to the default Active Directory schema.

To complete step one I dumped the contents below into an LDIF file and saved it as schemamod.ldif.

dn: CN=unixUserName, CN=Schema, CN=Configuration, DC=example, DC=com
changetype: add
attributeID: 1.3.6.1.4.1.42.2.27.5.1.60
attributeSyntax: 2.5.5.3
isSingleValued: TRUE
searchFlags: 1
lDAPDisplayName: unixUserName
adminDescription: This attribute contains the object's UNIX username
objectClass: attributeSchema
oMSyntax: 27

dn: CN=unixGroupName, CN=Schema, CN=Configuration, DC=example, DC=com
changetype: add
attributeID: 1.3.6.1.4.1.42.2.27.5.1.61
attributeSyntax: 2.5.5.3
isSingleValued: TRUE
searchFlags: 1
lDAPDisplayName: unixGroupName
adminDescription: This attribute contains the object's UNIX groupname
objectClass: attributeSchema
oMSyntax: 27

dn:
changetype: modify
add: schemaUpdateNow
schemaUpdateNow: 1
-

dn: CN=unixNameInfo, CN=Schema, CN=Configuration, DC=example, DC=com
changetype: add
governsID: 1.3.6.1.4.1.42.2.27.5.2.15
lDAPDisplayName: unixNameInfo
adminDescription: Auxiliary class to store UNIX name info in AD
mayContain: unixUserName
mayContain: unixGroupName
objectClass: classSchema
objectClassCategory: 3
subClassOf: top

For step two I logged into the AWS Management Console and navigated to the Directory Service console.  Here we can see my AWS Managed AD instance with the domain name of geekintheweeds.com.

[Screenshot: Directory Service console showing the geekintheweeds.com directory]

I then clicked the hyperlink on my Directory ID, which takes me into the console for the geekintheweeds.com instance.  Scrolling down shows a menu where a number of operations can be performed.  For the purposes of this blog post, we’re going to focus on the Maintenance menu item.  Here we have the ability to leverage AWS Simple Notification Service (AWS SNS) to create notifications for directory changes, such as health changes where a managed domain controller goes down.  The second section is a pretty neat feature where we can snapshot the Windows AD environment to create a point-in-time copy of the directory we can restore.  We’ll see this in action in a few minutes.  Lastly, we have the schema extensions section.

[Screenshot: Maintenance menu with SNS notifications, snapshots, and schema extensions]

Here I clicked the Upload and update schema button, selected the LDIF file, and added a short description.  I then clicked the Update Schema button.

[Screenshot: uploading the LDIF file]

If you know me you know I love to try to break stuff.  If you look closely at the LDIF contents I pasted above you’ll notice I didn’t update the file with my domain name.  Here the error in the LDIF has been detected and the schema modification was cancelled.

[Screenshot: schema update cancelled due to the error in the LDIF]

I went through and made the necessary modifications to the file and tried again.  The LDIF processes through and the console updates to show the schema change has been initialized.

[Screenshot: schema change initialized]

Hitting refresh on the browser window updates the status to show Creating Snapshot.  Yes folks, Amazon has baked into the schema update process a snapshot of the directory to provide a fallback mechanism in the event of your zombie apocalypse.  The snapshot creation process will take a while.

[Screenshot: status showing Creating Snapshot]

While the snapshot process runs, let’s discuss what Amazon is doing behind the scenes to process the LDIF file.  We first saw that it performs some light validation on the LDIF file; it then takes a snapshot of the directory, then applies the changes to a single domain controller by selecting one as the schema master, removing it from directory replication, and applying the LDIF file using our favorite old school tool LDIFDE.EXE.  Lastly, the domain controller is added back into replication to replicate the changes to the other domain controller and complete the changes.  If you’ve been administering Windows AD you’ll recognize this mirrors the recommended best practices for schema updates over the years.

Once the process is complete the console updates to show completion of the schema installation and the creation of the snapshot.

[Screenshot: schema installation completed and snapshot created]

 

My Experience Passing AWS Certified Cloud Practitioner Exam

Welcome back my fellow geeks!

Today I’m going to interrupt the series on AWS Managed Microsoft AD.  For the past few weeks, in between writing the entries for the recent deep dive series, I’ve been preparing for the AWS Cloud Practitioner exam.  I thought it would be helpful to share my experience prepping for and passing the exam.

If you’re not familiar with the AWS Certified Cloud Practitioner exam, it’s very much an introductory exam into Amazon Web Services’ overarching architecture and products.  Amazon’s intended audience for the certification is your C-levels, sales people, and technical people who are new to the AWS stack and potentially the cloud in general.  It’s very much an inch deep and a mile wide.  For those of you who have passed your CISSP, the experience studying for it is similar (although greatly scaled down content-wise) in that you need to be able to navigate the shallow end of many pools.

Some of you may be asking yourselves why I invested my time in getting an introductory certificate rather than just going for the AWS Certified Solutions Architect – Associate.  The reason is my personal belief that establishing a solid foundation in a technology or product is a must.  I’ve encountered too many IT professionals with a decade or more of experience and a hundred certificates to their name who can’t explain the basics of the OSI model or the difference between digitally signing something and encrypting it.  The sign of a stellar IT professional is one who can start at the business justification for an application and walk you right down through the stack to speak to the technology standards being leveraged within the application to deliver its value.  This importance of foundation is one reason I recommend every new engineer start out by taking the CompTIA A+, Network+, and Security+ exams.  You won’t find exams out there that better focus on foundational concepts than CompTIA exams.

The other selling point of this exam to me was the audience it’s intended for.  Who wouldn’t want to know the contents and messaging in an exam intended for the C-level?  Nothing is more effective at influencing the C-level than speaking the language they’re familiar with and pushing the messaging you know they’ve been exposed to.

Let me step off this soapbox and get back to my experience with the exam.  🙂

As I mentioned above I spent about two weeks preparing for the exam.  My experience with the AWS stack was pretty minimal prior to that, limited to what I did for my prior blogs on Azure AD and AWS integration for SSO and provisioning and Microsoft Cloud App Security integration with AWS.  As you can tell from the blog, I’ve done a fair amount of public cloud solutions over the past few years, just very minimally AWS.  The experience in other public cloud solutions such as Microsoft Azure and Google Cloud Platform (GCP) proved hugely helpful because the core offerings leverage similar modern concepts (i.e. all selling compute, network, and storage).  Additionally, the experiences I’ve had over my career with lots of different infrastructure gave me the core foundation I needed to get up and running.  The biggest challenge for me was really learning the names of all the different offerings, their use cases, and the capabilities that set them apart from the other vendors.

For studying materials I followed most of the recommendations from Amazon which included reviews of a number of whitepapers.  I had started the official Amazon Cloud Practitioner Essentials course (which is free by the way) but didn’t find the instructors engaging enough to keep my attention.  I ended up purchasing a monthly subscription to courses offered by A Cloud Guru which were absolutely stellar and engaging at a very affordable monthly price (something like $29/month).  In addition to the courses I read each of the recommended whitepapers (and ended up reading a bunch of others as well) a few times each, taking notes of key concepts and terminology.  While I was studying for this exam, I was also working on my AWS deep dive which helped to reinforce the concepts by actually building out the services for my own use.

I spent a lot of time diving into the rabbit hole of products I found really interesting (Redshift) as well as reading up on concepts I’m weaker on (big data analytics, modern NoSQL databases, etc.).  That rabbit hole consisted of reading blogs, Wikipedia, and standards to better understand the technical concepts.  Anything I felt would be worthwhile I captured in my notes.  Once I had a good 15-20 pages of notes (sorry, all paper this time around), I grabbed the key concepts I wanted to focus on and created flash cards.  I studied the deck of 200 or so flash cards each night as well as re-reading sections of the whitepapers I wanted to familiarize myself with.

For practice exams I used the practice questions Amazon provides as well as the quizzes from A Cloud Guru.  I found the questions on the actual exam more challenging, but the practice questions and quizzes were helpful for getting into the right mindset.  The A Cloud Guru courses probably covered a good 85-90% of the material, but I wouldn’t recommend using them as a sole source of study; you need to read those whitepapers multiple times over.  You also need to do some serious hands-on because some of the questions ask you very basic questions about how you do things in the AWS Management Console.

Overall it was a well done exam.  I learned a bunch about the AWS product offerings, the capabilities that set AWS apart from the rest of the industry, and gained a ton of good insight into general cloud architecture and design from the whitepapers (which are really well done).  I’d highly recommend the exam to anyone who has anything to do with the cloud, whether you’re using AWS or not.  You’ll gain some great insight into cloud architecture best practices as well as see modern technology concepts put into action.

I’ll be back with the next entry in my AWS Managed Microsoft AD series later this week.  Have a great week and thanks for reading!


AWS Managed Microsoft AD Deep Dive Part 4 – Configuring LDAPS

I’m back again with another entry in my deep dive into AWS Managed Microsoft Active Directory (AD).  So far I’ve provided an overview of the service, covered how to configure the service, and analyzed the Active Directory default configuration such as the directory structure, security principals, password policies, and group policy setup by Amazon for new instances.  In this post I’m going to look at the setup of LDAPS and how Amazon supports configuration of it in the delegated model they’ve set up for the service.

Those of you that have supported a Windows AD environment will be quite familiar with the wonders and sometimes pain of the Lightweight Directory Access Protocol (LDAP).  Prior to modern directories such as AWS Cloud Directory and Azure Active Directory, the LDAP protocol served critical roles by providing both authentication and a method of working with data stored in directory data stores such as Windows AD.  For better or worse the protocol is still relevant today when working with Windows AD for both of the above capabilities (less so for authentication these days if you stay away from backwards-thinking vendors).  LDAP servers listen on ports 389 and 636, with 389 carrying traffic in the clear (although there are exceptions where data is encrypted in transit, such as Microsoft’s usage of Kerberos encryption or the use of StartTLS (credit to my friend Chris Jasset for catching my omission of StartTLS)) and 636 (LDAPS) providing encryption in transit via an SSL tunnel (hopefully not anymore) or a TLS tunnel.

Windows AD maintains that pattern and serves up the content of its data store over LDAP on ports 389 and 636, and additionally ports 3268 and 3269 for global catalog queries.  In the assume-breach days we’re living in, we as security professionals want to protect our data as it flows over the network, which means we’ll more often than not (exceptions are again the Kerberos encryption usage mentioned above) be using LDAPS over ports 636 or 3269.  To provide that secure tunnel the domain controllers will need to be set up with a digital certificate issued by a trusted certificate authority (CA).  Domain controllers have unique requirements for the certificates they use.  If you’re using Active Directory Certificate Services (AD CS), Microsoft takes care of providing the certificate template for you.

So how do you provision a certificate to a domain controller’s certificate store when you don’t have administrative privileges, as is the case for a managed service like AWS Managed AD?  For Microsoft Azure Active Directory Domain Services (AAD DS) the public certificate and private key are uploaded via a web page in the Azure Portal, which is a solid way of doing it.  Amazon went in a different direction and instead takes advantage of certificate autoenrollment.  If you’re not familiar with autoenrollment take a read through this blog.  In short, it’s an automated way to distribute certificates and eliminate some of the overhead of manually going through the typical certificate lifecycle, which may contain manual steps.

If we bounce back to the member server in my managed domain, open the Group Policy Management Console (GPMC), and navigate to the settings tab of the AWS Managed Active Directory Policy, we see that autoenrollment has been enabled on the domain controllers.  This setting explains why Amazon requires a member server joined to the managed domain be configured to run AD CS.  Once the AD CS instance is set up, the CA has been configured either as a root or subordinate CA, and a proper template is enabled for autoenrollment, the domain controllers will request the appropriate certificate and will begin using it for LDAPS.

[Screenshot: GPMC showing autoenrollment enabled for the domain controllers]

If you’ve ever worked with AD CS you may be asking yourself how you’ll be able to install AD CS in a domain where you aren’t a domain administrator when the Microsoft documentation specifically states you need to be a member of the Enterprise Admins and root domain’s Domain Admins groups.  Well folks, that is where the AWS Delegated Enterprise Certificate Authority Administrators group comes into play.  Amazon has modified the forest to delegate the appropriate permissions to install AD CS in a domain environment.  If we navigate to CN=Public Key Services, CN=Services, CN=Configuration using ADSIEdit and view the Security for the container, we see this group has been granted full permissions over the node, allowing the necessary objects to be populated underneath it.

[Screenshot: ADSIEdit permissions on CN=Public Key Services]

I found it interesting that in the instructions provided by Amazon for enabling LDAPS, the instructions state the Domain Controller certificate template needs to be modified to remove the Client Authentication EKU.  I’d be interested in knowing the reason for modifying the Domain Controller certificate.  If I had to guess, it’s to prevent the domain controller from using the certificate outside of LDAPS, such as for mutual authentication during smart card logon.  Notice that from this article domain controllers only require the Server Authentication EKU when a certificate is used solely to secure LDAPS.

I’ve gone ahead and installed AD CS on SERVER01 as an Enterprise root CA and, thanks to the delegation model, the CA is provisioned with all the necessary goodness in CN=Public Key Services.  I also created the new certificate template following the instructions from Amazon.  The last step is to configure the traffic flow such that the managed domain controllers can contact the CA to request a certificate.  The Amazon instructions actually have a typo in them.  In step 4 it instructs you to modify the security group for your CA and to create a new inbound rule allowing all traffic from the source of your CA’s AWS security group.  The correct source is actually the security group automatically configured by Amazon that is associated with the managed Active Directory instance.
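If you want to script that corrected rule instead of clicking through the console, a rough boto3 sketch is below.  Both security group IDs are placeholders; the second one is meant to be the security group Amazon created for the managed directory.

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Placeholders - the CA server's security group and the security group AWS
# created for the AWS Managed Microsoft AD instance
CA_SG = 'sg-0123456789abcdef0'
MANAGED_AD_SG = 'sg-0fedcba9876543210'

# Allow the managed domain controllers to reach the CA for certificate autoenrollment
ec2.authorize_security_group_ingress(
    GroupId=CA_SG,
    IpPermissions=[{
        'IpProtocol': '-1',  # all traffic, per the (corrected) Amazon instructions
        'UserIdGroupPairs': [{'GroupId': MANAGED_AD_SG}]
    }]
)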

At this point you’ll need to wait a few hours for the managed domain controllers to detect the new certificates available for autoenrollment.  Mine actually only took about an hour to roll the certificates out.

[Screenshot: certificate rolled out to the managed domain controllers]

To test the service I opened LDP.EXE and established a secure session over port 636 and all worked as expected.

[Screenshot: LDP.EXE connected over port 636]

Since I’m a bit OCD I also pulled the certificate using openssl to validate it was issued by my CA.  As seen in the screenshot below, the certificate was issued by geekintheweeds-CA, which is the CA I set up earlier.

[Screenshot: openssl output showing the certificate issued by geekintheweeds-CA]
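If you’d rather script that check than run openssl by hand, a rough equivalent using Python’s ssl module is below.  The hostname and CA file are placeholders, and it assumes you’ve exported the CA certificate and that the domain controller’s name is present in the certificate it serves.

import socket
import ssl

# Placeholders - a managed domain controller and the exported enterprise CA certificate
HOST = 'dc1.geekintheweeds.com'
PORT = 636
CA_FILE = 'geekintheweeds-ca.pem'

# Trust only the enterprise CA stood up earlier
context = ssl.create_default_context(cafile=CA_FILE)

with socket.create_connection((HOST, PORT)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()
        print('Issuer :', cert['issuer'])
        print('Subject:', cert['subject'])
        print('Expires:', cert['notAfter'])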

Beyond the instructions Amazon provides, you’ll also want to give some thought as to how you’re going to handle revocation checks. Keep in mind that by default AD CS stores revocation information in AD. If you have applications configured to check for revocation remember to ensure those apps can communicate with the domain controllers over port 389 so design your security groups with this in mind.

Well folks that will wrap up this post. Now that LDAPS is configured, I’ll begin the tests looking at the protocols and ciphers supported when accessing LDAPS as well as examining the versions of NTLM supported and the encryption algorithms supported with Kerberos.

See you next post!

 

AWS Managed Microsoft AD Deep Dive Part 3 – Active Directory Configuration

Welcome back to my series on AWS Managed Microsoft Active Directory (AD).  In my first post I provided an overview of the service.  In the second post I covered the setup of an AWS Managed Microsoft AD directory instance and demoed the seamless domain-join of a Windows EC2 instance.  I’d recommend you reference back to those posts before you jump into this one.  I’ll also be referencing my series on Azure Active Directory Domain Services (AAD DS), so you may want to take a read through that as well, with the understanding it was written in February of 2018 and there may be new features in public preview.

For this post I’m going to cover the directory structure, security principals, group policy objects, and permissions which are created when a new instance of the managed service is spun up.  I’ll be using a combination of PowerShell and the Active Directory-related tools from the Remote Server Administration Tools.  For those of you who like to dig into the weeds, this will be a post for you.  Shall we dive in?

Let’s start with the basics.  Opening the Active Directory Users and Computers (ADUC) Microsoft Management Console (MMC) as the “Admin” account I created during setup displays the directory structure.  Notice that there are three organizational units (OU) that have been created by Amazon.

[Screenshot: ADUC showing the three OUs created by Amazon]

The AWS Delegated Groups OU contains the groups Amazon has configured that have been granted delegated rights to perform typical administrative activities in the directory.  A number of the activities would have required Domain Admin or Enterprise Admin by default which obviously isn’t an option within a managed service where you want to limit the customer from blowing up AD.  Notice that Amazon has properly scoped each of the groups to allow for potential management from another trusted domain.

[Screenshot: AWS Delegated Groups OU]

The group names speak for themselves but there are a few I want to call out.  The AWS Delegated Administrators group is the most privileged customer group within the service and has been nested into all of the groups except for the AWS Delegated Add Workstations To Domain Users, which makes sense since the AWS Delegated Administrators group has full control over the customer OU as we will see soon.

[Screenshot: AWS Delegated Administrators group membership]

The AWS Delegated Kerberos Delegation Administrators group allows members to configure account-based Kerberos delegation.  Yes, yes, I know Resource-Based Constrained Delegation is far superior, but there may be use cases where the Kerberos library being used doesn’t support it.  Take note that Azure Active Directory Domain Services (AAD DS) doesn’t support account-based Kerberos Constrained Delegation.  This is a win for Amazon in regards to flexibility.

Another group which popped out to me was AWS Delegated Sites and Services.  Members of this group are able to rename the Default-First-Site.  You would do this if you wanted it to match a site within your existing on-premises Windows AD to shave off a few seconds of user authentication by skipping the site discovery process.

The AWS Delegated System Management Administrators grants members full control over the domainFQDN\System\System Management container.  Creation of data in the container is a requirement for applications like Microsoft SCOM and Microsoft SCCM.

There is also the AWS Delegated User Principal Name Suffix Administrators group which grants members the ability to create explicit user principal names.  This could pop up as a requirement for something like synchronizing to Office 365 where your domain DNS name isn’t publicly routable and you have to go the alternate UPN direction.  Additionally we have the AWS Delegated Enterprise Certificate Authority Administrators group which allows for deployment of a Microsoft CA via Active Directory Certificate Services (AD CS) by granting full control over CN=Public Key Services in the Configuration partition.  We’ll see why AD CS is important for AWS Managed Microsoft AD later in the series.  I like the AWS Delegated Deleted Object Lifetime Administrators group which grants members the ability to set the lifetime for objects in the Active Directory Recycle Bin.

The next OU we have is the AWS Reserved OU.  As you can imagine, this is Amazon’s support staff’s OU.  Within it we have the built-in Administrator.  Unfortunately Amazon made the same mistake Microsoft did with this account by making it a god account with membership in Domain Admins, Enterprise Admins, and Schema Admins.  With the amount of orchestration going into the solution I’d have liked to see those roles either broken up into multiple accounts or no account holding standing membership in such privileged roles via a privileged access management system or red forest design.  The AWS Application and Service Delegated Group has a cryptic description (along with typos).  I poked around the permissions and see it has write access to the ServicePrincipalName attribute of Computer objects within the OU.  Maybe this comes into play with WorkDocs or WorkMail integration?  Lastly we have the AWSAdministrators group which has been granted membership in the AWS Delegated Administrators group, granting it all the privileges the customer administrator account has.  My best guess is this group is used by Amazon for supporting the customer’s environment.

[Screenshot: AWS Reserved OU]

The last OU we’ll look at is the customer OU, which takes on the NetBIOS name of the domain.  This is where the model for this service is similar to the model for AAD DS in that the customer has full control over an OU.  There are two OUs created within it named Computers and Users.  Amazon has set up the Computers OU and the Users OU as the default locations for newly domain-joined computer objects and new user objects.  The only object that is pre-created in these OUs is the customer Admin account, which is stored in the Users OU.  Under this OU you are free to do whatever needs doing.  It’s a similar approach to the one Microsoft took with AAD DS but contained one OU deep versus allowing for creation of new OUs at the base like in AAD DS.

[Screenshot: customer OU with the Computers and Users OUs]

Quickly looking at Sites and Subnets displays a single default site (which can be renamed as I mentioned earlier).  Amazon has defined the entirety of the available private IP address space to account for any network whether it be on-prem or within a VPC.

[Screenshot: Sites and Services showing the default site and subnets]

As for the domain controllers, both domain controllers are running Windows Server 2012 at the forest and domain functional levels of 2012 R2.

[Screenshot: domain controllers and functional levels]

Shifting over to group policy, Amazon has made some interesting choices and has taken extra steps to further secure the managed domain controllers.  As you can see from the screenshot below, there are four custom group policy objects (GPOs) created by default by Amazon.  Before we get into them, let’s first cover the Default Domain Policy (DDP) and Default Domain Controllers Policy (DDCP) GPOs.  If you look at the image below, you’ll notice that while the DDCP exists, it isn’t linked to the Domain Controllers OU.  This is an interesting choice, and not one that I would have made, but I’d be very curious as to their reasoning for removing the link.  Best practice would have been to leave it linked but create additional GPOs that override its settings with your more/less restrictive settings.  The additional GPOs would be set with a lower link order which would give them precedence over the DDCP.  At least they’re not modifying default GPOs, so that’s something. 🙂

Next up we have the DDP, which is straight out of the box minus one change to the setting Network Security: Force logoff when logon hours expire.  By default this setting is disabled and Amazon has enabled it to improve security.

The ServerAdmins GPO at the top of the domain has one setting enabled which adds the AWS Delegated Server Administrators group to the BUILTIN\Administrators group on all domain-joined machines.  This setting is worth paying attention to because it explains the blue icon next to the AWS Reserved OU.  Inheritance has been blocked on that OU, probably to avoid the settings in the ServerAdmins GPO being applied to any Computer objects created within it.  The Default Domain Policy has then been directly linked to the OU.

Next up we have the other GPO linked to the AWS Reserved OU named AWS Reserved Policy:User.  The policy itself has a few User-related settings intended to harden the experience for AWS support staff including turning on screensavers and password protecting them and preventing sharing of files within profiles.  Nothing too crazy.

Moving on to the Domain Controllers OU we see that the two policies linked are AWS Managed Active Directory Policy and TimePolicyPDC.  The TimePolicyPDC GPO simply sets the NTP settings on the domain controllers, such as configuring the DCs to use Amazon NTP servers.  The AWS Managed Active Directory Policy is an interesting one.  It contains all of the policies you’d expect out of the Default Domain Controllers Policy GPO (which makes sense since it isn’t linked) in addition to a bunch of settings hardening the system.  I compared many of the settings to the STIG for Windows Server 2012 / 2012 R2 Domain Controllers and it very closely matches.  If I had to guess, that is what Amazon is working from as a baseline, which might make sense since Amazon touts the managed service as PCI and HIPAA compliant with a few minor changes on the customer end for password length/history and account lockout.  We’ll cover how those changes are possible in a few minutes.

Compare this to Microsoft’s AAD DS which is straight up defaults with no ability to further harden.  Now I have no idea if that’s on the roadmap for Microsoft or if they’re hardening the system in another manner, but I imagine seeing GPOs that enforce required settings will make your auditors that much happier.  Another +1 for Amazon.

[Screenshot: GPMC showing the custom GPOs created by Amazon]

So how would a customer harden a password policy or configure account lockout?  If you recall from my blog on AAD DS, the password policy was a nightmare.  There was a zero character minimum password length, leaving complexity to dictate your length (3 characters, woohoo!).  If you’re like me, the thought of administrators having the ability to set three character passwords on a service that can be exposed to the Internet via its LDAPS Internet endpoint (did I mention that is a terrible design choice?) is a recipe for disaster.  There was also no way to set up fine grained password policies to correct this poor design decision.

Amazon has taken a much saner and more security sensitive approach.  Recall from earlier there was a group named AWS Delegated Fine Grained Password Policy Administrators.  Yes folks, in addition to Amazon keeping the Default Domain Policy the out of the box defaults (better than nothing), Amazon also gives you the ability to customize five pre-configured fine grained password policies.  With these policies you can set the password and lockout settings that are required by your organization.  A big +1 for Amazon here.

[Screenshot: the five pre-configured fine grained password policies]

That wraps up this post.  As you can see from this entry Amazon has done a stellar job building security into this managed service, allowing some flexibility for the customer to further harden the systems, all the while still being successful in delegating commonly performed administrative activities back to the customer.

In my next post I’ll walk through the process to configure LDAPS for the service.  It’s a bit funky, so it’s worth an entry.  Once LDAPS is up and running, I’ll follow it up by examining the protocols and cipher suites supported by the managed service.

See you next post!