Azure Key Vault, Certificates, and Python, oh my!

Welcome back folks! I recently had a few customers ask me about using certificates with Azure Key Vault and switching from a client secret to a client certificate for their Azure AD (Active Directory) service principals. The questions put me on a path of diving deeper into the topics, which resulted in some great learning and an opportunity to create some Python code samples.

Azure Key Vault is Microsoft’s solution for secure secret, key, and credential management. If you’re coming from the AWS (Amazon Web Services) realm, you can think of it as AWS KMS (Key Management Service) with a little bit of AWS Secrets Manager and AWS Certificate Manager thrown in there. The use cases for secrets and keys are fairly well known and straightforward, so I’m instead going to focus on the certificates use case.

In a world where passwordless is the newest buzzword, there is an increasing usage of secrets (or passwords) in the non-human world. These secrets are often used to programmatically interact with APIs. In the Microsoft world you have your service principals and client secrets, in the AWS world you have your IAM Users with secret access keys, and many third parties out there require similar patterns built around an access key. Vendors like Microsoft and AWS have worked to mitigate this growing problem within the scope of their own APIs by introducing features such as Azure Managed Identities and AWS IAM Roles, which use short-lived dynamic secrets. However, both of these solutions work only if your workload is running within the relevant public cloud and the service it’s running within supports the feature. What about third-party APIs, multi-cloud workloads, or on-premises workloads? In those instances you’re often forced to fall back to secret keys.

There is a better option than secret keys, and that is client certificates. While a secret falls into the “something you know” category, client certificates fall into the “something you have” category. They provide a higher assurance of identity (assuming you exercise good key management practices) and offer more flexibility in their secure storage and usage. Azure service principals support certificate-based authentication in addition to client secrets, and Azure Key Vault supports the secure storage of certificates. Used in combination, they can yield some pretty cool patterns.

Before I get into those patterns, I want to cover some of the basics of how Azure Key Vault stores certificates. There are some nuances to how it’s designed that are incredibly useful to understand. I’m not going to provide a deep dive on the inner workings of Key Vault, the public documentation does a decent enough job of that, but I am going to cover some of the basics which will help get you up and running.

Certificates can be both imported into and generated within Azure Key Vault. Generated certificates can be self-signed, issued by one of the public CAs (certificate authorities) Key Vault integrates with, or Key Vault can produce a CSR (certificate signing request) you can fulfill with your own CA. These processes are well detailed in the documentation, so I won’t be touching further on them.

Once you’ve imported or generated a certificate and private key into Key Vault, we get into the interesting stuff. The components of the certificate and private key are exposed in different ways through different interfaces as seen below.

Key Vault and certificates

Metadata about the certificate and the certificate itself are accessible via the certificates interface. This includes the certificate itself in DER (Distinguished Encoding Rules) format, properties of the certificate such as the expiration date, and metadata about the private key. You’ll use this interface to get a copy of the certificate (minus the private key) or to pull specific properties of the certificate such as the thumbprint.
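
Here’s a minimal sketch of working with the certificates interface using the azure-keyvault-certificates library; the vault URL and certificate name are hypothetical placeholders, and DefaultAzureCredential is just one of several ways you could authenticate.

import base64
from azure.identity import DefaultAzureCredential
from azure.keyvault.certificates import CertificateClient

credential = DefaultAzureCredential()
cert_client = CertificateClient(vault_url="https://myvault.vault.azure.net", credential=credential)

# Retrieve the public certificate and its properties (no private key comes back here)
cert = cert_client.get_certificate("my-client-cert")
print(cert.properties.expires_on)             # expiration date
print(cert.properties.x509_thumbprint.hex())  # thumbprint, returned as raw bytes
print(base64.b64encode(cert.cer).decode())    # DER-encoded certificate, base64 for display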

Operations using the private key such as sign, verify, encrypt, and decrypt, are made available through the key interface. Say you want to sign a JWT (JSON Web Token) to authenticate to an API, you would use this interface.
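
A quick sketch of the key interface, again with hypothetical vault and certificate names and assuming the certificate is backed by an RSA key, might look like this:

import hashlib
from azure.identity import DefaultAzureCredential
from azure.keyvault.certificates import CertificateClient
from azure.keyvault.keys.crypto import CryptographyClient, SignatureAlgorithm

credential = DefaultAzureCredential()
cert_client = CertificateClient(vault_url="https://myvault.vault.azure.net", credential=credential)
key_id = cert_client.get_certificate("my-client-cert").key_id

# Sign and verify a digest with the certificate's private key without it ever leaving Key Vault
crypto_client = CryptographyClient(key_id, credential=credential)
digest = hashlib.sha256(b"data to sign").digest()
sign_result = crypto_client.sign(SignatureAlgorithm.rs256, digest)
verify_result = crypto_client.verify(SignatureAlgorithm.rs256, digest, sign_result.signature)
print(verify_result.is_valid)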

Lastly, the private key is available through the secret interface. This is where you could retrieve the private key in PEM (privacy enhanced mail) or PKCS#12 (public key cryptography standards) format if you’ve set the private key to be exportable. Maybe you’re using a library like MSAL (Microsoft Authentication Library) which requires the private key as an input when obtaining an OAuth access token using a confidential client.
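
A sketch of the secret interface, assuming an exportable private key and the same hypothetical vault and certificate names, could look like the following. The format of the returned value depends on the content type you chose when creating the certificate.

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()
secret_client = SecretClient(vault_url="https://myvault.vault.azure.net", credential=credential)

# The secret shares the certificate's name; the value is either a base64-encoded
# PKCS#12 blob or a PEM bundle, depending on the certificate's content type
secret = secret_client.get_secret("my-client-cert")
print(secret.properties.content_type)   # e.g. application/x-pkcs12 or application/x-pem-file
private_key_material = secret.value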

Now that you understand those basics, let’s look at some patterns that you could leverage.

In the first pattern, consider that you have a CI/CD (continuous integration / continuous delivery) pipeline running on-premises that you wish to use to provision resources in Azure. You have a strict requirement from your security team that the infrastructure remain on-premises. In this scenario you could provision a service principal configured for certificate authentication and use the MSAL library to authenticate to Azure AD and obtain the access tokens needed to access the ARM (Azure Resource Manager) API. Here is Python sample code demonstrating this pattern.

On-premises certificate authentication with MSAL
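
A condensed sketch of that code looks like the following; the tenant ID, client ID, certificate thumbprint, and private key path are hypothetical placeholders, and the ARM subscriptions call at the end is just a smoke test.

import msal
import requests

TENANT_ID = "00000000-0000-0000-0000-000000000000"            # hypothetical tenant ID
CLIENT_ID = "11111111-1111-1111-1111-111111111111"            # hypothetical app registration
CERT_THUMBPRINT = "A1B2C3D4E5F60718293A4B5C6D7E8F9012345678"  # hypothetical SHA-1 thumbprint
PRIVATE_KEY_PATH = "cert_private_key.pem"                     # hypothetical path to the PEM private key

with open(PRIVATE_KEY_PATH, "r") as f:
    private_key = f.read()

app = msal.ConfidentialClientApplication(
    client_id=CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential={"thumbprint": CERT_THUMBPRINT, "private_key": private_key},
)

# Request a token scoped to the Azure Resource Manager API
result = app.acquire_token_for_client(scopes=["https://management.azure.com/.default"])

if "access_token" in result:
    # List subscriptions as a simple smoke test of the token
    response = requests.get(
        "https://management.azure.com/subscriptions?api-version=2020-01-01",
        headers={"Authorization": f"Bearer {result['access_token']}"},
    )
    print(response.json())
else:
    print(result.get("error_description"))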

In the next pattern, let’s consider you have a workload running in the Azure AD tenant you dedicate to internal enterprise workloads. You have a separate Azure AD tenant used for customer workloads. Within an Azure subscription associated with the customer tenant, there is an instance of Azure Event Hub you need to access from a workload running in the enterprise tenant. For this scenario you could use a pattern where the workload in the enterprise tenant uses an Azure Managed Identity to retrieve a client certificate and private key from Key Vault, then uses them with the MSAL library to obtain an access token for a service principal in the customer tenant, which it uses to access the Event Hub.

Here is some sample Python code that could be used to demonstrate this pattern.
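
A condensed sketch of that pattern is below. It assumes the certificate was created in Key Vault with a PEM content type and an exportable private key; the vault URL, certificate name, and tenant/client IDs are hypothetical placeholders.

import msal
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient
from cryptography import x509
from cryptography.hazmat.primitives import hashes

VAULT_URL = "https://myvault.vault.azure.net"                 # hypothetical
CERT_NAME = "customer-tenant-cert"                            # hypothetical
CUSTOMER_TENANT_ID = "00000000-0000-0000-0000-000000000000"   # hypothetical
CLIENT_ID = "11111111-1111-1111-1111-111111111111"            # hypothetical

# The workload's managed identity authenticates it to Key Vault in the enterprise tenant
credential = ManagedIdentityCredential()
secret_client = SecretClient(vault_url=VAULT_URL, credential=credential)

# The secret interface returns the private key and certificate as a single PEM bundle
pem_bundle = secret_client.get_secret(CERT_NAME).value
key_pem = pem_bundle[:pem_bundle.index("-----BEGIN CERTIFICATE-----")]
cert_pem = pem_bundle[pem_bundle.index("-----BEGIN CERTIFICATE-----"):]

# MSAL wants the certificate's SHA-1 thumbprint alongside the private key
thumbprint = x509.load_pem_x509_certificate(cert_pem.encode()).fingerprint(hashes.SHA1()).hex()

# The confidential client is the service principal registered in the customer tenant
app = msal.ConfidentialClientApplication(
    client_id=CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{CUSTOMER_TENANT_ID}",
    client_credential={"private_key": key_pem, "thumbprint": thumbprint},
)

# Request a token scoped to Event Hubs, then hand it to your Event Hubs client of choice
result = app.acquire_token_for_client(scopes=["https://eventhubs.azure.net/.default"])
print("access_token" in result)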

For the last pattern, let’s consider you have the same use case as above, but you are using the Premium SKU of Azure Key Vault because you have a regulatory requirement that the private key never leave the HSM (hardware security module) and that all cryptographic operations be performed on the HSM. This takes MSAL out of the picture because MSAL requires the private key to be provided as a variable when using a client certificate to authenticate the OAuth client. In this scenario you can use the key interface of Key Vault to sign the JWT used to obtain the access token from Azure AD. This same pattern could be leveraged for other third-party APIs that support certificate-based authentication.

Here is a Python code sample of this pattern.
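
A condensed sketch of that pattern follows. The vault URL, certificate name, tenant ID, and client ID are hypothetical placeholders; it pulls the certificate thumbprint and key identifier through the certificates interface, signs the client assertion’s digest through the key interface so the private key never leaves the HSM, and then trades the assertion for an access token at the Azure AD token endpoint.

import base64
import hashlib
import json
import time
import uuid
import requests
from azure.identity import DefaultAzureCredential
from azure.keyvault.certificates import CertificateClient
from azure.keyvault.keys.crypto import CryptographyClient, SignatureAlgorithm

VAULT_URL = "https://myvault.vault.azure.net"                 # hypothetical
CERT_NAME = "customer-tenant-cert"                            # hypothetical
TENANT_ID = "00000000-0000-0000-0000-000000000000"            # hypothetical
CLIENT_ID = "11111111-1111-1111-1111-111111111111"            # hypothetical
SCOPE = "https://eventhubs.azure.net/.default"

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

credential = DefaultAzureCredential()

# Pull the certificate thumbprint (x5t) and key identifier from the certificates interface
cert_client = CertificateClient(vault_url=VAULT_URL, credential=credential)
cert = cert_client.get_certificate(CERT_NAME)
x5t = b64url(cert.properties.x509_thumbprint)  # thumbprint is returned as raw bytes

# Build the client assertion JWT header and claims
now = int(time.time())
header = {"alg": "RS256", "typ": "JWT", "x5t": x5t}
claims = {
    "aud": f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    "iss": CLIENT_ID,
    "sub": CLIENT_ID,
    "jti": str(uuid.uuid4()),
    "nbf": now,
    "exp": now + 600,
}
signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())

# Sign the digest with the private key inside Key Vault (the key never leaves the HSM)
crypto_client = CryptographyClient(cert.key_id, credential=credential)
digest = hashlib.sha256(signing_input.encode()).digest()
signature = crypto_client.sign(SignatureAlgorithm.rs256, digest).signature
client_assertion = signing_input + "." + b64url(signature)

# Exchange the signed assertion for an access token
response = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "client_id": CLIENT_ID,
        "scope": SCOPE,
        "grant_type": "client_credentials",
        "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
        "client_assertion": client_assertion,
    },
)
print(response.json().get("token_type"))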

Well folks I’m going to keep it short and sweet. Hopefully this brief blog post has helped to show you the value of Key Vault and provide some options to you for moving away from secret-based credentials for your non-human access to APIs. Additionally, I really hope you get some value out of the Python code samples. I know there is a fairly significant gap in Python sample code for these types of operations, so hopefully this begins filling it.

Thanks!

Visualizing AWS Logging Data in Azure Monitor – Part 2

Welcome back folks!

In this post I’ll be continuing my series on how Azure Monitor can be used to visualize log data generated by other cloud services.  In my last post I covered the challenges that multicloud brings and what Azure can do to help with it.  I also gave an overview of Azure Monitor and covered the design of the demo I put together and will be walking through in this post.  Please take a read through that post if you haven’t already.  If you want to follow along, I’ve put the solution up on Github.

Let’s quickly review the design of the solution.


This solution uses some simple Python code to pull information about the usage of AWS IAM User access ids and secret keys from an AWS account.  The code runs as a Lambda, with the Azure Log Analytics Workspace ID and key stored in Lambda environment variables that are encrypted with an AWS KMS key.  The data is pulled from the AWS API using the Boto3 SDK and transformed into JSON format.  It’s then delivered to the HTTP Data Collector API, which places it into the Log Analytics Workspace.  From there, it becomes available to Azure Monitor to query and visualize.

Setting up an Azure environment for this integration is very simple.  You’ll need an active Azure subscription.  If you don’t have one, you can set up a free Azure account to play around with.  Once you’re set with the Azure subscription, you’ll need to create an Azure Log Analytics Workspace.  Instructions for that can be found in this Microsoft article.  After the workspace has been set up, you’ll need to get the workspace ID and key as referenced in the Obtain workspace ID and key section of this Microsoft article.  You’ll use this workspace ID and key to authenticate to the HTTP Data Collector API.

If you have a sandbox AWS account and would like to follow along, I’ve included a CloudFormation template that will set up the AWS environment.  You’ll need an AWS account with sufficient permissions to run the template and provision the resources.  Prior to running the template, you will need to zip up lambda_function.py and put it in an AWS S3 bucket you have permissions on.  When you run the template you’ll be prompted to provide the S3 bucket name, the name of the ZIP file, the Log Analytics Workspace ID and key, and the name you want the API to assign to the log in the workspace.

The Python code backing the solution is pretty simple.  It uses all standard Python modules except for the boto3 module used to interact with AWS.

import json
import logging
import re
import csv
import boto3
import os
import hmac
import base64
import hashlib
import datetime

from io import StringIO
from datetime import datetime
from botocore.vendored import requests

The first function in the code parses the ARN (Amazon Resource Name) to extract the AWS account number.  This information is later included in the log data written to Azure.

# Parse the IAM User ARN to extract the AWS account number
def parse_arn(arn_string):
    acct_num = re.findall(r'(?<=:)[0-9]{12}',arn_string)
    return acct_num[0]

The second function uses the strftime method to transform the timestamp returned from the AWS API to a format that the Azure Monitor API will detect as a timestamp and make that particular field for each record in the Log Analytics Workspace a datetime type.

# Convert timestamp to one more compatible with Azure Monitor
def transform_datetime(awsdatetime):
    transf_time = awsdatetime.strftime("%Y-%m-%dT%H:%M:%S")
    return transf_time

The next function queries the AWS API for a listing of the AWS IAM Users set up in the account and creates a dictionary object representing data about each user. Each object is added to a list which holds an entry for every user.

# Query for a list of AWS IAM Users
def query_iam_users():
    
    todaydate = (datetime.now()).strftime("%Y-%m-%d")
    users = []
    client = boto3.client(
        'iam'
    )

    paginator = client.get_paginator('list_users')
    response_iterator = paginator.paginate()
    for page in response_iterator:
        for user in page['Users']:
            user_rec = {'loggedDate':todaydate,'username':user['UserName'],'account_number':(parse_arn(user['Arn']))}
            users.append(user_rec)
    return users

The query_access_keys function queries the AWS API for a listing of the access keys that have been provisioned for an AWS IAM User, as well as the status of those keys and some metrics around their usage.  The resulting data is added to a dictionary object and the object added to a list.  Each item in the list represents a record for an AWS access id.

# Query for a list of access keys and information on access keys for an AWS IAM User
def query_access_keys(user):
    keys = []
    client = boto3.client(
        'iam'
    )
    paginator = client.get_paginator('list_access_keys')
    response_iterator = paginator.paginate(
        UserName = user['username']
    )

    # Get information on access key usage
    for page in response_iterator:
        for key in page['AccessKeyMetadata']:
            response = client.get_access_key_last_used(
                AccessKeyId = key['AccessKeyId']
            )
            # Sanitize key before sending it along for export

            sanitizedacctkey = key['AccessKeyId'][:4] + '...' + key['AccessKeyId'][-4:]
            # Create new dictionary object with access key information
            if 'LastUsedDate' in response.get('AccessKeyLastUsed'):

                key_rec = {'loggedDate':user['loggedDate'],'user':user['username'],'account_number':user['account_number'],
                'AccessKeyId':sanitizedacctkey,'CreateDate':(transform_datetime(key['CreateDate'])),
                'LastUsedDate':(transform_datetime(response['AccessKeyLastUsed']['LastUsedDate'])),
                'Region':response['AccessKeyLastUsed']['Region'],'Status':key['Status'],
                'ServiceName':response['AccessKeyLastUsed']['ServiceName']}
                keys.append(key_rec)
            else:
                key_rec = {'loggedDate':user['loggedDate'],'user':user['username'],'account_number':user['account_number'],
                'AccessKeyId':sanitizedacctkey,'CreateDate':(transform_datetime(key['CreateDate'])),'Status':key['Status']}
                keys.append(key_rec)
    return keys

The next two functions contain the code that creates and submits the request to the Azure Monitor API.  The product team was awesome enough to provide some sample code in the public documentation for this part.  The code is intended for Python 2 but only required a few small changes to make it compatible with Python 3.

Let’s first talk about the build_signature function.  At this time the API authenticates requests using HTTP request signing with the Log Analytics Workspace ID and key.  In short, this means you’ll have two sets of shared keys per workspace, so consider the workspace your authorization boundary and prioritize proper key management (i.e., use a different workspace for each workload, track key usage, and rotate keys as your internal policies require).

Breaking down the code below, the string to be hashed includes the HTTP method, the length of the request content, the content type, a custom x-ms-date header, and the REST resource endpoint.  The string is then converted to a bytes object, and an HMAC is created using SHA-256 and then Base64 encoded.  The result is the authorization header, which is returned by the function.

def build_signature(customer_id, shared_key, date, content_length, method, content_type, resource):
    x_headers = 'x-ms-date:' + date
    string_to_hash = method + "\n" + str(content_length) + "\n" + content_type + "\n" + x_headers + "\n" + resource
    bytes_to_hash = bytes(string_to_hash, encoding="utf-8")  
    decoded_key = base64.b64decode(shared_key)
    encoded_hash = base64.b64encode(
        hmac.new(decoded_key, bytes_to_hash, digestmod=hashlib.sha256).digest()).decode()
    authorization = "SharedKey {}:{}".format(customer_id,encoded_hash)
    return authorization

Not much needs to be said about the post_data function beyond that it uses the Python requests module to post the log content to the API.  Take note of the limits around the data that can be included in the body of the request.  The key takeaway is that if you plan on pushing a lot of data to the API, you’ll need to chunk your data to fit within the limits (see the sketch after the function below).

def post_data(customer_id, shared_key, body, log_type):
    method = 'POST'
    content_type = 'application/json'
    resource = '/api/logs'
    rfc1123date = datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S GMT')
    content_length = len(body)
    signature = build_signature(customer_id, shared_key, rfc1123date, content_length, method, content_type, resource)
    uri = 'https://' + customer_id + '.ods.opinsights.azure.com' + resource + '?api-version=2016-04-01'

    headers = {
        'content-type': content_type,
        'Authorization': signature,
        'Log-Type': log_type,
        'x-ms-date': rfc1123date
    }

    response = requests.post(uri,data=body, headers=headers)
    if (response.status_code >= 200 and response.status_code <= 299):
        print("Accepted")
    else:
        print("Response code: {}".format(response.status_code))

Last but not least we have the lambda_handler function, which brings everything together. It first gets a listing of users, loops through each user to gather information about access key usage, creates a log record for each key, converts the data from a dict to a JSON string, and writes it to the API. If the content is successfully delivered, the log for the Lambda will note that it was accepted.

def lambda_handler(event, context):

    # Enable logging to console
    logging.basicConfig(level=logging.INFO,format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')

    try:

        # Initialize empty records array
        #
        key_records = []
        
        # Retrieve list of IAM Users
        logging.info("Retrieving a list of IAM Users...")
        users = query_iam_users()

        # Retrieve list of access keys for each IAM User and add to record
        logging.info("Retrieving a listing of access keys for each IAM User...")
        for user in users:
            key_records.extend(query_access_keys(user))
        # Prepare data for sending to Azure Monitor HTTP Data Collector API
        body = json.dumps(key_records)
        post_data(os.environ['WorkspaceId'], os.environ['WorkspaceKey'], body, os.environ['LogName'])

    except Exception as e:
        logging.error("Execution error",exc_info=True)

Once the data is delivered, it will take a few minutes for it to be processed and appear in the Log Analytics Workspace. In my tests it only took around 2-5 minutes, but I wasn’t writing much data to the API.  After the data processes you’ll see a new entry under the listing of Custom Logs in the Log Analytics Workspace.  The entry will be the log name you picked with _CL appended to the end.  Expanding the entry will display the columns that were created based upon the log entries.  Note that the columns created from the data you passed will end with an underscore and a character denoting the data type.


Now that the data is in the workspace, I can start querying it and creating some visualizations.  Azure Monitor uses the Kusto Query Language (KQL).  If you’ve ever created queries in Splunk, the language will feel familiar.

The log I created in AWS and pushed to the API has the following schema.  Note the addition of the underscore followed by a character denoting the column data type.

  • logged_Date (string) – The date the Lambda ran
  • user_s (string) – The AWS IAM User the key belongs to
  • account_number_s (string) – The AWS Account number the IAM Users belong to
  • AccessKeyId_s (string) – The id of the access key associated with the user, which has been sanitized to show just the first 4 and last 4 characters
  • CreateDate_t (timestamp) – The date and time when the access key was created
  • LastUsedDate_t (timestamp) – The date and time the key was last used
  • Region_s (string) – The region where the access key was last used
  • Status_s (string) – Whether the key is enabled or disabled
  • ServiceName_s (string) – The AWS service where the access key was last used

In addition to what I’ve pushed, Azure Monitor adds a TimeGenerated field to each record which is the time the log entry was sent to Azure Monitor.  You can override this behavior and provide a field for Azure Monitor to use instead if you like (see here).  There are some other miscellaneous fields that are inherited from whatever schema the API is drawing from.  These are fields such as TenantId and SourceSystem, which in this case is populated with RestAPI.
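
If you wanted to override it, the HTTP Data Collector API documents a time-generated-field header for exactly that purpose.  As a sketch, the headers dictionary built in post_data could be extended like this, assuming you want the LastUsedDate field from each record used as TimeGenerated:

    headers = {
        'content-type': content_type,
        'Authorization': signature,
        'Log-Type': log_type,
        'x-ms-date': rfc1123date,
        # Tell Azure Monitor to use this field from the JSON payload as TimeGenerated
        'time-generated-field': 'LastUsedDate'
    }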

Since my personal AWS environment is quite small and usage of the AWS IAM Users is very limited, my data sets aren’t huge.  To address this I created a number of IAM Users with access keys for the purposes of this blog.  I’m getting that out of the way so my AWS friends don’t hate on me. 🙂

One of the core best practices in key management with shared keys is to ensure you rotate them.  The first data point I wanted to extract was which keys in my AWS account were over 90 days old.  To do that I put together the following query:

AWS_Access_Key_Report_CL
| extend key_age = datetime_diff('day',now(),CreateDate_t)
| project Age=key_age,AccessKey=AccessKeyId_s, User=user_s
| where Age > 90
| sort by Age

Let’s walk through the query.  The first line tells the query engine to run this query against the AWS_Access_Key_Report_CL log.  The next line creates a new field that contains the age of the key by determining the amount of time that has passed between the creation date of the key and today’s date.  The line after that instructs the engine to pull back only the key_age field I just created (renamed to Age) along with the AccessKeyId_s and user_s fields.  The results are then further culled down to only records where the key age is greater than 90 days, and finally the results are sorted by the age of the key.


Looks like it’s time to rotate that access key in use by Azure AD. 🙂

I can then pin this query to a new shared dashboard for other users to consume.  Cool and easy right?  How about we create something visual?

Looking at the trends in access key creation can provide some valuable insights into what is the norm and what is not.  Let’s take a look at the metrics for key creation (for the keys that still exist in an enabled or disabled state).  For that I’m going to use the following query:

AWS_Access_Key_Report_CL
| make-series AccessKeys=count() default=0 on CreateDate_t from datetime(2019-01-01) to datetime(2020-01-01) step 1d

In this query I’m using the make-series operator to count the number of access keys created each day and assigning a default value of 0 if there are no keys created on that date.  The result of the query isn’t very useful when looking at it in tabular form.


By selecting the Line drop-down box, I can transform the data into a line graph which shows me spikes in key creation.  If this was real data, investigation into the spike of key creations on 6/30 may be warranted.


I put together a few other visuals and tables and created a custom dashboard like the one below.  Creating the dashboard took about an hour or so, with much of the time invested in figuring out the query language.


What you’ve seen here is a demonstration of the power and simplicity of Azure Monitor.  By adding a simple-to-use API, Microsoft has exponentially increased the agility of the tool by allowing it to become a single pane of glass for monitoring across clouds.   It’s also worth noting that Microsoft’s BI (business intelligence) tool Power BI has direct integration with Azure Log Analytics.  This allows you to pull that log data into Power BI to perform more in-depth analysis and to create even richer visualizations.

Well folks, I hope you’ve found this series of value.  I really enjoyed creating it and already have a few additional use cases in mind.  Make sure to follow me on Github as I’ll be posting all of the code and solutions I put together there for your general consumption.

Have a great day!

 

 

Visualizing AWS Logging Data in Azure Monitor – Part 1

Hi folks!

2019 is more than halfway over and it feels like it has happened in a flash.  It’s been an awesome year with tons of change and even more learning.  I started the year neck deep in AWS and began transitioning into Azure back in April when I joined on with Microsoft.  Having the opportunity to explore both clouds and learn the capabilities of each offering has been an amazing experience that I’m incredibly thankful for.  As I’ve tried to do for the past 8 years, I’m going to share some of those learnings with you.  Today we’re going to explore one of the capabilities that differentiates Azure from its competition.

One of the key takeaways I’ve had from my experiences with AWS and Microsoft is that enterprises have become multicloud.  Workloads are quickly being spread out among public and private clouds.  While the business benefits greatly from a multicloud approach, where workloads can go to the most appropriate environment where the cost, risks, and timetables best suit them, it presents a major challenge to the technical orchestration behind the scenes.  With different APIs (application programming interfaces), varying levels of compliance, great and not-so-great capabilities around monitoring and alerting, and a major industry gap in multicloud skill sets, it can become quite a headache to successfully execute this approach.

One area where Microsoft Azure differentiates itself is its ability to ease the challenge of monitoring and alerting in a multicloud environment.  Azure Monitor is one of the key products behind this capability.  With this post I’m going to demonstrate Azure Monitor’s capabilities in this realm by walking you through a pattern of delivering, visualizing, and analyzing log data collected from AWS.  The pattern I’ll be demonstrating is reusable for almost any cloud (and potentially on-premises) offering.  Now sit back, put your geek hat on, and let’s dive in.

First, I want to briefly talk about what Azure Monitor is.  Azure Monitor is a solution which brings together a collection of tools that can be used to collect and analyze the large abundance of telemetry available today.  This telemetry could be metrics regarding a virtual machine’s performance or audit logs for Azure Active Directory.  The product team has put together the excellent diagram below which explains the architecture of the solution.

As you can see from the inputs on the left, Azure Monitor is capable of collecting and analyzing data from a variety of sources.  You’ll find plenty of documentation the product team has made publicly available on the five gray items, so I’m going to instead focus on custom sources.

For those of you who have been playing in the AWS pool, you can think of Azure Monitor as something similar to (but much more robust than) CloudWatch Metrics and CloudWatch Logs.  I know, I know, you’re thinking I’ve drunk the Microsoft Kool-Aid.


While I do love to reminisce about cold glasses of Kool-Aid on hot summers in the 1980s, I’ll opt to instead demonstrate it in action and let you decide for yourself.  To do this I’ll be leveraging the new API Microsoft introduced.  The Azure Monitor HTTP Data Collector API was introduced a few months back and provides the capability of delivering log data to Azure where it can be analyzed by Azure Monitor.

With Azure Monitor, logs are stored in an Azure resource called a Log Analytics Workspace.  For you AWS folk, you can think of a Log Analytics Workspace as something similar to CloudWatch Log Groups, where the data is stored within a logical boundary that shares retention and authorization settings.  Logs are sent to the API in JSON format and are placed in the Log Analytics Workspace you specify.  A high-level diagram of the flow can be seen below.

So now that you have a high level understanding of what Azure Monitor is, what it can do, and how the new API works, let’s talk about the demonstration.

If you’ve used AWS you’re very familiar with the capabilities of CloudWatch Metrics Dashboards and the basic query language available to analyze CloudWatch Logs.  To perform more complex queries and to create deeper visualizations, third-party solutions such as Elasticsearch and Kibana are often used.  While these solutions work, they can be complex to implement and can create more operational overhead.

When a peer informed me about the new API a few weeks back, I was excited to try it out.  I had just started to use Azure Monitor to put together some dashboards for my personal Office 365 and Azure subscriptions and was loving the power and simplicity of the analytics component of the solution.  The new API opened up some neat opportunities to pipe logging data from AWS into Azure to create a single dashboard I could reference for both clouds.  This became my use case and demonstration of the pattern of delivering logs from a third party to Azure Monitor with some simple Python code.

The logs I chose to deliver to the API were logs containing information surrounding the usage of AWS access ids and keys.  I had previously put together some code to pull this data and write it to an S3 bucket.

Let’s take a look at the design of the solution.  I had a few goals I wanted to make sure to hit if possible.  My first goal was to keep the code simple.  That meant limiting the usage of third-party modules and avoiding overcomplicating the implementation.

My second goal was to limit the usage of static credentials.  If I ran the code in Azure, I’d need to set up an AWS IAM User and provision an access id and secret key.  While I’m aware of the workaround to use SAML authentication, I’m not a fan because, in my personal opinion, it’s using SAML in a way that amounts to hammering a square peg into a round hole.  Sure you can do it, but you really shouldn’t unless you’re out of options.  Additionally, the solution requires some fairly sensitive permissions in AWS such as iam:ListAccessKeys, so the risk of the credentials being compromised could be significant.  Given the risks and constraints of authentication methods to the AWS API, I opted to run my code as a Lambda and follow AWS best practices by assigning the Lambda an IAM role.

On the Azure side, the Azure Monitor API for log delivery requires authentication using the Workspace ID and Workspace key.   Ideally these would be encrypted and stored in AWS Secrets Manager or as a secure parameter in Parameter Store, but I decided to go the easy route and store them as environment variables for the Lambda and to encrypt them with AWS KMS.  This cut back on the code and made the CloudFormation templates easier to put together.

With the decisions made, the resulting design is pictured above.


I’m going to end the post here and save the dive into implementation and code for the next post.  In the meantime, take a read through the Azure Monitor documentation and familiarize yourself with the basics.  I’ve also put the whole solution up on Github if you’d like to follow along for the next post.

See you next post!

 

Using Python to Pull Data from MS Graph API – Part 2

Welcome back my fellow geeks!

In this series I’m walking through my experience putting together some code to integrate with the Microsoft Graph API (Application Programming Interface).  In the last post I covered the logic behind this pet project and the tools I used to get it done.  In this post I’ll be walking through the code and covering what’s happening behind the scenes.

The project consists of three files.  The awsintegration.py file contains functions for the integration with AWS Systems Manager Parameter Store and Amazon S3 using the Python boto3 SDK (Software Development Kit).  Graphapi.py contains two functions.  One function uses Microsoft’s Azure Active Directory Authentication Library (ADAL) for Python and the other uses Python’s Requests library to make calls to the MS Graph API.  Finally, the main.py file contains the code that brings everything together. There are a few trends you’ll notice with all of the code. First, it’s very simple, since I’m a long way from being able to do any fancy tricks, and second, I tried to stay away from using too many third-party modules.

Let’s first dig into the awsintegration.py module.  In the first few lines below, I import the required modules, which include AWS’s Boto3 library.

import json
import boto3
import logging

Python has a stellar standard logging module that makes logging to a centralized location across a package a breeze.  The line below configures modules called by the main package to inherit the logging configuration from the main package.  This way I was able to direct anything I wanted to log to the same log file.

log = logging.getLogger(__name__)

This next function uses Boto3 to call AWS Systems Manager Parameter Store to retrieve a secure string.  Be aware that if you’re using Parameter Store to store secure strings the security principal you’re using to make the call (in my case an IAM User via Cloud9) needs to have appropriate permissions to Parameter Store and the KMS CMK.  Notice I added a line here to log the call for the parameter to help debug any failures.  Using the parameter store with Boto3 is covered in detail here.

def get_parametersParameterStore(parameterName,region):
    log.info('Request %s from Parameter Store',parameterName)
    client = boto3.client('ssm', region_name=region)
    response = client.get_parameter(
        Name=parameterName,
        WithDecryption=True
    )
    return response['Parameter']['Value']

The last function in this module again uses Boto3 to upload the file to an Amazon S3 bucket with a specific prefix.  Using S3 is covered in detail here.

def put_s3(bucket,prefix,region,filename):
    s3 = boto3.client('s3', region_name=region)
    s3.upload_file(filename,bucket,prefix + "/" + filename)

Next up is the graphapi.py module.  In the first few lines I again import the necessary modules as well as the AuthenticationContext class from ADAL.  This class is going to get the OAuth 2.0 access token needed to authenticate to the MS Graph API.

import json
import requests
import logging
from adal import AuthenticationContext

log = logging.getLogger(__name__)

In the function below an instance of the AuthenticationContext class is created and the acquire_token_with_client_credentials method is called.   It uses the OAuth 2.0 Client Credentials grant type which allows the script to access the MS Graph API without requiring a user context.  I’ve already gone ahead and provisioned and authorized the script with an identity in Azure AD and granted it the appropriate access scopes.

Behind the scenes Azure AD (authorization server in OAuth-speak) is contacted and the script (client in OAuth-speak) passes a unique client id and client secret.  The client id and client secret are used to authenticate the application to Azure AD which then looks within its directory to determine what resources the application is authorized to access (scope in OAuth-speak).  An access token is then returned from Azure AD which will be used in the next step.

def obtain_accesstoken(tenantname,clientid,clientsecret,resource):
    auth_context = AuthenticationContext('https://login.microsoftonline.com/' +
        tenantname)
    token = auth_context.acquire_token_with_client_credentials(
        resource=resource,client_id=clientid,
        client_secret=clientsecret)
    return token

A properly formatted header is created and the access token is included. The function checks to see if the q_param parameter has a value, and if it does, it passes it as a dictionary object to the Python Requests library, which includes the key-value pairs as query strings. The request is then made to the appropriate endpoint. If the response code is anything but 200, an exception is raised, written to the log, and the script terminates.  Assuming a 200 is received, the Python JSON library is used to parse the response.  The JSON content is searched for an @odata.nextLink attribute, which indicates the results have been paged.  The function handles this by looping until there are no longer any paged results.  It additionally combines the paged results into a single JSON array to make them easier to work with moving forward.

def makeapirequest(endpoint,token,q_param=None):
 
    headers = {'Content-Type':'application/json', \
    'Authorization':'Bearer {0}'.format(token['accessToken'])}

    log.info('Making request to %s...',endpoint)
        
    if q_param != None:
        response = requests.get(endpoint,headers=headers,params=q_param)
        print(response.url)
    else:
        response = requests.get(endpoint,headers=headers)    
    if response.status_code == 200:
        json_data = json.loads(response.text)
            
        if '@odata.nextLink' in json_data.keys():
            log.info('Paged result returned...')
            record = makeapirequest(json_data['@odata.nextLink'],token)
            entries = len(record['value'])
            count = 0
            while count < entries:
                json_data['value'].append(record['value'][count])
                count += 1
        return(json_data)
    else:
        raise Exception('Request failed with ',response.status_code,' - ',
            response.text)

Lastly there is main.py which stitches the script together.  The first section adds the modules we’ve already covered in addition to the argparse library which is used to handle arguments added to the execution of the script.

import json
import requests
import logging
import time
import graphapi
import awsintegration
from argparse import ArgumentParser

A simple configuration for the logging module is set up, instructing it to write to msapiquery.log at the INFO level using a standard format.

logging.basicConfig(filename='msapiquery.log', level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

This chunk of code creates an instance of the ArgumentParser class and configures two arguments.  The sourcefile argument is used to designate the JSON parameters file which contains all the necessary information.

The parameters file is then opened and processed.  Note that the S3 parameters are only pulled in if the --s3 switch was used.

parser = ArgumentParser()
parser.add_argument('sourcefile', type=str, help='JSON file with parameters')
parser.add_argument('--s3', help='Write results to S3 bucket',action='store_true')
args = parser.parse_args()

try:
    with open(args.sourcefile) as json_data:
        d = json.load(json_data)
        tenantname = d['parameters']['tenantname']
        resource = d['parameters']['resource']
        endpoint = d['parameters']['endpoint']
        filename = d['parameters']['filename']
        aws_region = d['parameters']['aws_region']
        q_param = d['parameters']['q_param']
        clientid_param = d['parameters']['clientid_param']
        clientsecret_param = d['parameters']['clientsecret_param']
        if args.s3:
            bucket = d['parameters']['bucket']
            prefix = d['parameters']['prefix']

Next up, the get_parametersParameterStore function from the awsintegration module is executed twice: once to get the client id and once to get the client secret.  Note that the get_parameters method of the Boto3 Systems Manager client could have been used to get both of the parameters in a single call, but I didn’t go that route.  (A quick sketch of that alternative follows the snippet below.)

    logging.info('Attempting to contact Parameter Store...')
    clientid = awsintegration.get_parametersParameterStore(clientid_param,aws_region)
    clientsecret = awsintegration.get_parametersParameterStore(clientsecret_param,aws_region)
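
For reference, a sketch of that single-call alternative might look like the following; the function name is my own and not part of the project, and it would live alongside the other helpers in awsintegration.py.

import boto3  # already imported at the top of awsintegration.py

def get_parametersParameterStoreBatch(parameterNames, region):
    # Fetch multiple SecureString parameters from Parameter Store in one call
    client = boto3.client('ssm', region_name=region)
    response = client.get_parameters(
        Names=parameterNames,
        WithDecryption=True
    )
    return {p['Name']: p['Value'] for p in response['Parameters']}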

In these next four lines the access token is obtained by calling the obtain_accesstoken function and the request to the MS Graph API is made using the makeapirequest function.

    logging.info('Attempting to obtain an access token...')
    token = graphapi.obtain_accesstoken(tenantname,clientid,clientsecret,resource)

    logging.info('Attempting to query %s ...',endpoint)
    data = graphapi.makeapirequest(endpoint,token,q_param)

This section creates a string representing the current day, month, and year and prepends the filename that was supplied in the parameters file.  The file is then opened using the with statement.  If you’re familiar with the using statement from C# the with statement is similar in that it ensures resources are cleaned up after being used.

Before the data is written to file, I remove the @odata.nextLink key if it’s present.  This is totally optional and just something I did to pretty up the results.  The data is then written to the file as raw text by using the Python JSON encoder/decoder.

    logging.info('Attempting to write results to a file...')
    timestr = time.strftime("%Y-%m-%d")
    filename = timestr + '-' + filename
    with open(filename,'w') as f:
        
        ## If the data was paged remove the @odata.nextLink key
        ## to clean up the data before writing it to a file

        if '@odata.nextLink' in data.keys():
            del data['@odata.nextLink']
        f.write(json.dumps(data))

Finally, if the s3 argument was passed when the script was run, the put_s3 method from the awsintegration module is run and the file is uploaded to S3.

    if args.s3:
        logging.info('Attempting to write results to %s S3 bucket...',bucket)
        awsintegration.put_s3(bucket,prefix,aws_region,filename)

Exceptions thrown anywhere in the script are captured here and written to the log file.  I played around a lot with a few different ways of handling exceptions, and everything was so interdependent that if there was a failure it was best for the script to stop altogether and inform the user.  Naftali Harris has an amazing blog that walks through the many different ways of handling exceptions in Python and the various advantages and disadvantages.  It’s a great read.

except Exception as e:
    logging.error('Exception thrown: %s',e)
    print('Error running script.  Review the log file for more details')

So that’s the code.  Let’s take a quick look at the parameters file below.  It’s very straightforward.  Keep in mind both the bucket and prefix parameters are only required when using the --s3 option.  Here are some details on the other options:

  • The tenantname attribute is the DNS name of the Azure AD tenant being queried.
  • The resource attribute specifies the resource the access token will be used for.  If you’re going to be hitting the MS Graph API, more than likely it will be https://graph.microsoft.com
  • The endpoint attribute specifies the endpoint the request is being made to, including any query strings you plan on using
  • The clientid_param and clientsecret_param attributes are the AWS Systems Manager Parameter Store parameter names that hold the client id and client secret the script was provisioned with from Azure AD
  • The q_param attribute is an array of key value pairs intended to store OData query strings
  • The aws_region attribute is the region the S3 bucket and Parameter Store data is stored in
  • The filename attribute is the name you want to set for the file the script will produce
{
    "parameters":{
        "tenantname": "mytenant.com",
        "resource": "https://graph.microsoft.com",
        "endpoint": "https://graph.microsoft.com/beta/auditLogs/signIns",
        "clientid_param":"myclient_id",
        "clientsecret_param":"myclient_secret",
        "q_param":{"$filter":"createdDateTime gt 2019-01-09"},
        "aws_region":"us-east-1",
        "filename":"sign_in_logs.json",
        "bucket":"mybucket",
        "prefix":"myprefix"
    }
}

Now that the script has been covered, let’s see it in action.  First I’m going to demonstrate how it handles paging by querying the MS Graph API endpoint that lists the users in the directory.  I’m going to append the $select query parameter and set it to return just the user’s id to make the output simpler, and set the $top query parameter to one to limit the results to one user per page.  The endpoint looks like this: https://graph.microsoft.com/beta/users?$top=1&$select=id.

I’ll be running the script from an instance of Cloud9.  The IAM user I’m using with AWS has appropriate permissions to the S3 bucket, KMS CMK, and parameters in the Parameter Store.  I’ve set each of the parameters in the parameters file to the appropriate values for the environment I’ve configured.  I’ll additionally be using the --s3 option.

 


Once the script is complete, it’s time to look at the log file that was created.  As seen below, each step in the script is logged to aid with debugging if something were to fail.  The log also indicates the results were paged.


The output is nicely formatted JSON that could be further transformed or fed into something like Amazon Athena for further analysis (future post maybe?).


Cool right?  My original use case was sign-in logs, so let’s take a glance at that.  Here I’m going to use an endpoint of https://graph.microsoft.com/beta/auditLogs/signIns with an OData filter of createdDateTime gt 2019-01-08, which will limit the data returned to today’s sign-ins.

In the logs we see the script was successfully executed and included the filter specified.


The output is the raw JSON of the sign-ins over the past 24 hours.  For your entertainment purposes I’ve included one of the malicious sign-ins that was captured.  I SO can’t wait to examine this stuff in a BI tool.


Well that’s it folks.  It may be ugly, but it works!  This was a fun activity to undertake as a first stab at making something useful in Python.  I especially enjoyed the lack of documentation available on this integration.  It really made me dive deep and learn things I probably wouldn’t have if there were a billion examples out there.

I’ve pushed the code to Github so feel free to muck around with it to your hearts content.