Welcome back folks!
In this post I’ll be continuing my series on how Azure Monitor can be used to visualize log data generated by other cloud services. In my last post I covered the challenges that multicloud brings and what Azure can do to help with it. I also gave an overview of Azure Monitor and covered the design of the demo I put together and will be walking through in this post. Please take a read through that post if you haven’t already. If you want to follow along, I’ve put the solution up on Github.
Let’s quickly review the design of the solution.

This solution uses some simple Python code to pull information about the usage of AWS IAM User access key IDs and secret keys from an AWS account. The code runs in an AWS Lambda function, and the Azure Log Analytics Workspace ID and key are stored in the Lambda's environment variables, encrypted with an AWS KMS key. The data is pulled from the AWS API using the Boto3 SDK and transformed to JSON. It's then delivered to the HTTP Data Collector API, which places it into the Log Analytics Workspace. From there, it becomes available to Azure Monitor to query and visualize.
Setting up an Azure environment for this integration is very simple. You'll need an active Azure subscription. If you don't have one, you can set up a free Azure account to play around. Once you're set with the Azure subscription, you'll need to create an Azure Log Analytics Workspace. Instructions for that can be found in this Microsoft article. After the workspace has been set up, you'll need to get the workspace ID and key as referenced in the Obtain workspace ID and key section of this Microsoft article. You'll use this workspace ID and key to authenticate to the HTTP Data Collector API.
If you have a sandbox AWS account and would like to follow along, I've included a CloudFormation template that will set up the AWS environment. You'll need an AWS account with sufficient permissions to run the template and provision the resources. Prior to running the template, you'll need to zip up lambda_function.py and put it in an S3 bucket you have write access to. When you run the template you'll be prompted to provide the S3 bucket name, the name of the ZIP file, the Log Analytics Workspace ID and key, and the name you want the API to assign to the log in the workspace.
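If you'd rather script the packaging and upload step, a minimal sketch using Python's zipfile module and boto3 might look like the following. The bucket name and object key shown are placeholders, not values from the template.
import zipfile

import boto3

# Package the function code into a ZIP archive
with zipfile.ZipFile('lambda_function.zip', 'w', zipfile.ZIP_DEFLATED) as archive:
    archive.write('lambda_function.py')

# Upload the archive to an S3 bucket you have write access to (placeholder names)
s3 = boto3.client('s3')
s3.upload_file('lambda_function.zip', 'my-sandbox-bucket', 'lambda_function.zip')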
The Python code backing the solution is pretty simple. It uses all standard Python modules except for the boto3 module used to interact with AWS.
import json
import logging
import re
import csv
import boto3
import os
import hmac
import base64
import hashlib
import datetime
from io import StringIO
from datetime import datetime
from botocore.vendored import requests
The first function in the code parses the ARN (Amazon Resource Name) to extract the AWS account number. This information is later included in the log data written to Azure.
# Parse the IAM User ARN to extract the AWS account number
def parse_arn(arn_string):
    acct_num = re.findall(r'(?<=:)[0-9]{12}', arn_string)
    return acct_num[0]
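For example, given a hypothetical user ARN (the account number below is made up), the lookbehind regex pulls out the 12-digit account number:
example_arn = 'arn:aws:iam::123456789012:user/example-user'
print(parse_arn(example_arn))  # prints 123456789012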
The second function uses the strftime method to transform the timestamp returned from the AWS API into a format the Azure Monitor API will recognize as a timestamp, so that the corresponding field for each record in the Log Analytics Workspace is created as a datetime type.
# Convert timestamp to one more compatible with Azure Monitor
def transform_datetime(awsdatetime):
    transf_time = awsdatetime.strftime("%Y-%m-%dT%H:%M:%S")
    return transf_time
The next function queries the AWS API for a listing of the AWS IAM Users set up in the account and creates a dictionary object representing each user. Each of those objects is added to a list, which the function returns.
# Query for a list of AWS IAM Users
def query_iam_users():
    todaydate = (datetime.now()).strftime("%Y-%m-%d")
    users = []
    client = boto3.client(
        'iam'
    )
    paginator = client.get_paginator('list_users')
    response_iterator = paginator.paginate()
    for page in response_iterator:
        for user in page['Users']:
            user_rec = {'loggedDate':todaydate,'username':user['UserName'],'account_number':(parse_arn(user['Arn']))}
            users.append(user_rec)
    return users
The query_access_keys function queries the AWS API for a listing of the access keys that have been provisioned for an AWS IAM User, along with the status of those keys and some metrics around their usage. The resulting data is added to a dictionary object, and the object is appended to a list. Each item in the list represents a record for an AWS access key ID.
# Query for a list of access keys and information on access keys for an AWS IAM User
def query_access_keys(user):
    keys = []
    client = boto3.client(
        'iam'
    )
    paginator = client.get_paginator('list_access_keys')
    response_iterator = paginator.paginate(
        UserName = user['username']
    )
    # Get information on access key usage
    for page in response_iterator:
        for key in page['AccessKeyMetadata']:
            response = client.get_access_key_last_used(
                AccessKeyId = key['AccessKeyId']
            )
            # Sanitize the key before sending it along for export
            sanitizedacctkey = key['AccessKeyId'][:4] + '...' + key['AccessKeyId'][-4:]
            # Create a new dictionary object with access key information
            if 'LastUsedDate' in response.get('AccessKeyLastUsed'):
                key_rec = {'loggedDate':user['loggedDate'],'user':user['username'],'account_number':user['account_number'],
                           'AccessKeyId':sanitizedacctkey,'CreateDate':(transform_datetime(key['CreateDate'])),
                           'LastUsedDate':(transform_datetime(response['AccessKeyLastUsed']['LastUsedDate'])),
                           'Region':response['AccessKeyLastUsed']['Region'],'Status':key['Status'],
                           'ServiceName':response['AccessKeyLastUsed']['ServiceName']}
                keys.append(key_rec)
            else:
                key_rec = {'loggedDate':user['loggedDate'],'user':user['username'],'account_number':user['account_number'],
                           'AccessKeyId':sanitizedacctkey,'CreateDate':(transform_datetime(key['CreateDate'])),'Status':key['Status']}
                keys.append(key_rec)
    return keys
The next two functions contain the code that creates and submits the request to the Azure Monitor API. The product team was awesome enough to provide some sample code for this part in the public documentation. The sample is written for Python 2, but it only required a few small changes to make it compatible with Python 3.
Let’s first talk about the build_signature function. At this time the API authenticates requests by signing them with the Log Analytics Workspace ID and key. In short, this means you have two sets of shared keys per workspace, so treat the workspace as your authorization boundary and prioritize proper key management (in other words, use a different workspace for each workload, track key usage, and rotate the keys as your internal policies require).
Breaking down the code below, the string to be hashed includes the HTTP method, the length of the request content, the content type, a custom x-ms-date header, and the REST resource endpoint. The string is converted to a bytes object, an HMAC-SHA256 hash of it is computed using the base64-decoded workspace key, and the result is base64 encoded. The function returns the authorization header built from the workspace ID and that encoded hash.
def build_signature(customer_id, shared_key, date, content_length, method, content_type, resource):
    x_headers = 'x-ms-date:' + date
    string_to_hash = method + "\n" + str(content_length) + "\n" + content_type + "\n" + x_headers + "\n" + resource
    bytes_to_hash = bytes(string_to_hash, encoding="utf-8")
    decoded_key = base64.b64decode(shared_key)
    encoded_hash = base64.b64encode(
        hmac.new(decoded_key, bytes_to_hash, digestmod=hashlib.sha256).digest()).decode()
    authorization = "SharedKey {}:{}".format(customer_id,encoded_hash)
    return authorization
Not much needs to be said about the post_data function beyond that it uses the Python requests module to post the log content to the API. Take note of the limits on the data that can be included in the body of the request. The key takeaway is that if you plan on pushing a lot of data to the API, you’ll need to chunk your data to fit within those limits (see the sketch after the function below).
def post_data(customer_id, shared_key, body, log_type):
    method = 'POST'
    content_type = 'application/json'
    resource = '/api/logs'
    rfc1123date = datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S GMT')
    content_length = len(body)
    signature = build_signature(customer_id, shared_key, rfc1123date, content_length, method, content_type, resource)
    uri = 'https://' + customer_id + '.ods.opinsights.azure.com' + resource + '?api-version=2016-04-01'
    headers = {
        'content-type': content_type,
        'Authorization': signature,
        'Log-Type': log_type,
        'x-ms-date': rfc1123date
    }
    response = requests.post(uri, data=body, headers=headers)
    if (response.status_code >= 200 and response.status_code <= 299):
        print("Accepted")
    else:
        print("Response code: {}".format(response.status_code))
Last but not least, we have the lambda_handler function, which brings everything together. It first gets a listing of users, loops through each user to gather information about their access keys and usage, creates a log record for each key, converts the list from Python objects to a JSON string, and posts it to the API. If the content is successfully delivered, the Lambda’s log will note that it was accepted.
def lambda_handler(event, context):

    # Enable logging to console
    logging.basicConfig(level=logging.INFO,format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')

    try:
        # Initialize empty records array
        key_records = []

        # Retrieve list of IAM Users
        logging.info("Retrieving a list of IAM Users...")
        users = query_iam_users()

        # Retrieve list of access keys for each IAM User and add to record
        logging.info("Retrieving a listing of access keys for each IAM User...")
        for user in users:
            key_records.extend(query_access_keys(user))

        # Prepare data for sending to Azure Monitor HTTP Data Collector API
        body = json.dumps(key_records)
        post_data(os.environ['WorkspaceId'], os.environ['WorkspaceKey'], body, os.environ['LogName'])

    except Exception as e:
        logging.error("Execution error", exc_info=True)
Once the data is delivered, it will take a few minutes for it to be processed and appear in the Log Analytics Workspace. In my tests it only took around 2-5 minutes, but I wasn’t writing much data to the API. After the data is processed, you’ll see a new entry under the listing of Custom Logs in the Log Analytics Workspace. The entry will be the log name you picked with _CL appended to the end. Expanding the entry will display the columns that were created based upon the log entries. Note that the columns created from the data you passed will end with an underscore and a character denoting the data type.

Now that the data is in the workspace, I can start querying it and creating some visualizations. Azure Monitor uses the Kusto Query Language (KQL). If you’ve ever created queries in Splunk, the language will feel familiar.
The log I created in AWS and pushed to the API has the following schema. Note the underscore followed by a character denoting the column data type that Azure Monitor appends to each field name.
- loggedDate_s (string) – The date the Lambda ran
- user_s (string) – The AWS IAM User the key belongs to
- account_number_s (string) – The AWS Account number the IAM Users belong to
- AccessKeyId_s (string) – The id of the access key associated with the user, sanitized to show just the first 4 and last 4 characters
- CreateDate_t (timestamp) – The date and time when the access key was created
- LastUsedDate_t (timestamp) – The date and time the key was last used
- Region_s (string) – The region where the access key was last used
- Status_s (string) – Whether the key is enabled or disabled
- ServiceName_s (string) – The AWS service where the access key was last used
In addition to what I’ve pushed, Azure Monitor adds a TimeGenerated field to each record, which is the time the log entry was sent to Azure Monitor. You can override this behavior and provide a field for Azure Monitor to use instead if you like (see here). There are also some other miscellaneous fields inherited from the schema the API draws from, such as TenantId and SourceSystem, which in this case is populated with RestAPI.
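As an example of that override, the HTTP Data Collector API accepts a time-generated-field header naming a field in your JSON payload that contains an ISO 8601 timestamp. A minimal sketch of how the headers dictionary in post_data could be adjusted to use the key creation date (every record in this solution includes CreateDate):
headers = {
    'content-type': content_type,
    'Authorization': signature,
    'Log-Type': log_type,
    'x-ms-date': rfc1123date,
    # Use each record's CreateDate field for TimeGenerated instead of the ingestion time
    'time-generated-field': 'CreateDate'
}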
Since my personal AWS environment is quite small and my AWS IAM Users’ usage is very limited, my data sets aren’t huge. To address this I created a number of IAM Users with access keys for the purpose of this blog. I’m getting that out of the way so my AWS friends don’t hate on me. 🙂
One of the core best practices in key management with shared keys is to ensure you rotate them. The first data point I wanted to extract was which keys in my AWS account were over 90 days old. To do that I put together the following query:
AWS_Access_Key_Report_CL
| extend key_age = datetime_diff('day',now(),CreateDate_t)
| project Age=key_age,AccessKey=AccessKeyId_s, User=user_s
| where Age > 90
| sort by Age
Let’s walk through the query. The first line tells the query engine to run the query against the AWS_Access_Key_Report_CL log. The next line creates a new field containing the age of the key by calculating the number of days between the key’s creation date and today’s date. The line after that instructs the engine to pull back only the key age I just calculated along with the AccessKeyId_s and user_s fields. The results are then culled down to only the records where the key age is greater than 90 days, and finally the results are sorted by the age of the key.

Looks like it’s time to rotate that access key in use by Azure AD. 🙂
I can then pin this query to a new shared dashboard for other users to consume. Cool and easy right? How about we create something visual?
Looking at the trends in access key creation can provide some valuable insight into what is the norm and what is not. Let’s take a look at the creation metrics for the keys that still exist in the account, whether enabled or disabled. For that I’m going to use the following query:
AWS_Access_Key_Report_CL
| make-series AccessKeys=count() default=0 on CreateDate_t from datetime(2019-01-01) to datetime(2020-01-01) step 1d
In this query I’m using the make-series operator to count the number of access keys created each day, assigning a default value of 0 to days on which no keys were created. The result of the query isn’t very useful when looking at it in tabular form.

By selecting Line from the drop-down box, I can transform the data into a line graph that shows spikes in key creation. If this were real data, the spike in key creations on 6/30 might warrant investigation.

I put together a few other visuals and tables and created a custom dashboard like the one below. Creating the dashboard took about an hour or so, with much of the time invested in figuring out the query language.

What you’ve seen here is a demonstration of the power and simplicity of Azure Monitor. By adding a simple-to-use API, Microsoft has greatly increased the agility of the tool, allowing it to become a single pane of glass for monitoring across clouds. It’s also worth noting that Microsoft’s business intelligence tool, Power BI, has direct integration with Azure Log Analytics, so you can pull that log data into Power BI to perform more in-depth analysis and create even richer visualizations.
Well folks, I hope you’ve found this series valuable. I really enjoyed creating it and already have a few additional use cases in mind. Make sure to follow me on GitHub, as I’ll be posting all of the code and solutions I put together there for your general consumption.
Have a great day!