Setting up a Python Coding Environment

Welcome back folks!

Like many of my fellow veteran men and women in tech, I've been putting in the effort to evolve my skill set and embrace the industry's shift to a more code-focused world.  Those of us who came from the "rack and stack" generation did some scripting here and there where it was workable, using VB, Bash, Batch, Perl, or the many other languages that have had their time in the limelight.  The concept of a development lifecycle and code repository typically consisted of a few permanently open Notepad instances or, if you were really fancy, scripts saved to a file share with files labeled v1, v2, and so on.  Times have changed and we must change with them.

Over the past two years I’ve done significantly more coding.  These efforts ranged from creating infrastructure using Microsoft ARM (Azure Resource Manager) and AWS CloudFormation templates to embracing serverless with Azure Functions and AWS Lambdas.  Through this process I’ve quickly realized that the toolsets available to manage code and its lifecycle have evolved and gotten more accessible to us “non-developers”.

I'm confident there are others out there coming from a similar background, and I wanted to put together a post that might help them begin or move forward with their own journeys.  So for this post I'm going to cover how to set up a Visual Studio Code environment on a Mac for developing code using Python.

With the introduction done, let’s get to it!

First up you'll need to get Python installed.  The Windows installation is pretty straightforward and can be downloaded here.  Macs are a bit trickier because OS X ships with Python 2.7 by default.  You can validate this by running python --version from the terminal.  What this means is you'll need to install Python 3.7 in parallel.  Thankfully the process is documented heavily by others who are far more knowledgeable than me.  William Vincent has some wonderful instructions.

Once Python 3.7 is installed, we'll want to set up our IDE (integrated development environment).  I'm partial to VSC (Visual Studio Code) because it's free, cross-platform, and simple to use.  Installation is straightforward so I won't be covering those steps.

Well you have your interpreter and your IDE but you need a good solution to store and track changes to the code you’re going to put together.  Gone are the days of managing it by saving copies (if you even got that far) to your desktop and arrived are the days of Git.  You can roll your own Git service or use a managed service.  Since I’m a newbie, I’ve opted to go mainstream and simple with Github.  A free account should more than suffice unless you’re planning on doing something that requires a ton of collaboration.

Now that your account is set up, let's go through the process of creating a simple Python script, creating a new repository, committing the code, and pushing it up to Github.  We'll first want to create a new workspace in VSC.  One of the benefits of a workspace is you can configure settings on a per-project basis vs modifying the settings of VSC as a whole.

To do this open VSC and create a new empty file using the New file shortcut as seen below.

[Screenshot: creating a new file in VSC]

Once the new window is opened, you can then choose Save Workspace as from the File context menu.  Create a new directory for the project (I’ll refer to this as the project directory) and save the workspace to that folder.  Create a subfolder under the workspace (I’ll refer to this as the working directory).

We'll now want to initialize the local repository.  We can do this by using the shortcut Command+Shift+P which will open the command palette in VSC.  Search for Git, choose Git: Initialize Repository, and select the working directory.  You'll be prompted to add the folder to the workspace which you'll want to do.

[Screenshot: Git: Initialize Repository in the command palette]

VSC will begin tracking changes to files you put in the folder and the Source Control icon will now be active.

[Screenshot: the Source Control icon now active]

Let's now save the new file we created as hello-world.py.  The .py extension tells VSC that this is Python code and you'll get a number of benefits such as IntelliSense.  If you navigate back to the Source Control view you'll see there are uncommitted changes from the new hello-world.py file.  Let's add the classic line of code to print Hello World.  To execute the code we'll choose the Start Without Debugging option from the Debug context menu.
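
The whole file is just the classic one-liner:

print("Hello World")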

[Screenshot: running hello-world.py with Start Without Debugging]

The built-in Python libraries will serve you well, but there are a TON of great libraries out there you'll most certainly want to use.  Wouldn't it be wonderful if you could have separate instances of the interpreter with specific libraries?  In comes the awesomeness of virtual environments.  Using them isn't required but it is best practice in the Python world and will make your life a lot easier.

Creating a new virtual environment is easy.

  1. Open a new terminal in Visual Studio Code, navigate to your working directory, and create a new folder named envs.
  2. Create the new virtual environment using the command below.
    python3 -m venv ./envs

You'll now be able to select the virtual environment for use in the bottom left hand corner of VSC as seen below.

[Screenshot: selecting the virtual environment interpreter in VSC]

After you select it, close out the terminal window and open a new one in VSC by selecting New Terminal from the Terminal context menu.  You’ll notice the source command is run to select the virtual environment.  You can now add new libraries using pip (Python’s package manager) as needed and they will be added to the virtual environment you created.
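
If you ever want to double-check which interpreter is actually in use, a quick sanity check (my own habit, not something VSC requires) is to run a couple of lines of Python from that terminal:

import sys
print(sys.executable)  # path to the interpreter currently in use
print(sys.prefix)      # points into the envs directory when the virtual environment is active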

If you go back to the source control menu you'll notice there are a whole bunch of new files.  Essentially Git is trying to track all of the files within the virtual environment.  You'll want to have Git ignore them by creating a file named .gitignore.  Within the file we'll add two entries, one for the ignore file and one for the virtual environment directory (and a few others if you have some hidden files like Mac's .DS_Store).
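
For reference, mine ended up looking something like the following (the envs entry matches the folder created for the virtual environment earlier; tweak the list to fit your own setup):

.gitignore
envs/
.DS_Store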

[Screenshot: .gitignore contents]

Let’s now commit the new file hello-world.py to the local repository.  Accompanying the changes, you’ll also add a message about what has changed in the code.  There is a whole art around good commit messages which you can research on the web.  Most of my stuff is done solo, so it’s simple short messages to remind me of what I’ve done.   You can make your Git workflows more sophisticated as outlined here, but for very basic development purposes a straight commit to the master works.

Now that we have the changes committed to our local repository, let’s push them up to a new remote repository in Github.  First you’ll want to create an empty repository.  To add data to the repo, you’ll need to authenticate.  I’ve added two-factor authentication to my Github account, which it doesn’t look like Visual Studio Code supports at this time.  To work around the limitation you can create personal access tokens.  Not a great solution, but it will suffice as long as you practice good key management and create the tokens with a limited authorization scope and limit their lifetime.

Once your repository is set and you've created your access token, you can push to the remote repository.  In Visual Studio Code run Command+Shift+P to open the command palette and find the Git: Add Remote command to add the repository.  Provide a name (I simply used origin since that seems to be the common convention) and provide the URL of your repository.  You'll then be prompted to authenticate.  Provide your Github username and the personal access token for the password.  Your changes will be pushed to the repository.

There you have it folks!  I’m sure there are better ways to orchestrate this process, but this is what’s working for me.  If you have alternative methods and shortcuts, I’d love to hear about them.

Have a great week!

Passing the AZ-300

Hello all!

Over the past year I’ve been buried in Amazon Web Services (AWS), learning the platform, and working through the certification paths.  As part of my new role at Microsoft, I’ve been given the opportunity to pursue the Microsoft Certified: Azure Solutions Architect Expert.  In the world of multi-cloud who doesn’t want to learn multiple platforms? 🙂

The Microsoft Certified: Azure Solutions Architect Expert certification is part of Microsoft’s new set of certifications.  If you’re already familiar with the AWS Certification track, the new Microsoft track is very similar in that it has three paths.  These paths are Developer, Administrator, and Architect.  Each path consists of two exams, again similar to AWS’s structure of Associate and Professional.

Even though the paths are similar the focus and structure of the first tier of exams for the Microsoft exams differ greatly from the AWS Associate exams.  The AWS exams are primarily multiple choice while the Microsoft first level of exams consists of multiple choice, drag and drop, fill in the blank, case studies, and emulated labs.  Another difference between the two is the AWS exams focus greatly on how the products work and when and where to use each product.  The Microsoft first level exams focus on those topics too, but additionally test your ability to implement the technologies.

When I started studying for the AZ-300 – Microsoft Azure Architect Technologies two weeks ago, I had a difficult time finding good study materials because the exam is so new and has changed a few times since Microsoft released it last year.  Google searches brought up a lot of illegitimate study materials (brain dumps) but not much in the way of helpful materials beyond the official Azure documentation.  After passing the exam this week, I wanted to give back to the community and provide some tips, links, and the study guide I put together to help prepare for the exam.

To prepare for an exam I have a standard routine.

  1. I first start with referencing the official exam requirements.
  2. From there, I take one or two on-demand training classes.  I watch each lesson in a module at 1.2x speed (1x always seems too slow, which I think is largely due to living in Boston where we tend to talk very quickly).  I then go back through each module at 1.5x to 2.0x taking notes on paper.  I then type up the notes and organize them into topics.
  3. Once I’m done with the training I’ll usually dive deep into the official documentation on the subjects I’m weak on or that I find interesting.
  4. During the entirety of the learning process I will build out labs to get a feel for implementation and operation of the products.
  5. I wrap it up by adding the additional learnings from the public documentation and labs into my digital notes.  I then pull out the key concepts from the digital notes and write up flash cards to study.
  6. Practice makes perfect and for that I will leverage legitimate practice exams (braindumps make the entire exercise a pointless waste of time and degrade the value of the certification) like those offered from MeasureUp.

Yes, I’m a bit nuts about my studying process but I can assure you it works and you will really learn the content and not just memorize it.

From a baseline perspective, my experience with Microsoft's cloud services was primarily in Azure Active Directory and Azure Information Protection.  For Azure I had built some virtual networks with virtual machines in the past, but nothing more than that.  I have a pretty solid foundation in AWS and cloud architectural patterns which definitely came in handy since the base offerings of each of the cloud providers are fairly similar.

For on-demand training A Cloud Guru has always been my go-to.  Unfortunately, their Azure training options aren't as robust as the AWS offerings, but Nick Coyler's AZ-300 course is solid.  It CANNOT be your sole source of material but as with most training from the site, it will give you the 10,000 ft view.  Once I finished with A Cloud Guru, I moved on to Udemy.  Scott Duffy's AZ-300 course does not have close to the detail of Nick's course, but provides a lot more hands-on activities that will get you working with the platform via the GUI and the CLI.  Add both courses together and you'll cover a good chunk of the exam.

The courses themselves are not sufficient to pass the exam.  They will give you the framework, but docs.microsoft.com is your best friend.  There is the risk you can dive deeper into a product than you need to, but reference back to the exam outline to keep yourself honest.  Hell, worst case scenario is you learn more than you need to learn. 🙂  Gregor Suttie put together a wonderful course outline with links to the official documentation that will help you target key areas of the public documentation.

Perhaps most importantly, you need to lab.  Then lab again.  Lab once more, and then another time.  Run through the Quickstarts and Tutorials on docs.microsoft.com.  Get your hands dirty with the CLI, PowerShell, and the Portal.  You don't have to be an expert, but you'll want to understand the basics and the general syntax of both the CLI and PowerShell.  The exam includes fully interactive labs where you'll need to implement the products given a set of requirements.

Finally, I’ve added the study guides I put together to my github.  I make no guarantees that the data is up to date or even that there aren’t mistakes in some of the content.  Use it as an artifact to supplement your studies as you prepare your own study guide.

Summing it up, don't just look at the exam as a piece of virtual paper.  Look at it as an opportunity to learn and grow your skill set.  Take the time to not just memorize, but understand and apply what you learn.  Be thankful you work in an industry where things change, giving you the opportunity to learn something new and exercise that big brain of yours.

I wish you the best of luck in your studies and if you have additional materials or a website you’ve found helpful, please comment below.

Thanks!

Some updates…

Hello folks!  Life has been busy with some wonderful work and some big career changes.  As some of you may know, I moved on from my role at the Federal Reserve last summer.  While I loved the job, the people, and the organization, I wanted to try something new and different.

I was lucky enough to have the opportunity to work for one of the big three cloud providers in a security-focused professional services role supporting public sector customers.  The role was amazing, I learned a TON about a cloud platform I had never worked with, interacted with some of the smartest people I’ve ever met, and had the chance to help architect and implement some really awesome environments for some stellar customers.

Unfortunately the travel started taking a toll on my personal life and family time.  I made the tough decision to move on and find something that was a bit more regional and required less travel.  I struck the lottery once more and in April started as a Cloud Solution Architect at Microsoft focusing on Infrastructure and Security in Azure.  I've once again been drinking from multiple firehoses and learning my second cloud platform.  It's been a ball so far and I'm extremely excited to learn and contribute to Microsoft's mission to empower every person and every organization on the planet to achieve more!

Expect a lot more activity on this blog as I share my experiences and my learnings with the wider tech community.  It’s going to be a fun ride!

Capturing and Visualizing Office 365 Security Logs – Part 2

Hello again my fellow geeks.

Welcome to part two of my series on visualizing Office 365 security logs.  In my last post I walked through the process of getting the sign-in and security logs and provided a link to some Lambdas I put together to automate pulling them down from Microsoft Graph.  Recall that the Lambda stores the files in raw format (with a small bit of transformation on the time stamps) into Amazon S3 (Simple Storage Service).  For this demonstration I modified the parameters for the Lambda to download the last 30 days of sign-in logs and to store them in an S3 bucket I use for blog demos.

When the logs are pulled from Microsoft Graph they come down in JSON (JavaScript Object Notation) format.  Love it or hate it, JSON is the common standard for exchanging information these days.  The schema for the JSON representation of the sign-in logs is fairly complex and very nested because there is a ton of great information in there.  Thankfully Microsoft has done a wonderful job of documenting the schema.  Now that we have the logs and the schema we can start working with the data.

When I first started this effort I had put together a Python function which transformed the files into a CSV using pipe delimiters.  As soon as I finished the function I wondered if there was an alternative way to handle it.  In comes Amazon Athena to the rescue with its Openx-JsonSerDe library.  After reading through a few blogs (great AWS blog here), StackOverflow posts, and the official AWS documentation I was ready to put something together myself.  After some trial and error I put together a working DDL (Data Definition Language) statement for the data structure.  I’ve made the DDLs available on Github.

Once I had the schema defined, I created the table in Athena.  The official AWS documentation does a fine job explaining the few clicks that are provided to create a table, so I won’t re-create that here.  The DDLs I’ve provided you above will make it a quick and painless process for you.
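
If you're curious what one of those DDLs looks like, here's a heavily trimmed sketch rather than the real thing: only a handful of the sign-in attributes are shown, the S3 location is a placeholder, and the full nested schema lives in the Github repo.

CREATE EXTERNAL TABLE IF NOT EXISTS azuread.tbl_signin_demo (
  value array<struct<
    id:string,
    createddatetime:string,
    userprincipalname:string,
    ipaddress:string,
    status:struct<errorcode:int,failurereason:string,additionaldetails:string>
  >>
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://mybucket/signinlogs/';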

Let's review what we've done so far.  We've set up a recurring job that is pulling the sign-in and audit logs via the API and is dumping all that juicy data into cheap object storage to which we can apply lifecycle policies.  We've then defined the schema for the data and have made it available via standard SQL queries.  All without provisioning a server and for pennies on the dollar.  Not too shabby!

At this point you can use your analytics tool of choice whether it be QuickSight, Tableau, Power BI, or the many other tools that have flooded the market over the past few years.  Since I don't make any revenue from these blog posts, I like to go the cheap and easy route of using Amazon QuickSight.

After completing the initial setup of QuickSight I was ready to go.  The next step was to create a new data set.  For that I clicked the Manage Data button and selected New Data Set.

[Screenshot: QuickSight Manage Data / New Data Set]

On the Create a Data Set screen I selected the Athena option and created a name for the data source.

[Screenshot: creating the Athena data source]

From there I selected the database in Athena which for me was named azuread.  The tables within the database are then populated and I chose the tbl_signin_demo which points to the test S3 bucket I mentioned previously.

[Screenshot: selecting the azuread database and tbl_signin_demo table]

Due to the complexity of the data structure I opted to use a custom SQL query.  There is no reason why you couldn’t create the table I’m about to create in Athena and then connect to that table instead to make it more consumable for a wider array of users.  It’s really up to you and I honestly don’t know what the appropriate “big data” way of doing it is.  Either way, those of you with real SQL skills may want to look away from this query lest you experience a Raiders of The Lost Ark moment.

[Image: Indiana Jones]

You were warned.

SELECT
    records.id,
    records.createddatetime,
    records.userprincipalname,
    records.userDisplayName,
    records.userid,
    records.appid,
    records.appdisplayname,
    records.ipaddress,
    records.clientappused,
    records.mfadetail.authdetail AS mfadetail_authdetail,
    records.mfadetail.authmethod AS mfadetail_authmethod,
    records.correlationid,
    records.conditionalaccessstatus,
    records.appliedconditionalaccesspolicy.displayname AS cap_displayname,
    array_join(records.appliedconditionalaccesspolicy.enforcedgrantcontrols,' ') AS cap_enforcedgrantcontrols,
    array_join(records.appliedconditionalaccesspolicy.enforcedsessioncontrols,' ') AS cap_enforcedsessioncontrols,
    records.appliedconditionalaccesspolicy.id AS cap_id,
    records.appliedconditionalaccesspolicy.result AS cap_result,
    records.originalrequestid,
    records.isinteractive,
    records.tokenissuername,
    records.tokenissuertype,
    records.devicedetail.browser AS device_browser,
    records.devicedetail.deviceid AS device_id,
    records.devicedetail.iscompliant AS device_iscompliant,
    records.devicedetail.ismanaged AS device_ismanaged,
    records.devicedetail.operatingsystem AS device_os,
    records.devicedetail.trusttype AS device_trusttype,
    records.location.city AS location_city,
    records.location.countryorregion AS location_countryorregion,
    records.location.geocoordinates.altitude,
    records.location.geocoordinates.latitude,
    records.location.geocoordinates.longitude,
    records.location.state AS location_state,
    records.riskdetail,
    records.risklevelaggregated,
    records.risklevelduringsignin,
    records.riskstate,
    records.riskeventtypes,
    records.resourcedisplayname,
    records.resourceid,
    records.authenticationmethodsused,
    records.status.additionaldetails,
    records.status.errorcode,
    records.status.failurereason
FROM "azuread"."tbl_signin_demo"
CROSS JOIN (UNNEST(value) AS t(records))

This query will de-nest the data and give you a detailed (possibly extremely large depending on how much data you are storing) parsed table. I was now ready to create some data visualizations.

The first visual I made was a geospatial visual using the location data included in the logs filtered to failed logins. Not surprisingly our friends in China have shown a real interest in my and my wife’s Office 365 accounts.

[Screenshot: geospatial visual of failed logins]

Next up I was interested in seeing if there were any patterns in the frequency of the failed logins.  For that I created a simple line chart showing the number of failed logins per user account in my tenant.  Interestingly enough the new year meant back to work for more than just you and me.

[Screenshot: line chart of failed logins per user account]

Like I mentioned earlier Microsoft provides a ton of great detail in the sign-in logs.  Beyond just location, they also provide reasons for login failures.  I next created a stacked bar chart to show the different reasons for failed logins by user.  I found the blocked sign-ins by malicious IPs interesting.  It's nice to know that is being tracked and taken care of.

[Screenshot: stacked bar chart of failure reasons by user]

Failed logins are great, but the other thing I was interested in is successful logins and user behavior.  For this I created a vertical stacked bar chart that displayed the successful logins by user by device operating system (yet more great data captured in the logs).  You can tell from the bar on the right my wife is a fan of her Mac!

[Screenshot: successful logins by user and device operating system]

As I gather more data I plan on creating some more visuals, but this was great to start.  The geo-spatial one is my favorite.  If you have access to a larger data set with a diverse set of users your data should prove fascinating.  Definitely share any graphs or interesting data points you end up putting together if you opt to do some of this analysis yourself.  I’d love some new ideas!

That will wrap up this series.  As you’ve seen the modern tool sets available to you now can do some amazing things for cheap without forcing you to maintain the infrastructure behind it.  Vendors are also doing a wonderful job providing a metric ton of data in their logs.  If you take the initiative to understand the product and the data, you can glean some powerful information that has both security and business value.  Even better, you can create some simple visuals to communicate that data to a wide variety of audiences making it that much more valuable.

Have a great weekend!

Capturing and Visualizing Office 365 Security Logs – Part 1

Welcome back again my fellow geeks!

I’ve been busy over the past month nerding out on some pet projects.  I thought it would be fun to share one of those pet projects with you.  If you had a chance to check out my last series, I walked through my first Python experiment which was to write a re-usable tool that could be used to pull data from Microsoft’s Graph API (Microsoft Graph).

For those of you unfamiliar with Microsoft Graph, it's the RESTful API (application programming interface) that is used to interact with Microsoft cloud offerings such as Office 365 and Azure.  You've probably been interacting with it without even knowing it through the many PowerShell modules Microsoft has released to programmatically interact with those services.

Among the many resources which can be accessed through Microsoft Graph are the Azure AD (Active Directory) security and audit reports.  If you're using Office 365, Microsoft Azure, or simply Azure AD as an identity platform for SSO (single sign-on) to third-party applications like SalesForce, these reports provide critical security data.  You're going to want to capture them, store them, and analyze them.  You're also going to have to account for the limited window during which Microsoft makes these logs available.

The challenge is they are not available via the means by which logs have traditionally been captured on-premises, such as using syslogd, installing a SIEM agent, or even Windows Event Log Forwarding.  Instead you'll need to take a step forward in evolving the way you're used to doing things.  This is what moving to the cloud is all about.

Microsoft allows you to download the logs manually via the Azure Portal GUI (graphical user interface) or capture them by programmatically interacting with Microsoft Graph.  While the former option may work for ad-hoc use cases, it doesn’t scale.  Instead we’ll explore the latter method.

If you have an existing enterprise-class SIEM (Security Information and Event Management) solution such as Splunk, you’ll have an out of box integration.  However, what if you don’t have such a platform, your organization isn’t yet ready to let that platform reach out over the Internet, or you’re interested in doing this for a personal Office 365 subscription?  I fell into the last category and decided it would be an excellent use case to get some experience with Python, Microsoft Graph, and take advantage of some of the data services offered by AWS (Amazon Web Services).   This is the use case and solution I’m going to cover in this post.

Last year I had a great opportunity to dig into operational and security logs to extract useful data to address some business problems.  It was my first real opportunity to examine large amounts of data and to create different visualizations of that data to extract useful trends about user and application behavior.  I enjoyed the hell out of it and thought it would be fun to experiment with my own data.

I decided that my first use case would be Office 365 security logs.  As I covered in my last series my wife’s Office 365 account was hacked.  The damage was minor as she doesn’t use the account for much beyond some crafting sites (she’s a master crocheter as you can see from the crazy awesome Pennywise The Clown she made me for Christmas).

[Image: the crocheted Pennywise The Clown]

The first step in the process was determining an architecture for the solution.  I gave myself a few requirements:

  1. The solution must not be dependent on my home lab infrastructure
  2. Storage for the logs must be cheap and readily available
  3. The credentials used in my Python code needs to be properly secured
  4. The solution must be automated and notify me of failures
  5. The data needs to be available in a form that it can be examined with an analytics solution

Based upon the requirements I decided to go the serverless (don’t hate me for using that tech buzzword 🙂 ) route.  My decisions were:

  • AWS Lambda would run my code
  • Amazon CloudWatch Events would be used to trigger the Lambda once a day to download the last 24 hours of logs
  • Amazon S3 (Simple Storage Service) would store the logs
  • AWS Systems Manager Parameter Store would store the parameters my code used leveraging AWS KMS (Key Management Service) to encrypt the credentials used to interact with Microsoft Graph
  • Amazon Athena would hold the schema for the logs and make the data queryable via SQL
  • Amazon QuickSight would be used to visualize the data by querying Amazon Athena

The high level architecture is pictured below.

[Image: high-level architecture diagram]

I had never done a Lambda before so I spent a few days looking at some examples and doing the typical Hello World that we all do when we’re learning something new.  From there I took the framework of Python code I put together for general purpose queries to the Microsoft Graph, and adapted it into two Lambdas.  One Lambda would pull Sign-In logs while the other would pull Audit Logs.  I also wanted a repeatable way to provision the Lambdas to share with others and get some CloudFormation practice and brush up on my very dusty Bash scripting.   The results are located here in one of my Github repos.
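
To give a sense of the shape of those Lambdas, here's a heavily simplified sketch of the sign-in log function.  It is not the actual code in the repo: the tenant name, parameter names, bucket, and prefix are placeholders, and the paging and error handling from the real version are omitted.

import json
import time
import boto3
import requests
from adal import AuthenticationContext

GRAPH_RESOURCE = 'https://graph.microsoft.com'
SIGNIN_ENDPOINT = 'https://graph.microsoft.com/beta/auditLogs/signIns'

def lambda_handler(event, context):
    # Pull the Azure AD application credentials from Parameter Store (names are placeholders)
    ssm = boto3.client('ssm', region_name='us-east-1')
    client_id = ssm.get_parameter(Name='myclient_id', WithDecryption=True)['Parameter']['Value']
    client_secret = ssm.get_parameter(Name='myclient_secret', WithDecryption=True)['Parameter']['Value']

    # Obtain an OAuth 2.0 access token using the client credentials grant
    auth_context = AuthenticationContext('https://login.microsoftonline.com/mytenant.com')
    token = auth_context.acquire_token_with_client_credentials(
        resource=GRAPH_RESOURCE, client_id=client_id, client_secret=client_secret)

    # Ask Microsoft Graph for the last 24 hours of sign-ins
    yesterday = time.strftime('%Y-%m-%d', time.gmtime(time.time() - 86400))
    headers = {'Authorization': 'Bearer {0}'.format(token['accessToken'])}
    params = {'$filter': 'createdDateTime gt ' + yesterday}
    response = requests.get(SIGNIN_ENDPOINT, headers=headers, params=params)
    response.raise_for_status()

    # Drop the raw JSON into S3 (bucket and prefix are placeholders)
    filename = time.strftime('%Y-%m-%d') + '-sign_in_logs.json'
    s3 = boto3.client('s3')
    s3.put_object(Bucket='mybucket', Key='myprefix/' + filename,
                  Body=json.dumps(response.json()))
    return {'statusCode': 200, 'body': 'Wrote ' + filename}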

I'm going to stop here for this post because we've covered a fair amount of material.  Hopefully after reading this post you understand that you have to take a new tack with getting logs for cloud-based services such as Azure AD.  Thankfully the cloud has brought us a whole new toolset we can use to automate the extraction and storage of those logs in a simple and secure manner.

In my next post I’ll walk through how I used Athena and QuickSight to put together some neat dashboards to satisfy my nerdy interests and get better insight into what’s happening on a daily basis with my Office 365 subscription.

See you next post and go Pats!

Using Python to Pull Data from MS Graph API – Part 2

Welcome back my fellow geeks!

In this series I’m walking through my experience putting together some code to integrate with the Microsoft Graph API (Application Programming Interface).  In the last post I covered the logic behind this pet project and the tools I used to get it done.  In this post I’ll be walking through the code and covering what’s happening behind the scenes.

The project consists of three files.  The awsintegration.py file contains functions for the integration with AWS Systems Manager Parameter Store and Amazon S3 using the Python boto3 SDK (Software Development Kit).  Graphapi.py contains two functions.  One function uses Microsoft's Azure Active Directory Library for Python (ADAL) and the other function uses Python's Requests library to make calls to the MS Graph API.  Finally, the main.py file contains the code that brings everything together.  There are a few trends you'll notice with all of the code.  First, it's very simple since I'm a long way from being able to do any fancy tricks; second, I tried to stay away from using too many third-party modules.

Let's first dig into the awsintegration.py module.  In the first few lines below I import the required modules, which include AWS's Boto3 library.

import json
import boto3
import logging

Python has a stellar standard logging module that makes logging to a centralized location across a package a breeze.  The line below configures modules called by the main package to inherit the logging configuration from the main package.  This way I was able to direct anything I wanted to log to the same log file.

log = logging.getLogger(__name__)

This next function uses Boto3 to call AWS Systems Manager Parameter Store to retrieve a secure string.  Be aware that if you’re using Parameter Store to store secure strings the security principal you’re using to make the call (in my case an IAM User via Cloud9) needs to have appropriate permissions to Parameter Store and the KMS CMK.  Notice I added a line here to log the call for the parameter to help debug any failures.  Using the parameter store with Boto3 is covered in detail here.

def get_parametersParameterStore(parameterName,region):
    log.info('Request %s from Parameter Store',parameterName)
    client = boto3.client('ssm', region_name=region)
    response = client.get_parameter(
        Name=parameterName,
        WithDecryption=True
    )
    return response['Parameter']['Value']
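
As an example, using the parameter name and region from the sample parameters file shown later in the post, a call would look something like this:

clientid = get_parametersParameterStore('myclient_id','us-east-1')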

The last function in this module again uses Boto3 to upload the file to an Amazon S3 bucket with a specific prefix.  Using S3 is covered in detail here.

def put_s3(bucket,prefix,region,filename):
    s3 = boto3.client('s3', region_name=region)
    s3.upload_file(filename,bucket,prefix + "/" + filename)

Next up is the graphapi.py module.  In the first few lines I again import the necessary modules as well as the AuthenticationContext module from ADAL.  This module contains the AuthenticationContext class which is going to get the OAuth 2.0 access token needed to authenticate to the MS Graph API.

import json
import requests
import logging
from adal import AuthenticationContext

log = logging.getLogger(__name__)

In the function below an instance of the AuthenticationContext class is created and the acquire_token_with_client_credentials method is called.   It uses the OAuth 2.0 Client Credentials grant type which allows the script to access the MS Graph API without requiring a user context.  I’ve already gone ahead and provisioned and authorized the script with an identity in Azure AD and granted it the appropriate access scopes.

Behind the scenes Azure AD (authorization server in OAuth-speak) is contacted and the script (client in OAuth-speak) passes a unique client id and client secret.  The client id and client secret are used to authenticate the application to Azure AD which then looks within its directory to determine what resources the application is authorized to access (scope in OAuth-speak).  An access token is then returned from Azure AD which will be used in the next step.

def obtain_accesstoken(tenantname,clientid,clientsecret,resource):
    auth_context = AuthenticationContext('https://login.microsoftonline.com/' +
        tenantname)
    token = auth_context.acquire_token_with_client_credentials(
        resource=resource,client_id=clientid,
        client_secret=clientsecret)
    return token

A properly formatted header is created and the access token is included.  The function checks to see if the q_param parameter has a value and if it does it passes it as a dictionary object to the Python Requests library which includes the key values as query strings.  The request is then made to the appropriate endpoint.  If the response code is anything but 200 an exception is raised, written to the log, and the script terminates.  Assuming a 200 is received the Python JSON library is used to parse the response.  The JSON content is searched for an attribute of @odata.nextLink which indicates the results have been paged.  The function handles it by looping until there are no longer any paged results.  It additionally combines the paged results into a single JSON array to make it easier to work with moving forward.

def makeapirequest(endpoint,token,q_param=None):
 
    headers = {'Content-Type':'application/json', \
    'Authorization':'Bearer {0}'.format(token['accessToken'])}

    log.info('Making request to %s...',endpoint)
        
    if q_param != None:
        response = requests.get(endpoint,headers=headers,params=q_param)
        print(response.url)
    else:
        response = requests.get(endpoint,headers=headers)    
    if response.status_code == 200:
        json_data = json.loads(response.text)
            
        if '@odata.nextLink' in json_data.keys():
            log.info('Paged result returned...')
            record = makeapirequest(json_data['@odata.nextLink'],token)
            entries = len(record['value'])
            count = 0
            while count < entries:
                json_data['value'].append(record['value'][count])
                count += 1
        return(json_data)
    else:
        raise Exception('Request failed with ',response.status_code,' - ',
            response.text)
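
For reference, a paged response from Microsoft Graph has roughly this shape (values trimmed); the function keeps following the @odata.nextLink URL until it's no longer present and folds each page's value array into the first result:

{
    "@odata.context": "https://graph.microsoft.com/beta/$metadata#users",
    "@odata.nextLink": "https://graph.microsoft.com/beta/users?$top=1&$select=id&$skiptoken=...",
    "value": [
        { "id": "00000000-0000-0000-0000-000000000000" }
    ]
}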

Lastly there is main.py which stitches the script together.  The first section adds the modules we’ve already covered in addition to the argparse library which is used to handle arguments added to the execution of the script.

import json
import requests
import logging
import time
import graphapi
import awsintegration
from argparse import ArgumentParser

A simple configuration for the logging module is set up, instructing it to write to msapiquery.log using a level of INFO and applying a standard format.

logging.basicConfig(filename='msapiquery.log', level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

This chunk of code creates an instance of the ArgumentParser class and configures two arguments.  The sourcefile argument is used to designate the JSON parameters file which contains all the necessary information.

The parameters file is then opened and processed.  Note that the S3 parameters are only pulled in if the --s3 switch was used.

parser = ArgumentParser()
parser.add_argument('sourcefile', type=str, help='JSON file with parameters')
parser.add_argument('--s3', help='Write results to S3 bucket',action='store_true')
args = parser.parse_args()

try:
    with open(args.sourcefile) as json_data:
        d = json.load(json_data)
        tenantname = d['parameters']['tenantname']
        resource = d['parameters']['resource']
        endpoint = d['parameters']['endpoint']
        filename = d['parameters']['filename']
        aws_region = d['parameters']['aws_region']
        q_param = d['parameters']['q_param']
        clientid_param = d['parameters']['clientid_param']
        clientsecret_param = d['parameters']['clientsecret_param']
        if args.s3:
            bucket = d['parameters']['bucket']
            prefix = d['parameters']['prefix']

Next up the get_parametersParameterStore function from the awsintegration module is executed twice.  Once to get the client id and once to get the client secret.  Note that the get_parameters method for Boto3 Systems Manager client could have been used to get both of the parameters in a single call, but I didn’t go that route.

    logging.info('Attempting to contact Parameter Store...')
    clientid = awsintegration.get_parametersParameterStore(clientid_param,aws_region)
    clientsecret = awsintegration.get_parametersParameterStore(clientsecret_param,aws_region)

In these next four lines the access token is obtained by calling the obtain_accesstoken function and the request to the MS Graph API is made using the makeapirequest function.

    logging.info('Attempting to obtain an access token...')
    token = graphapi.obtain_accesstoken(tenantname,clientid,clientsecret,resource)

    logging.info('Attempting to query %s ...',endpoint)
    data = graphapi.makeapirequest(endpoint,token,q_param)

This section creates a string representing the current day, month, and year and prepends the filename that was supplied in the parameters file.  The file is then opened using the with statement.  If you’re familiar with the using statement from C# the with statement is similar in that it ensures resources are cleaned up after being used.

Before the data is written to file, I remove the @odata.nextLink key if it’s present.  This is totally optional and just something I did to pretty up the results.  The data is then written to the file as raw text by using the Python JSON encoder/decoder.

    logging.info('Attempting to write results to a file...')
    timestr = time.strftime("%Y-%m-%d")
    filename = timestr + '-' + filename
    with open(filename,'w') as f:
        
        ## If the data was paged remove the @odata.nextLink key
        ## to clean up the data before writing it to a file

        if '@odata.nextLink' in data.keys():
            del data['@odata.nextLink']
        f.write(json.dumps(data))

Finally, if the s3 argument was passed when the script was run, the put_s3 method from the awsintegration module is run and the file is uploaded to S3.

    logging.info('Attempting to write results to %s S3 bucket...',bucket)
    if args.s3:
        awsintegration.put_s3(bucket,prefix,aws_region,filename)

Exceptions thrown anywhere in the script are captured here and written to the log file.  I played around a lot with a few different ways of handling exceptions and everything was so interdependent that if there was a failure it was best for the script to stop altogether and inform the user.  Naftali Harris has an amazing blog that walks through the many different ways of handling exceptions in Python and the various advantages and disadvantages.  It's a great read.

except Exception as e:
    logging.error('Exception thrown: %s',e)
    print('Error running script.  Review the log file for more details')

So that's what the code is.  Let's take a quick look at the parameters file below.  It's very straightforward.  Keep in mind both the bucket and prefix parameters are only required when using the --s3 option.  Here are some details on the other options:

  • The tenantname attribute is the DNS name of the Azure AD tenant being queried.
  • The resource attribute specifies the resource the access token will be used for.  If you’re going to be hitting the MS Graph API, more than likely it will be https://graph.microsoft.com
  • The endpoint attribute specifies the endpoint the request is being made to including any query strings you plan on using
  • The clientid_param and clientsecret_param attributes are the AWS Systems Manager Parameter Store parameter names that hold the client id and client secret the script was provisioned with in Azure AD
  • The q_param attribute is an array of key value pairs intended to store OData query strings
  • The aws_region attribute is the region the S3 bucket and parameter store data is stored in
  • The filename attribute is the name you want to set for the file the script will produce
{
    "parameters":{
        "tenantname": "mytenant.com",
        "resource": "https://graph.microsoft.com",
        "endpoint": "https://graph.microsoft.com/beta/auditLogs/signIns",
        "clientid_param":"myclient_id",
        "clientsecret_param":"myclient_secret",
        "q_param":{"$filter":"createdDateTime gt 2019-01-09"},
        "aws_region":"us-east-1",
        "filename":"sign_in_logs.json",
        "bucket":"mybucket",
        "prefix":"myprefix"
    }
}

Now that the script has been covered, let's see it in action.  First I'm going to demonstrate how it handles paging by querying the MS Graph API endpoint to list out the users in the directory.  I'm going to append the $select query parameter and set it to return just the user's id to make the output simpler and set the $top query parameter to one to limit the results to one user per page.  The endpoint looks like this https://graph.microsoft.com/beta/users?$top=1&$select=id.

I'll be running the script from an instance of Cloud9.  The IAM user I'm using with AWS has appropriate permissions to the S3 bucket, KMS CMK, and parameters in the parameter store.  I've set each of the parameters in the parameters file to the appropriate values for the environment I've configured.  I'll additionally be using the --s3 option.
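
Assuming the parameters file is saved as parameters.json (the name is up to you), the invocation looks something like this:

python3 main.py parameters.json --s3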

 

[Screenshot: running the script from Cloud9]

Once the script is complete it's time to look at the log file that was created.  As seen below, each step in the script is logged to aid with debugging if something were to fail.  The log also indicates the results were paged.

[Screenshot: log file contents]

The output is nicely formatted JSON that could be further transformed or fed into something like Amazon Athena for further analysis (future post maybe?).

[Screenshot: formatted JSON output]

Cool right?  My original use case was sign-in logs so let's take a glance at that.  Here I'm going to use an endpoint of https://graph.microsoft.com/beta/auditLogs/signIns with an OData filter option of createdDateTime gt 2019-01-08 which will limit the data returned to today's sign-ins.

In the logs we see the script was successfully executed and included the filter specified.

[Screenshot: log entries showing the sign-in query and filter]

The output is the raw JSON of the sign-ins over the past 24 hours.  For your entertainment purposes I’ve included one of the malicious sign-ins that was captured.  I SO can’t wait to examine this stuff in a BI tool.

[Screenshot: raw JSON of the captured sign-ins]

Well that's it folks!  It may be ugly, but it works!  This was a fun activity to undertake as a first stab at making something useful in Python.  I especially enjoyed the lack of documentation available on this integration.  It really made me dive deep and learn things I probably wouldn't have if there were a billion examples out there.

I've pushed the code to Github so feel free to muck around with it to your heart's content.

Using Python to Pull Data from MS Graph API – Part 1

Welcome to 2019 fellow geeks! I hope each of you had a wonderful holiday with friends and family.

It’s been a few months since my last post. As some of you may be aware I made a career move last September and took on a new role with a different organization. The first few months have been like drinking from multiple fire hoses at once and I’ve learned a ton. It’s been an amazing experience that I’m excited to continue in 2019.

One area I’ve been putting some focus in is learning the basics of Python. I’ve been a PowerShell guy (with a bit of C# thrown in there) for the past six years so diving into a new language was a welcome change. I picked up a few books on the language, watched a few videos, and it wasn’t clicking. At that point I decided it was time to jump into the deep end and come up with a use case to build out a script for. Thankfully I had one queued up that I had started in PowerShell.

Early last year my wife's Office 365 account was hacked. Thankfully no real damage was done aside from some spam email that was sent out. I went through the wonderful process of changing her passwords across her accounts, improving the complexity and length, getting her on-boarded with a password management service, and enabling Azure MFA (Multi-factor Authentication) on her Office 365 account and any additional services she was using that supported MFA options.  It was not fun.

Curious about what the logs would have shown, I had begun putting together a PowerShell script that was going to pull down the logs from Azure AD (Active Directory), extract the relevant data, and export it to CSV (comma-separated values) where I could play around with it in whatever analytics tool I could get my hands on. Unfortunately life happened and I never had a chance to finish the script or play with the data. This would be my use case for my first Python script.

Azure AD offers a few different types of logs which Microsoft divides into a security pillar and an activity pillar. For my use case I was interested in looking at the reports in the Activity pillar, specifically the Sign-ins report. This report is available for tenants with an Azure AD Premium P1 or P2 subscription (I added P2 subscriptions to our family accounts last year).  The sign-in logs have a retention period of 30 days and are available either through the Azure Portal or programmatically through the MS Graph API (Application Programming Interface).

My primary goals were to create as much reusable code as possible and experiment with as many APIs/SDKs (Software Development Kits) as I could.  This was accomplished by breaking the code into various reusable modules and leveraging AWS (Amazon Web Services) services for secure storage of Azure AD application credentials and cloud-based storage of the exported data.  Going this route forced me to use the MS Graph API, Microsoft’s Azure Active Directory Library for Python (or ADAL for short), and Amazon’s Boto3 Python SDK.

On the AWS side I used AWS Systems Manager Parameter Store to store the Azure AD credentials as secure strings encrypted with an AWS KMS (Key Management Service) customer-managed customer master key (CMK).  For cloud storage of the log files I used Amazon S3.

Lastly I needed a development environment and source control.  For about a day I simply used Sublime Text on my Mac and saved the file to a personal cloud storage account.  This was obviously not a great idea so I decided to finally get my GitHub repository up and running.  Additionally I moved over to using AWS’s Cloud9 for my IDE (integrated development environment).   Cloud9 has the wonderful perk of being web based and has the capability of creating temporary credentials that can do most of what my AWS IAM user can do.  This made it simple to handle permissions to the various resources I was using.

Once the instance of Cloud9 was spun up I needed to set the environment up for Python 3 and add the necessary libraries.  The AMI (Amazon Machine Image) used by the Cloud9 service to provision new instances includes both Python 2.7 and Python 3.6.  This fact matters when adding the ADAL and Boto3 modules via pip because if you simply run a pip install module_name it will be installed for Python 2.7.  Instead you’ll want to execute the command python3 -m pip install module_name which ensures that the two modules are installed in the appropriate location.
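
For this project that meant something along the lines of the following, run from the Cloud9 terminal:

python3 -m pip install adal boto3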

In my next post I’ll walk through and demonstrate the script.

Have a great week!