2019 is more than halfway over and it feels like it has happened in a flash. It’s been an awesome year with tons of change and even more learning. I started the year neck deep in AWS and began transitioning into Azure back in April when I joined Microsoft. Having the opportunity to explore both clouds and learn the capabilities of each offering has been an amazing experience that I’m incredibly thankful for. As I’ve tried to do for the past 8 years, I’m going to share some of those learnings with you. Today we’re going to explore one of the capabilities that differentiates Azure from its competition.
One of the key takeaways from my experiences with AWS and Microsoft is that enterprises have become multicloud. Workloads are quickly being spread across public and private clouds. While the business benefits greatly from a multicloud approach, where each workload can go to the environment whose cost, risks, and timetables best suit it, the approach presents a major challenge for the technical orchestration behind the scenes. With different APIs (application programming interfaces), varying levels of compliance, great and not-so-great capabilities around monitoring and alerting, and a major industry gap in multicloud skill sets, it can become quite a headache to execute this approach successfully.
One area where Microsoft Azure differentiates itself is its ability to ease the challenge of monitoring and alerting in a multicloud environment. Azure Monitor is one of the key products behind this capability. In this post I’m going to demonstrate Azure Monitor’s capabilities in this realm by walking you through a pattern of delivering, visualizing, and analyzing log data collected from AWS. The pattern I’ll be demonstrating is reusable for most any cloud (and potentially on-premises) offering. Now sit back, put your geek hat on, and let’s dive in.
First I want to briefly talk about what Azure Monitor is. Azure Monitor is a solution that brings together a collection of tools to collect and analyze the vast amount of telemetry available today. This telemetry could be metrics regarding a virtual machine’s performance or audit logs for Azure Active Directory. The product team has put together the excellent diagram below, which explains the architecture of the solution.
As you can see from the inputs on the left, Azure Monitor is capable of collecting and analyzing data from a variety of sources. You’ll find plenty of documentation the product team has made publicly available on the five gray items, so I’m going to instead focus on custom sources.
For those of you who have been playing in the AWS pool, you can think of Azure Monitor as something similar to (but much more robust than) CloudWatch Metrics and CloudWatch Logs. I know, I know, you’re thinking I’ve drunk the Microsoft Kool-Aid.
While I do love to reminisce about cold glasses of Kool-Aid on hot summers in the 1980s, I’ll opt instead to demonstrate it in action and let you decide for yourself. To do this I’ll be leveraging the new API Microsoft introduced. The Azure Monitor HTTP Data Collector API was introduced a few months back and provides the capability of delivering log data to Azure, where it can be analyzed by Azure Monitor.
With Azure Monitor, logs are stored in an Azure resource called a Log Analytics Workspace. For you AWS folk, you can think of a Log Analytics Workspace as something similar to a CloudWatch Log Group: a logical boundary in which the data shares retention and authorization settings. Logs are sent to the API in JSON format and are placed in the Log Analytics Workspace you specify. A high-level diagram of the flow can be seen below.
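To make the flow concrete, here’s a minimal sketch of a delivery call to the HTTP Data Collector API in Python. The endpoint, header names, and SharedKey signing scheme follow the API’s documented contract; the workspace ID, key, and log type values are placeholders, and a real implementation would add error handling and respect the API’s payload limits.

```python
import base64
import datetime
import hashlib
import hmac
import json
import urllib.request


def build_signature(workspace_id, shared_key, date, content_length,
                    method="POST", content_type="application/json",
                    resource="/api/logs"):
    """Build the SharedKey Authorization header value for the Data Collector API."""
    string_to_hash = (f"{method}\n{content_length}\n{content_type}\n"
                      f"x-ms-date:{date}\n{resource}")
    # The workspace key is base64-encoded; decode it before signing
    decoded_key = base64.b64decode(shared_key)
    signature = base64.b64encode(
        hmac.new(decoded_key, string_to_hash.encode("utf-8"),
                 digestmod=hashlib.sha256).digest()
    ).decode("utf-8")
    return f"SharedKey {workspace_id}:{signature}"


def post_log_data(workspace_id, shared_key, log_type, records):
    """POST a list of dicts as JSON log records to a Log Analytics Workspace."""
    body = json.dumps(records).encode("utf-8")
    rfc1123_date = datetime.datetime.utcnow().strftime("%a, %d %b %Y %H:%M:%S GMT")
    uri = (f"https://{workspace_id}.ods.opinsights.azure.com"
           f"/api/logs?api-version=2016-04-01")
    request = urllib.request.Request(uri, data=body, method="POST", headers={
        "Content-Type": "application/json",
        "Authorization": build_signature(workspace_id, shared_key,
                                         rfc1123_date, len(body)),
        "Log-Type": log_type,   # becomes the custom log table name in the workspace
        "x-ms-date": rfc1123_date,
    })
    with urllib.request.urlopen(request) as response:
        return response.status  # 200 means the payload was accepted
```

Records delivered this way land in a custom log table named after the Log-Type header (with a `_CL` suffix), which you can then query and pin to dashboards like any other Azure Monitor data.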
So now that you have a high level understanding of what Azure Monitor is, what it can do, and how the new API works, let’s talk about the demonstration.
If you’ve used AWS you’re very familiar with the capabilities of CloudWatch dashboards and the basic query language available to analyze CloudWatch Logs. To perform more complex queries and create deeper visualizations, third-party solutions such as Elasticsearch and Kibana are often used. While these solutions work, they can be complex to implement and can create more operational overhead.
When a peer told me about the new API a few weeks back, I was excited to try it out. I had just started to use Azure Monitor to put together some dashboards for my personal Office 365 and Azure subscriptions and was loving the power and simplicity of the analytics component of the solution. The new API opened up some neat opportunities to pipe logging data from AWS into Azure and create a single dashboard I could reference for both clouds. This became my use case and my demonstration of the pattern of delivering logs from a third party to Azure Monitor with some simple Python code.
The logs I chose to deliver to the API contain information about the usage of AWS access key IDs and secret keys. I had previously put together some code to pull this data and write it to an S3 bucket.
Let’s take a look at the design of the solution. I had a few goals I wanted to hit if possible. My first goal was to keep the code simple. That meant limiting the use of third-party modules and avoiding overcomplicating the implementation.
My second goal was to limit the use of static credentials. If I ran the code in Azure, I’d need to set up an AWS IAM user and provision an access key ID and secret key. While I’m aware of the workaround of using SAML authentication, I’m not a fan: in my personal opinion it uses SAML in a way that hammers a square peg into a round hole. Sure you can do it, but you really shouldn’t unless you’re out of options. Additionally, the solution requires some fairly sensitive permissions in AWS, such as iam:ListAccessKeys, so the risk of the credentials being compromised could be significant. Given the risks and constraints of the authentication methods to the AWS API, I opted to run my code as a Lambda and follow the AWS best practice of assigning the Lambda an IAM role.
On the Azure side, the Azure Monitor API for log delivery requires authentication using the Workspace ID and Workspace key. Ideally these would be encrypted and stored in AWS Secrets Manager or as a SecureString parameter in Parameter Store, but I decided to go the easy route: store them as environment variables for the Lambda and encrypt them with AWS KMS. This cut back on the code and made the CloudFormation templates easier to put together.
With those decisions made, the resulting design is pictured above.
I’m going to end the post here and save the dive into implementation and code for the next post. In the meantime, take a read through the Azure Monitor documentation and familiarize yourself with the basics. I’ve also put the whole solution up on GitHub if you’d like to follow along for the next post.
See you next post!