Azure VWAN and Private Endpoint Traffic Inspection – Findings

Today I’m taking a break from my AI series to cover an interesting topic that came up at a customer.

My customer base exists within the heavily regulated financial services industry, which means a strong security posture. Many of these customers have requirements for inspection of network traffic, including traffic between devices within their internal network space. This requirement gets interesting when we’re talking about inspection of traffic destined for a service behind a Private Endpoint. I’ve posted extensively on Private Endpoints on this blog, including how to perform traffic inspection in a traditional hub and spoke network architecture. One area I hadn’t yet delved into was how to achieve this using Azure Virtual WAN (VWAN).

VWAN is Microsoft’s attempt to iterate on the hub and spoke networking architecture and make the management and setup of the networking more turnkey. Achieving that goal has been an uphill battle for VWAN, which historically required very complex architectures to achieve the network controls regulated industries strive for. There has been solid progress over the past few months, with routing intent and support for additional third-party next-generation firewalls running in the VWAN hub, such as Palo Alto, becoming available. These improvements have opened the doors for regulated customers to explore VWAN Secure Hubs as a substitute for a traditional hub and spoke. This brings us to our topic: How do VWAN Secure Hubs work when there is a requirement to inspect traffic destined for a Private Endpoint?

My first inclination when pondering this question was that it would work the same way it does in a traditional hub and spoke. In past posts I’ve covered that pattern. You can take a look at this repository I’ve put together, which walks through the protocol flows in detail if you’re curious. The short of it is that inspection requires enabling network policies on the subnet the Private Endpoints are deployed to and SNATing at the firewall. The SNAT is required at the firewall because Private Endpoints do not obey user-defined routes in a route table. Without the SNAT you get asymmetric routing that becomes a nightmare to troubleshoot. Making it even more confusing, some services like Azure Storage will magically keep traffic symmetric, as I’ve covered in past posts. Best practice for a traditional hub and spoke is to SNAT when doing firewall inspection with Private Endpoints.
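
If you need a refresher on the network policy piece, here’s a minimal Azure PowerShell sketch of enabling Private Endpoint network policies on a spoke subnet. The virtual network, resource group, and subnet names are placeholders, not a prescription for your environment.

    # Placeholder names; enable Private Endpoint network policies on the subnet
    # hosting the Private Endpoints so route tables and NSGs apply to that traffic.
    $vnet = Get-AzVirtualNetwork -Name "vnet-spoke" -ResourceGroupName "rg-spoke"
    $subnet = $vnet.Subnets | Where-Object { $_.Name -eq "snet-svc" }
    $subnet.PrivateEndpointNetworkPolicies = "Enabled"
    $vnet | Set-AzVirtualNetwork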

Hub and Spoke Firewall Inspection

My first stop was to read through the Microsoft documentation. I came across this article first, which walks through traffic inspection with Azure Firewall in a VWAN Secure Hub. As expected, the article states that SNAT is required (yes, I’m aware of the exception for Azure Firewall Application Rules, but that is the exception and not the rule, and very few in my customer space use Azure Firewall). Ok great, this aligns with my understanding. But wait, this article about Secure Hub with routing intent does not mention SNAT at all. So is SNAT required or not?

When public documentation isn’t consistent (which of course NEVER happens) it’s time to lab it out and see what we see. I threw together a single-region VWAN Secure Hub with Azure Firewall, enabled routing intent for both Internet and Private traffic, and connected my home lab over an S2S VPN. I then created Private Endpoints for a Key Vault and an Azure SQL resource. Per the latter article mentioned above, I enabled Private Endpoint Network Policies for the snet-svc subnet in the spoke virtual network. Finally, I created a single Network Rule allowing traffic on 443 and 1433 from my lab to the spoke virtual network. This ensured I didn’t run into the transparent proxy aspect of Application Rules throwing off my findings.
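
For reference, a Network Rule along those lines can be sketched out with Azure PowerShell roughly like the below. This assumes the Secure Hub’s firewall is managed through a Firewall Policy; the policy name, resource group, source range, and destination range are placeholders rather than the exact values from my lab.

    # Placeholder values approximating the lab's single Network Rule: allow 443
    # and 1433 from the home lab range to the spoke virtual network.
    $rule = New-AzFirewallPolicyNetworkRule -Name "allow-lab-to-spoke" `
        -Protocol TCP `
        -SourceAddress "192.168.100.0/24" `
        -DestinationAddress "10.1.0.0/16" `
        -DestinationPort 443, 1433

    $collection = New-AzFirewallPolicyFilterRuleCollection -Name "rc-lab" `
        -Priority 200 -Rule $rule -ActionType Allow

    New-AzFirewallPolicyRuleCollectionGroup -Name "rcg-lab" -Priority 200 `
        -RuleCollection $collection -FirewallPolicyName "afwp-hub" `
        -ResourceGroupName "rg-hub"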

Lab used

If you were doing this in the “real world” you’d set up a packet capture on the firewall and validate you see both sides of the conversation. If you’ve used Azure Firewall, you’re well aware it does not yet support packet captures, making this impossible. Thankfully, Microsoft has recently introduced Azure Firewall Structured Logs, which include a log called the Flow Trace Log. This log will show you the gooey details of the TCP conversation and helps fill the gap for troubleshooting asymmetric traffic while Microsoft works on offering a packet capture capability (a man can dream, can’t he?).

While the rest of the Azure Firewall Structured Logs need nothing special to be configured, the Flow Trace Logs do (likely because, as of 8/20/2023, they’re still in public preview). You need to follow the instructions located within this document. Make sure you give it a solid 30 minutes after completing the steps to enable the feature before you enable the log through the diagnostic settings of the Azure Firewall. Also, do not leave this running. Beyond the performance hit that can occur because of how chatty this log is, you could also be in for a world of hurt with a big Log Analytics Workspace bill if you leave it running.
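
For convenience, the registration steps boil down to something like the PowerShell below. The feature flag name is the one the preview instructions listed at the time of writing, so double-check it against the linked document before running this, since preview flags can and do change.

    # Register the preview feature that enables Flow Trace logging (flag name per
    # the preview doc at the time of writing; confirm against the doc first).
    Register-AzProviderFeature -FeatureName "AFWEnableTcpConnectionLogging" -ProviderNamespace "Microsoft.Network"
    Register-AzResourceProvider -ProviderNamespace "Microsoft.Network"

    # Check the registration state before flipping on the log in diagnostic settings.
    Get-AzProviderFeature -FeatureName "AFWEnableTcpConnectionLogging" -ProviderNamespace "Microsoft.Network"

    # When you're done testing, unregister so you don't keep paying for the chatter.
    Unregister-AzProviderFeature -FeatureName "AFWEnableTcpConnectionLogging" -ProviderNamespace "Microsoft.Network"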

Once I had my lab deployed and the Flow Trace Logs working, I went ahead with testing using the Test-NetConnection PowerShell cmdlet from a Windows machine in my home lab. This is a wonderful cmdlet if you need something built into Windows to do a TCP ping.
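
The tests themselves were nothing more than TCP pings like the ones below (the resource names are placeholders, not my actual lab resources).

    # TCP ping the Azure SQL Private Endpoint over the S2S VPN.
    Test-NetConnection -ComputerName "sqlsample.database.windows.net" -Port 1433

    # Repeat against the Key Vault Private Endpoint.
    Test-NetConnection -ComputerName "kvsample.vault.azure.net" -Port 443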

Testing Azure SQL via Private Endpoint

In the above image you can see that the TCP Ping to port 1433 of an Azure SQL database behind a Private Endpoint was successful. Review of the Azure Firewall Network Logs showed my Network Rule firing, which tells me the TCP SYN at least passed through, providing proof that Private Endpoint Network Policies were successfully affecting the traffic to the Private Endpoint.

What about return traffic? For that I went to the Flow Trace Logs. Oddly enough, the firewall was also receiving the SYN-ACK back from the Private Endpoint, all without SNAT being configured. I repeated the test for an Azure Key Vault behind a Private Endpoint and observed the same behavior (and I’ve confirmed in the past that Azure Key Vault needs SNAT for return traffic in a standard hub and spoke).
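
If you want to poke at the same data, a quick way to pull the Flow Trace entries out of the Log Analytics Workspace is something like the query below. This assumes you’re sending the structured logs to resource-specific tables; the workspace ID is a placeholder, and the return-leg flags like the SYN-ACK called out above should show up in the entries.

    # Pull recent Flow Trace entries; look for the SYN-ACK of the test connection
    # to confirm the return traffic actually hit the firewall.
    $query = "AZFWFlowTrace | where TimeGenerated > ago(30m) | order by TimeGenerated desc | take 50"
    Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query |
        Select-Object -ExpandProperty Results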

Azure Firewall Flow Trace Log

So is SNAT required or not? You’re likely expecting me to answer yes or no. Well, today I’m going to surprise you with “I don’t know”. While testing with these two services in this architecture seemed to indicate it was not, I’ve circulated these findings within Microsoft and the recommendation to SNAT to ensure flow symmetry remains. As I’ve documented in prior posts, not all Azure services behave the same way with traffic symmetry and Private Endpoints (Azure Storage, for example), and for consistency’s sake you should be SNATing. Do not rely on your testing of a few services in a very specific architecture as gospel. You should be following the practices outlined in the documentation.

I feel like I’m ending this blog Sopranos-style with a fade to black, but sometimes even tech has mystery. In this post you got a taste of how Flow Trace Logs can help troubleshoot traffic symmetry issues when using Azure Firewall, and you learned that not all things in the cloud work the way you expect them to. Sometimes that is intentional and sometimes it’s not. When you run into this type of situation where the behavior you’re observing doesn’t match the documentation, it’s always best to do what is documented (in this case, SNAT). Maybe it’s something you’re doing wrong (this is me we’re talking about) or maybe you don’t have all the data (I tested 2 of 100+ services). If you go with what you experience, you risk that undocumented behavior being changed or corrected and then being in a heap of trouble in the middle of the night (oh, the examples I could give of this across my time at cloud providers over a glass of Titos).

Well folks, that wraps things up. TL;DR: SNAT until the documentation says otherwise, regardless of what you experience.

Thanks!

Azure Firewall and TLS Inspection

Welcome back fellow geeks!

I recently had a customer that was interested in staying as purely cloud native as possible, which included any centralized firewall that would be in use. Microsoft has offered up Azure Firewall for a while now and it is a great solution if you’re looking for a very basic fully-managed firewall. Here are some of the neater features of the solution:

Unfortunately this basic feature set rarely satisfied the more regulated customer base I tend to work with. Many of these customers went with full-featured security appliances such as those offered by Palo Alto, Fortigate, and the like. One of the largest gaps in Azure Firewall when compared to the 3rd party vendors was the lack of DPI (deep packet inspection) and IDS (intrusion detection system) / IPS (intrusion prevention system) capabilities. Microsoft heard the feedback from its customers and back in February of 2021 made the Azure Firewall Premium SKU available in public preview with a collection of features such as TLS (transport layer security) Inspection, IDPS (intrusion detection and prevention system), URL filtering, and improved web category filtering. The addition of these capabilities has now made Azure Firewall a much more appealing cloud-native solution.

I had yet to spend any significant time experimenting with the Premium SKU (I make it a habit to not invest a ton of time into preview features). However, this customer gave me the opportunity to dive into the TLS inspection and IDPS capabilities. These capabilities will be the subject of this post and I’ll spend some time describing the architectural pattern I built out and experimented with.

This particular customer had a requirement to perform DPI and IDPS on incoming web-based traffic from the Internet. I asked the customer to provide the control set they needed to satisfy so that we could map those controls to the technical controls available in Azure-native services. The hope was that, since this was web-based traffic only and used across multiple regions, we might be able to satisfy all the controls via a WAF (web application firewall) such as Azure Front Door and supplement it with layer 7 load balancing from Azure Application Gateway within a given region. The rest of the traffic, non-web, would be delivered to a firewall running in parallel. This pattern is becoming more commonplace as WAFs grow in functionality and feature set.

WAF-Only Pattern

Unfortunately the above pattern was not an option for the customer because they wanted to maintain a centralized funnel for all traffic via a firewall. This is not an uncommon ask. This meant I had to get the traffic coming in from the WAF to funnel through Azure Firewall. For this pattern to work end to end I would need layer 7 load balancing, so that meant I needed an Application Gateway as well. The question was: do I place the Azure Firewall before or after the Application Gateway? For the answer to this question I went to the Microsoft documentation. Typically the public documentation leaves a lot to be desired when it comes to identifying the benefits and considerations of a particular pattern (oh how I long for the days of Technet-quality documentation), however the documentation around these patterns is stellar.

After quickly reading the benefits and considerations about the two patterns, the decision looked like it was made for me. The pattern where the firewall is placed after the application gateway aligned with my customer’s use case. It specifically covered TLS inspection and IDPS through Azure Firewall Premium. Curious as to why this TLS inspection at Azure Firewall wasn’t mentioned in the other use case where Azure Firewall is placed in front of Application Gateway, I went down a rabbit hole.

My first stop was the public documentation for the Azure Firewall Premium SKU. Since the feature is still in public preview there are a fair number of limitations, but none of the limitations that weren’t planned to be fixed by GA (general availability) looked like showstoppers. However, in the section for TLS Inspection, I noticed this blurb: “Azure Firewall Premium terminates outbound and east-west TLS connections.” I reached out to some internal communities within Microsoft and confirmed that “at this time” Azure Firewall isn’t capable of performing TLS Inspection on traffic coming in on the public interface. This limitation meant that I had to get the traffic received from the WAF over to the internal interface, and the best way to do this was to take it in from the WAF through the Application Gateway. This would be the pattern I’d experiment with.

To keep things simple, I focused on a single region and didn’t include a WAF. Load balancing across regions could be done with the customer’s 3rd party WAF where the WAF would resolve to the appropriate regional Application Gateway v2 instance public IP depending on the load balancing pattern (such as geo-location) the customer was using. Once the traffic is received from the WAF, the Application Gateway terminates the TLS session so that it can inspect the URL and host headers and direct the traffic to the appropriate backend, which in this case was the single web server running IIS (Internet Information Services) in a peered spoke virtual network.

Lab environment

To ensure the traffic leaving the Application Gateway funnels through the Azure Firewall instance, I attached a route table to the Application Gateway subnet. This route table was configured with BGP propagation disabled (to ensure a default route couldn’t be accidentally propagated in) and with a single UDR (user-defined route) containing the spoke virtual network’s CIDR (Classless Inter-Domain Routing) block with a next hop of the Azure Firewall private IP address. Since UDRs take precedence over system routes, this route would invalidate the system route for the peering. On the web server subnet in the spoke I had a similar route table with a single UDR containing the transit virtual network’s CIDR block with a next hop of the Azure Firewall private IP address. This ensured that any communication between the two would flow through the Azure Firewall.
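
For those who want a concrete starting point, the Application Gateway side of that routing can be sketched in Azure PowerShell roughly like this. The resource names, region, spoke CIDR, and firewall private IP are placeholders, and you’d still need to associate the route table with the Application Gateway subnet.

    # Route table with BGP propagation disabled and a single UDR sending
    # spoke-bound traffic to the Azure Firewall private IP (placeholder values).
    $rt = New-AzRouteTable -Name "rt-agw" -ResourceGroupName "rg-transit" `
        -Location "eastus2" -DisableBgpRoutePropagation

    Add-AzRouteConfig -RouteTable $rt -Name "udr-spoke-via-fw" `
        -AddressPrefix "10.1.0.0/16" `
        -NextHopType VirtualAppliance -NextHopIpAddress "10.0.2.4" | Set-AzRouteTable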

Spoke web server subnet’s route table

To ensure DNS would resolve, I created an A record in public DNS for sample1.geekintheweeds.com pointing to the public IP address of the Application Gateway. Within Azure, I built a Windows server, installed the DNS service, and created a forward lookup zone for geekintheweeds.com with an A record named sample1 pointing to the web server. Within each virtual network I configured the Windows server’s IP address in the DNS server settings (note that if you do this after you provision the Application Gateway, you’ll need to stop and start it). I also set the Azure Firewall to use the server as its DNS server.
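
The internal half of that setup can be sketched with a couple of PowerShell snippets, one run on the Windows DNS server and one against the virtual network. The zone file name, IP addresses, and resource names are placeholders.

    # On the Windows DNS server: create the forward lookup zone and the A record
    # for the web server's private IP.
    Add-DnsServerPrimaryZone -Name "geekintheweeds.com" -ZoneFile "geekintheweeds.com.dns"
    Add-DnsServerResourceRecordA -ZoneName "geekintheweeds.com" -Name "sample1" -IPv4Address "10.1.1.4"

    # Point the virtual network at the DNS server; repeat for each virtual network
    # (and remember the Application Gateway stop/start if it's already provisioned).
    $vnet = Get-AzVirtualNetwork -Name "vnet-transit" -ResourceGroupName "rg-transit"
    $vnet.DhcpOptions.DnsServers = @("10.0.3.4")
    $vnet | Set-AzVirtualNetwork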

DNS Flows

Now that the necessary plumbing was in place, I needed to put the appropriate certificates in place for the Application Gateway, Azure Firewall, and the web server. This is where this setup can get ugly. Since this was a lab, I generated all of my certificates from a private CA (certificate authority) I have running in my home lab. Since these CAs are only used for testing, the CA issues all certificates without a CDP (CRL distribution point) to keep things simple by avoiding the network flows needed for revocation checks. In a production environment you’d want to issue the certificate for the Application Gateway from a trusted public CA so you don’t have to worry about exposing the endpoints for CDP/OCSP (Online Certificate Status Protocol) flows. For Azure Firewall and the web server you’d be fine using certificates issued by a private CA as long as you ensured the appropriate validation endpoints were available.

The Application Gateway and web server will use your standard web server certificate. The Azure Firewall is a different story. To support TLS interception you’ll need to provide it with an intermediate CA certificate. This type of certificate allows Azure Firewall to generate certificates on the fly to impersonate the services it’s intercepting traffic for. This link explains the finer details of the requirements. Also note that the certificate needs to be imported into an instance of Key Vault, which Azure Firewall accesses using a user-assigned managed identity.
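
As a rough example of that last step, importing the intermediate CA certificate (as a PFX containing the private key) into Key Vault looks something like the below. The vault name, certificate name, and file path are placeholders, and the firewall policy would then reference this certificate via the user-assigned managed identity.

    # Import the intermediate CA certificate into Key Vault for the firewall's
    # TLS inspection configuration to reference (placeholder names and path).
    $pfxPassword = Read-Host -AsSecureString -Prompt "PFX password"
    Import-AzKeyVaultCertificate -VaultName "kv-azfw-tls" -Name "interCA" `
        -FilePath "C:\certs\interCA.pfx" -Password $pfxPassword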

Once the Application Gateway was configured in a similar manner as outlined here, I was good to test. Accessing the server from a machine running in my home lab successfully displayed the standard IIS website as hoped! When I viewed the Azure Firewall logs the full URL being accessed by the user was visible proving TLS inspection was working as expected. Success!

Log entry from Azure Firewall showing full URL

The pattern works, but there are a number of considerations.

The biggest consideration is that Azure Firewall Premium is still in public preview. Regardless of what you may hear from those within Microsoft or outside of Microsoft, DO NOT USE PUBLIC PREVIEW FEATURES IN PRODUCTION. I’d go as far as cautioning against even using them in planned designs. By the time these new products or features make it to GA, they can and often do change, sometimes for the better and sometimes for the worse. If you choose to use these products or features in an upcoming design, make sure to have a plan B if the product doesn’t hit GA or hits GA without the features you need. Remember that Microsoft’s targeted release dates are often moving targets that accelerate or decelerate depending on the feedback from public preview. Unless you have a contractual agreement with a vendor to deliver by a specific date and there are real penalties to the vendor for not doing so, you should have a fully vetted and tested plan B ready to go.

Outside of my ranting about usage of public preview products and features, here are some other considerations with this pattern:

  • Challenges with observability
  • Operational overhead of certificate management
  • Possible latency issues depending on latency requirements and traffic patterns

In this design the Application Gateway will be SNATing the traffic it receives from Internet users. To understand the session end to end and correlate the logs from the WAF to the Application Gateway, through the firewall, and to the web server, you’ll need to ensure you’re capturing the X-Forwarded-For header and using it to identify the user’s original source IP. This will definitely add complexity to the observability of the environment. Tack on the many mediation points, and identifying where traffic is getting rejected (WAF, Application Gateway, firewall, NSG, local machine firewall) will require a strong logging and correlation system.

This pattern is going to require at least 3 separate certificates, which will likely be a mix of certificates issued by public CAs and private CAs. Certificate lifecycle management is a significantly challenging operational task and is often the cause of service outages. If you opt to use a pattern such as this, you’ll need to ensure your operational monitoring and alerting processes around certificate lifecycle management are solid. In addition, you will need to manage the network flows for revocation checks. In a past life, these were the flows I observed that would often bite organizations.

Lastly, this pattern involves a lot of hops where the traffic is being decrypted and re-encrypted. This requires compute time, which could add latency. That added latency could be impactful depending on the latency requirements of the application and the type and volume of data flowing between the user and the web server.

I have to admit I enjoyed labbing this one out. These days I usually spend the majority of my time in governance conversations focusing on people and process. Getting back into the technology and spending some time playing with the Azure Firewall Premium SKU and Application Gateway was a great learning experience. It will be interesting to see over time how well it competes against the behemoths of the industry such as Palo Alto.

Over the next week I’ll be making some small tweaks to this design to see whether I can stick the Key Vault behind a Private Endpoint (the documentation is unclear as to whether or not this is supported) and messing with the logs provided by both the Azure Firewall and Application Gateway to see how challenging correlation of sessions is.

I hope you have a great long weekend and see you next post!