Azure Firewall and TLS Inspection

Welcome back fellow geeks!

I recently had a customer that was interested in staying as purely cloud native as possible, which included any centralized firewall that would be in use. Microsoft has offered up Azure Firewall for a while now, and it is a great solution if you’re looking for a very basic fully-managed firewall with neat features like built-in high availability, autoscaling, and network and application rule filtering.

Unfortunately this basic feature set rarely satisfied the more regulated customer base I tend to work with. Many of these customers went with the full-featured security appliances such as those offered by Palo Alto, Fortigate, and the like. One of the largest gaps in Azure Firewall when compared to the 3rd party vendors was the lack of DPI (deep packet inspection) and IDS (intrusion detection system) / IPS (intrusion prevention system) capabilities. Microsoft heard the feedback from its customers, and back in February of 2021 made the Azure Firewall Premium SKU available in public preview with a collection of features such as TLS (transport layer security) inspection, IDPS (intrusion detection and prevention system), URL filtering, and improved web category filtering. The addition of these capabilities has made Azure Firewall a much more appealing cloud-native solution.

I had yet to spend any significant time experimenting with the Premium SKU (I make it a habit to not invest a ton of time into preview features). However, this customer gave me the opportunity to dive into the TLS inspection and IDPS capabilities. These capabilities will be the subject of this post and I’ll spend some time describing the architectural pattern I built out and experimented with.

This particular customer had a requirement to perform DPI and IDPS on incoming web-based traffic from the Internet. I asked the customer to provide the control set they needed to satisfy so that we could map those controls to the technical controls available in Azure-native services. The hope was, since this was web-based traffic only and used across multiple regions, we might be able to satisfy all the controls via a WAF (web application firewall) such as Azure Front Door and supplement with layer 7 load balancing from Azure Application Gateway within a given region. The rest of the traffic (non-web) would be delivered to a firewall running in parallel. This pattern is becoming more commonplace as WAFs grow in functionality and feature set.

WAF-Only Pattern

Unfortunately the above pattern was not an option for the customer because they wanted to maintain a centralized funnel for all traffic via a firewall. This is not an uncommon ask. This meant I had to get the traffic coming in from the WAF to funnel through Azure Firewall. For this pattern to work end to end I would need layer 7 load balancing, which meant I needed an Application Gateway as well. The question was: do I place the Azure Firewall before or after the Application Gateway? For the answer to this question I went to the Microsoft documentation. Typically the public documentation leaves a lot to be desired when it comes to identifying the benefits and considerations of a particular pattern (oh how I long for the days of TechNet-quality documentation), however the documentation around these patterns is stellar.

After quickly reading the benefits and considerations about the two patterns, the decision looked like it was made for me. The pattern where the firewall is placed after the application gateway aligned with my customer’s use case. It specifically covered TLS inspection and IDPS through Azure Firewall Premium. Curious as to why this TLS inspection at Azure Firewall wasn’t mentioned in the other use case where Azure Firewall is placed in front of Application Gateway, I went down a rabbit hole.

My first stop was the public documentation for the Azure Firewall Premium SKU. Since the feature is still in public preview there are a fair number of limitations, but none of the ones that weren’t planned to be fixed by GA (general availability) looked like showstoppers. However, in the section for TLS inspection, I noticed this blurb: “Azure Firewall Premium terminates outbound and east-west TLS connections.” I reached out to some internal communities within Microsoft and confirmed that “at this time” Azure Firewall isn’t capable of performing TLS inspection on traffic coming in the public interface. This limitation meant that I had to get the traffic received from the WAF over to the internal interface, and the best way to do this was to intake it from the WAF through the Application Gateway. This would be the pattern I’d experiment with.

To keep things simple, I focused on a single region and didn’t include a WAF. Load balancing across regions could be done with the customer’s 3rd party WAF where the WAF would resolve to the appropriate regional Application Gateway v2 instance public IP depending on the load balancing pattern (such as geo-location) the customer was using. Once the traffic is received from the WAF, the Application Gateway terminates the TLS session so that it can inspect the URL and host headers and direct the traffic to the appropriate backend, which in this case was the single web server running IIS (Internet Information Services) in a peered spoke virtual network.

Lab environment

To ensure the traffic leaving the Application Gateway funnels through the Azure Firewall instance, I attached a route table to the Application Gateway. This route table was configured with BGP propagation disabled (to ensure a default route couldn’t be accidentally propagated in) and with a single UDR (user-defined route) which contained the spoke’s virtual network CIDR (Classless Inter-Domain Routing) block with a next hop of the Azure Firewall private IP address. Since UDRs take precedence over system routes, this route would invalidate the system route for the peering. On the web server subnet in the spoke I had a similar route table with a single UDR which contained the transit virtual network CIDR block with a next hop of the Azure Firewall private IP address. This ensured that any communication between the two would flow through the Azure Firewall.
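If you’re scripting the build-out, this route table setup is only a few lines with the Az PowerShell module. Below is a minimal sketch of the Application Gateway side; the resource names, region, and IP addresses are placeholders standing in for my lab values:

# Route table with BGP propagation disabled so a default route can't sneak in
$rt = New-AzRouteTable -Name "rt-appgw" -ResourceGroupName "rg-transit" `
    -Location "eastus2" -DisableBgpRoutePropagation

# Single UDR pointing the spoke CIDR block at the Azure Firewall private IP
$rt | Add-AzRouteConfig -Name "udr-to-spoke" -AddressPrefix "10.1.0.0/16" `
    -NextHopType VirtualAppliance -NextHopIpAddress "10.0.1.4" |
    Set-AzRouteTable

The spoke web server subnet’s route table is the mirror image: a single UDR containing the transit virtual network CIDR block with the same next hop.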

Spoke web server subnet’s route table

To ensure DNS would resolve, I created an A record in public DNS for sample1.geekintheweeds.com pointing to the public IP address of the Application Gateway. Within Azure, I built a Windows server, installed the DNS service, and created a forward lookup zone for geekintheweeds.com with an A record named sample1 pointing to the web server. Within each virtual network I configured the Windows Server IP address in the DNS Server settings (note that if you do this after you provision Application Gateway, you’ll need to stop and start it). Within the Azure Firewall, I set it to use the server as its DNS server.

DNS Flows
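For reference, setting the custom DNS server on a virtual network is quick with the Az module, and the sketch below also includes the Application Gateway stop/start needed to pick up the change. The names and IP are stand-ins for my lab values:

# Point the VNet at the Windows DNS server
$vnet = Get-AzVirtualNetwork -Name "vnet-transit" -ResourceGroupName "rg-transit"
$vnet.DhcpOptions.DnsServers = @("10.0.3.4")
$vnet | Set-AzVirtualNetwork

# Application Gateway only picks up VNet DNS changes after a stop/start
$agw = Get-AzApplicationGateway -Name "agw-lab" -ResourceGroupName "rg-transit"
Stop-AzApplicationGateway -ApplicationGateway $agw
Start-AzApplicationGateway -ApplicationGateway $agw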

Now that the necessary plumbing was in place I needed to put in the appropriate certificates for the Application Gateway, Azure Firewall, and the web server. This is where this setup can get ugly. Since this was a lab, I generated all of my certificates from a private CA (certificate authority) I have running in my home lab. Since these CAs are only used for testing, they issue all certificates without a CDP (CRL distribution point) to keep things simple by avoiding the network flows required for revocation check lookups. In a production environment you’d want to issue the certificates for the Application Gateway from a trusted public CA so you don’t have to worry about exposing the CDP/OCSP (Online Certificate Status Protocol) endpoints for these flows. For Azure Firewall and the web server you’d be fine using certificates issued by a private CA as long as you ensured the appropriate validation endpoints were available.

The Application Gateway and web server will use your standard web server certificate. The Azure Firewall is a different story. To support TLS Interception you’ll need to provide it with an intermediary certificate. This type of certificate allows Azure Firewall to generate certificates on the fly to impersonate the services it’s intercepting traffic for. This link explains the finer details of the requirements. Also note that the certificate needs to be imported to an instance of Key Vault which Azure Firewall accesses using a user-assigned managed identity.
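If you want to script that part, it looks something like the below with the Az module. Consider it a sketch with made-up names; it assumes the intermediary certificate and its private key have already been exported to a PFX file:

# Import the intermediary CA certificate into Key Vault
$pfxPwd = Read-Host -AsSecureString -Prompt "PFX password"
Import-AzKeyVaultCertificate -VaultName "kv-gitw-lab" -Name "fw-interca" `
    -FilePath "C:\certs\fw-interca.pfx" -Password $pfxPwd

# Grant the user-assigned managed identity used by Azure Firewall read access
$id = Get-AzUserAssignedIdentity -ResourceGroupName "rg-transit" -Name "id-azfw"
Set-AzKeyVaultAccessPolicy -VaultName "kv-gitw-lab" -ObjectId $id.PrincipalId `
    -PermissionsToSecrets get -PermissionsToCertificates get,list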

Once the Application Gateway was configured in a similar manner as outlined here, I was good to test. Accessing the server from a machine running in my home lab successfully displayed the standard IIS website as hoped! When I viewed the Azure Firewall logs the full URL being accessed by the user was visible proving TLS inspection was working as expected. Success!

Log entry from Azure Firewall showing full URL

The pattern works, but there are a number of considerations.

The biggest consideration being Azure Firewall Premium is still in public preview. Regardless of what you may hear from those within Microsoft or outside of Microsoft, DO NOT USE PUBLIC PREVIEW FEATURES IN PRODUCTION. I’d go as far as cautioning against even using them in planned designs. By the time these new products or features make it to GA, they can and often do change, sometimes for the better and sometimes for the worse. If you choose to use these products or features in an upcoming design, make sure to have a plan B if the product doesn’t hit GA or hits GA without the features you need. Remember that Microsoft’s targeted release dates are often moving targets that accelerate or decelerate depending on the feedback from public preview. Unless you have a contractual agreement with a vendor to deliver upon a specific date and there are real penalties to the vendor for not doing so, you should have a fully vetted and tested plan B ready to go.

Outside of my ranting about usage of public preview products and features, here are some other considerations with this pattern:

  • Challenges with observability
  • Operational overhead of certificate management
  • Possible latency impact depending on the application’s latency requirements and traffic patterns

In this design the Application Gateway will be SNATing the traffic it receives from the Internet users. To understand the session end to end and correlate the logs from the WAF to the Application Gateway, through the Firewall, to the web server, you’ll need to ensure you’re capturing the x-forwarded-for header and using it to identify the user’s original source IP. This will definitely add complexity to the observability of the environment. Tack on the many mediation points, and identifying where traffic is getting rejected (WAF, App Gateway, firewall, NSG, local machine firewall) will require a strong logging and correlation system.
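To give a rough sense of the correlation work, below is the kind of starter query I’d run against the Log Analytics workspace with the Az module. Fair warning: the field names are my assumptions about the AzureDiagnostics schema for the Application Gateway access log, so verify them against what your workspace actually emits, and swap in your own workspace GUID:

$query = @"
AzureDiagnostics
| where Category == "ApplicationGatewayAccessLog"
| project TimeGenerated, clientIP_s, host_s, requestUri_s, httpStatus_d
"@
Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query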

This pattern is going to require at least 3 separate certificates, which will likely be a mix of certificates issued by both public CAs and private CAs. Certificate lifecycle management is a significantly challenging operational task and is often the cause of service outages. If you opt to use a pattern such as this, you’ll need to ensure your operational monitoring and alerting processes around certificate lifecycle management are solid. In addition, you will also need to manage the revocation network flows. In a past life, these were the flows I often observed biting organizations.

Lastly, this pattern involves a lot of hops where the traffic is being decrypted and re-encrypted. This requires compute time which could add to latency. These latency issues could be impactful depending on the latency requirements of the application and what type and volume of data is flowing between the user and web server.

I have to admit I enjoyed labbing this one out. These days I usually spend a majority of my time in governance conversations focusing on people and process. Getting back into the technology and spending some time playing with the Azure Firewall Premium SKU and Application Gateway was a great learning experience. It will be interesting to see over time how well it competes against the behemoths of the industry such as Palo Alto.

Over the next week I’ll be making some small tweaks to this design to see whether I can stick the Key Vault behind a Private Endpoint (documentation is unclear as to whether or not this is supported) and to mess with the logs provided by both the Azure Firewall and Application Gateway to see how challenging correlation of sessions is.

I hope you have a great long weekend and see you next post!

What If… Volume 2

Hi there folks!

I’ve been busy lately buried in learning and practicing Kubernetes in preparation for the Certified Kubernetes Administrator exam. Tonight I’m taking a break to bring you another entry into the “What If” series I started a few months back.

Let’s get right to it.

What if I need to access a Private Endpoint in a subscription associated with a different Azure AD tenant and I have an existing Azure Private DNS Zone already?

I’ve been helping a good friend who recently joined Microsoft to support his customer as he gets up to speed on the Azure platform. This customer consists of two very large organizations which have a high degree of independence. Each of these organizations has its own Azure AD tenant and its own Azure footprint. One organization is further along in its cloud journey than the other.

Organization A (new to Azure) needed to consume some data that existed in an Azure SQL database in an Azure subscription associated with Organization B’s tenant. Both organizations have strict security and compliance requirements so they are heavy users of Azure PrivateLink Endpoints. A site-to-site VPN (virtual private network) connection was established between the two organizations to facilitate network communication between the Azure environments.

Customer Environment

The customer environment looked similar to the above where a machine on-premises in Organization A needed to access the Azure SQL database in Organization B. If you look closely, you probably see the problem already. From a DNS perspective, we have two Azure Private DNS Zones for privatelink.database.windows.net. This means we have two authorities for the same zone.

My peer and I went back and forth with a few different solutions. One solution seemed obvious in that Organization A would manually create an A record in their Azure Private DNS zone pointing to the IP of the Private Link Endpoint in Organization B. Since the organizations had connectivity between the two environments, this would technically work. The challenge with this pattern is it would introduce a potential bottleneck depending on the size of the VPN pipe. It could also lead to egress costs for Organization A depending on how the VPN connection was implemented.
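For reference, the record Organization A would have had to maintain by hand looks like this with the Az module; the names and IP are hypothetical:

New-AzPrivateDnsRecordSet -ResourceGroupName "rg-orga-dns" `
    -ZoneName "privatelink.database.windows.net" -Name "orgb-sqlserver" `
    -RecordType A -Ttl 3600 `
    -PrivateDnsRecords (New-AzPrivateDnsRecordConfig -Ipv4Address "10.2.1.5")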

The other option we came up with was to create a Private Endpoint in Organization A’s Azure subscription which would be associated with the Azure SQL instance running in Organization B’s Azure subscription. This would avoid any egress costs, we wouldn’t be introducing a potential bottleneck, and we’d avoid the additional operational overhead of having to manually manage the A record in Organization A’s Azure Private DNS Zone. Neither of us had done this before, and while it seemed to be possible based on Microsoft’s documentation, the how was a bit lacking when it came to PaaS services.

To test this I used two separate personal tenants I keep to test scenarios that aren’t feasible to test with internal resources. My goal was to build an architecture like the below.

Target architecture

So was it possible? Why yes it was, and as an added bonus, I’m going to tell you how to do it.

When you create a Private Endpoint through the Azure Portal, there is a Connection Method radio button seen below. If you’re creating the Private Endpoint for a resource within the existing tenant, you can choose the Connect to an Azure resource in my directory option and you get a handy guided selection tool. If you want to connect to a resource outside your tenant, you instead have to select the Connect to an Azure resource by resource ID or alias option. In this field you enter the full resource ID of the resource you’re creating the Private Endpoint for, which in this case is the Azure SQL server resource ID. You’ll be prompted to enter the sub-resource, which for Azure SQL is SqlServer. Proceed to create the Private Endpoint.

Private Endpoint Creation
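If you’d rather script it, a sketch of the same flow with the Az module is below. The resource ID, names, and region are placeholders; the -ByManualRequest switch is what puts the connection into the cross-tenant approval workflow:

$conn = New-AzPrivateLinkServiceConnection -Name "plsc-orgb-sql" `
    -PrivateLinkServiceId "/subscriptions/<org-b-sub-id>/resourceGroups/<rg>/providers/Microsoft.Sql/servers/<server-name>" `
    -GroupId "sqlServer" -RequestMessage "Org A requesting access to the database"

$vnet = Get-AzVirtualNetwork -Name "vnet-orga" -ResourceGroupName "rg-orga"
$subnet = $vnet.Subnets | Where-Object { $_.Name -eq "snet-pe" }

New-AzPrivateEndpoint -Name "pe-orgb-sql" -ResourceGroupName "rg-orga" `
    -Location "eastus2" -Subnet $subnet `
    -PrivateLinkServiceConnection $conn -ByManualRequest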

After the Private Endpoint has been created you’ll observe it has a Connection status of Pending. This is part of the approval workflow where someone with control over the resource in the destination tenant needs to approve of the connection to the Azure SQL server.

Private Endpoint in pending status

If you jump over to the other resource in the target tenant and select the Private endpoint connections menu option you’ll see there is a pending connection that needs approval along with a message from the requestor.

Private Endpoint request

Select the endpoint to approve and click the Approve button. At that point, hop back over to the Private Endpoint in the requestor tenant and you’ll see it has been approved and is ready for use.
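The approval can be scripted from Organization B’s side as well. A minimal sketch, assuming the Az module and the SQL server’s resource ID:

$sqlServerId = "/subscriptions/<org-b-sub-id>/resourceGroups/<rg>/providers/Microsoft.Sql/servers/<server-name>"
$pending = Get-AzPrivateEndpointConnection -PrivateLinkResourceId $sqlServerId |
    Where-Object { $_.PrivateLinkServiceConnectionState.Status -eq "Pending" }
Approve-AzPrivateEndpointConnection -ResourceId $pending.Id `
    -Description "Approved for Organization A"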

This was a fun little problem to work through. I was always under the assumption this would work, the documentation said it would work, but I’m a trust but verify type of person so I wanted to see and experience it for myself.

I hope you enjoyed the post and learned something new. Now back to practicing Kubernetes labs!

Interesting behaviors with Private Endpoints

Hi folks!

Working for and with organizations in highly regulated industries like federal and state governments and commercial banks often necessitates diving REALLY deep into products and technologies. This means peeling back layers of the onion most people do not. The reason this pops up is that these organizations tend to have extremely complex environments due to the length of time the organization has existed and the strict laws and regulations they must abide by. This is probably the reason why I’ve always gravitated towards these industries.

I recently ran into an interesting use case where that willingness to dive deep was needed.

A customer I was working with was wrapping up its Azure landing zone deployment and was beginning to deploy its initial workloads. A number of these workloads used Microsoft Azure PaaS (platform-as-a-service) services such as Azure Storage and Azure Key Vault. The customer had made the wise choice to consume the services through Azure Private Endpoints. I’m not going to go into detail on the basics of Azure Private Endpoints; there is plenty of official Microsoft documentation that can cover the basics and give you the marketing pitch. You can check out my past posts on the topic, such as my series on Azure Private DNS and Azure Private Endpoints.

This particular customer chose to use them to consume the services over a private connection from both within Azure and on-premises, as well as to mitigate the risk of data exfiltration that exists when egressing the traffic to Internet public endpoints or using Azure Service Endpoints. One of the additional requirements the customer had was to mediate the traffic to Azure Private Endpoints using a security appliance. The security appliance was acting as a firewall to control traffic to the Private Endpoints as well as to perform deep packet inspection sometime in the future. This is the requirement that drove me down into the weeds of Private Endpoints and led to a lot of interesting observations about the behaviors of network traffic flowing to and back from Private Endpoints. Those are the observations I’ll be sharing today.

For this lab, I’ll be using a slightly modified version of my simple hub and spoke lab. I’ve modified and added the following items:

  • Virtual machine in hub runs Microsoft Windows DNS and is configured to forward all DNS traffic to Azure DNS (168.63.129.16)
  • Virtual machine in spoke is configured to use virtual machine in hub as a DNS server
  • Removed the route table from the spoke data subnet
  • Azure Private DNS Zone hosting the privatelink.blob.core.windows.net namespace
  • Azure Storage Account named mftesting hosting some sample objects in blob storage
  • Private Endpoint for the mftesting storage account blob storage placed in the spoke data subnet

Lab environment

The first interesting observation I made was that there was a /32 route for the Private Endpoint. While this is documented, I had never noticed it. In fact, most of the peers I ran this by weren’t aware of it either, largely because the only way you would see it is if you enumerated the effective routes for a VM and looked closely. Below I’ve included a screenshot of the effective routes on the VM in the spoke Virtual Network where the Private Endpoint was provisioned.

Effective routes on spoke VM

Notice the next hop type of InterfaceEndpoint. I was unable to find this next hop type in public documentation, but it is indeed related to Private Endpoints; the magic behind it isn’t something Microsoft documents publicly.
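If you want to spot these in your own environment, dumping the effective routes with PowerShell and filtering on that next hop type works nicely; the NIC and resource group names below are my lab’s:

Get-AzEffectiveRouteTable -NetworkInterfaceName "vm-spoke-nic" -ResourceGroupName "rg-spoke" |
    Where-Object { $_.NextHopType -eq "InterfaceEndpoint" } |
    Format-Table AddressPrefix, NextHopType, NextHopIpAddress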

Now this route is interesting for a few reasons. It doesn’t just propagate to all of the route tables of subnets within the Virtual Network; it also propagates to all of the route tables in directly peered Virtual Networks. In the hub and spoke architecture that is recommended for Microsoft Azure, this means that every Private Endpoint you create in a spoke Virtual Network is propagated as a system route to the route tables of each subnet in the hub Virtual Network. Below you can see a screenshot of the effective routes on the VM running in the hub Virtual Network.

Effective routes on hub VM

This can make things complicated if you have a requirement such as the customer I was working with where you want to control network traffic to the Private Endpoint. The only way to do that completely is to create /32 UDRs (user-defined routes) in every route table in both the hub and spoke. With a limit of 400 UDRs per route table, you can quickly see how this may break down at scale.

There is another interesting thing about this route. Recall from the effective routes for the spoke VM that there is a /32 system route for the Private Endpoint. Since this is the most specific route, all traffic should be routed directly to the Private Endpoint, right? Let’s check that out. Here I ran a scan against the Private Endpoint with nmap using the ICMP, UDP, and TCP protocols. I then opened the Log Analytics Workspace and ran a query across the Azure Firewall logs for any traffic to the Private Endpoint from the VM, and lo and behold, there was the ICMP and UDP traffic nmap generated.
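For the curious, the scans themselves were nothing fancy; something along these lines, where the target IP stands in for my lab’s Private Endpoint:

nmap -sn -PE 10.1.0.5         # ICMP echo ping sweep, no port scan
nmap -sU -p 53,123 10.1.0.5   # UDP scan against a couple of ports
nmap -sT -p 443 10.1.0.5      # TCP connect scan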

Captured UDP and ICMP traffic

Yes folks, that /32 route is protocol aware and will only apply to TCP traffic. UDP and ICMP traffic will not be affected. Software-defined networking is grand, isn’t it? 🙂

You may be asking why the hell I decided to test this particular piece. The reason I followed this breadcrumb was my customer had set up a UDR to route traffic from the VM to an NVA in the hub and attempted to send an ICMP ping to the Private Endpoint. In reviewing their firewall logs they saw only the ICMP traffic. This finding was what drove me to test all three protocols and make the observation that the route only affects TCP traffic.

Microsoft’s public documentation mentions that Private Endpoints only support TCP at this time, but the documentation does not specify that this system route does not apply to UDP and ICMP traffic. This can result in confusion such as it did for this customer.

So how did we resolve this for my customer? Well in a very odd coincidence, a wonderful person over at Microsoft recently published some patterns on how to approach this problem. You can (and should) read the documentation for the full details, but I’ll cover some of the highlights.

There are four patterns that are offered up. Scenario 3 is not applicable for any enterprise customer given that those customers will be using a hub and spoke pattern. Scenario 1 may work, but in my opinion it is going to architect you into a corner over the long term, so I would avoid it if it were me. That leaves us with Scenario 2 and Scenario 4.

Scenario 2 is one I want to touch on first. Now if you have a significant background in networking, this scenario will leave you scratching your head a bit.

Microsoft Documentation Scenario 2

Notice how a UDR is applied to the subnet with the VM which will route traffic to Azure Firewall; however, there is no corresponding UDR applied to the Private Endpoint. This makes sense since Private Endpoints don’t support UDRs at this time and would ignore them anyway. Now you old networking geeks probably see the problem here. If the packet from the VM has to travel from A (the VM) to B (stateful firewall) to C (the Private Endpoint), the stateful firewall will make a note of that connection in its cache and be expecting the return packets from the Private Endpoint. The problem is the Private Endpoint doesn’t know that it needs to take the C (Private Endpoint) to B (stateful firewall) to A (VM) path back because it isn’t aware of that route, so you’d expect an asymmetric routing situation.

If you’re like me, you’d assume you’d need to SNAT in this scenario. Oddly enough, due to the magic of software-defined routing, you do not. This struck me as very odd because in scenario 3, where everything is in the same Virtual Network, you do need to SNAT. I’m not sure why this is, but sometimes accepting magic is part of living in a software-defined world.

Finally, we come to scenario 4. This is a common scenario for most customers because who doesn’t want to access Azure PaaS services over an ExpressRoute connection vs an Internet connection? For this scenario, you again need to SNAT. So honestly, I’d just SNAT for both scenarios 2 and 4 to maintain consistency. I have successfully tested scenario 2 with SNAT, so it does indeed work as you’d expect.

Well folks I hope you found this information helpful. While much of it is mentioned in public documentation, it lacks the depth that those of us working in complex environments need and those of us who like to geek out a bit want.

See you next post!

Force Tunneling Azure Firewall to pfSense – Part 1

The Problem

Welcome back fellow geeks! I hope you all are staying healthy and not going too stir crazy being stuck at home. I’m here tonight to help break the monotony and walk you through a fun lab I recently put together.

I recently had a customer building out a sandbox environment for experimentation in Microsoft Azure. For this environment the customer opted to set up an S2S VPN (site-to-site virtual private network) to establish connectivity between their on-premises data center and Azure. The customer had requirements to use BGP (border gateway protocol) to exchange routes between on-premises and Azure. Additionally, their security team required all Internet-bound traffic be piped back on-premises (force tunneling) through a set of security appliances before being egressed out to the Internet from their data center.

While I’ve set up connectivity with Azure in the past using an S2S VPN, it was with a policy-based VPN vs a route-based VPN that utilized BGP. I’ve also worked with a lot of customers that had requirements for forced tunneling but never got involved much in the implementation. My customers typically use Microsoft ExpressRoute for connectivity with on-premises and a third-party NVA (network virtual appliance) like a Palo Alto or Imperva. Since I’m not cool enough to have a lab with ExpressRoute and I’m too cheap to pay for an NVA, I’ve never had a chance to do the implementation myself. This has meant relying on documentation and other folks within Microsoft that have had that experience.

Beyond the implementation gap in that pattern, I also have gaps in my BGP skill set. While I’ve been lucky enough to play with a lot of different technologies over the course of my career, enterprise routing was one area I never got to dive deep into. Over my time at Microsoft and AWS, I’ve had to learn the concepts of the protocol and how to use it within the public cloud, but I have still lacked any practical implementation experience.

If you know me, you know I hate not being able to implement the technologies I speak with customers about. Hence, this blog post was born. I’ll be walking you through the lab I built to address the gaps in my BGP knowledge and get some practical experience force tunneling traffic. Enough with my blabbing, let’s get into it.

Lab Environment

Lab Environment

The complete lab setup I used is illustrated above. In my home lab I’m using the 192.168.100.0/24 address range and have assigned the .1 address to the pfSense interface. Another interface on the device has been configured for DHCP to receive a public IP address from my ISP. Within Azure I’ve set up a single VNet (Virtual Network) assigned the address block of 10.0.0.0/16. Within the VNet I’ve created five subnets, each using a /24 block of address space (I’m terrible at subnetting).

Inside the GatewaySubnet I’ve provisioned a VPN VNG (Virtual Network Gateway) with the VpnGw2 SKU to support BGP. The subnet named primary contains a single Windows Server 2016 VM (Virtual Machine) that I’ll be using to test the setup. Azure Bastion sits in the Azure Bastion subnet providing me with remote access into the VM.

Finally, an Azure Firewall instance has been provisioned using the new forced tunneling feature in preview. To support this feature, I’ve provisioned two subnets, one named AzureFirewallSubnet and one named AzureFirewallManagementSubnet, as well as two public IPs. To route the traffic as needed, I’ve created three route tables with some user-defined routes.

For this post I’m going to walk through the setup of the S2S VPN tunnel. Anytime I can refer you to official documentation for a step-by-step process, I’ll include a hyperlink. The steps that aren’t documented in a single place or documented at all will be the steps I’ll cover in detail.

The first thing you need to do is provision a VNet (Virtual Network). The VNet must at least include a subnet named GatewaySubnet; Microsoft requires this name for the subnet in order to deploy a VNG (Virtual Network Gateway). You’ll additionally want to provision another subnet, named whatever you want, to hold the VM (virtual machine) you’ll test connectivity with. If you want to use Azure Bastion for remote access to the VM, you’ll need a third subnet, which must be named AzureBastionSubnet.

While you’re twiddling your thumbs for 20 minutes waiting for the VNG, optional Bastion, and VM, you can create the local network gateway. The local network gateway is a logical resource in Azure which represents your on-premises VPN appliance. To set this resource up you’ll need a few different items:

  • The public IP address in use by your VPN appliance
  • The BGP peer address you’ll be peering with Azure
  • The ASN (autonomous system number) you’re using on-premises

For this lab you’ll want to use a private ASN between 64512-65514 or 65521-65534.

Below is a screenshot of my configuration. I included the entire address space I’m going to advertise, but if you’re using BGP you only need to include the addresses you’ll be using as the BGP peer.

localgateway
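The same resource can be stood up with a quick bit of PowerShell. The sketch below mirrors the screenshot; the public IP is a placeholder, and the ASN is just an example from the allowed private range:

New-AzLocalNetworkGateway -Name "lng-homelab" -ResourceGroupName "rg-lab" `
    -Location "eastus2" -GatewayIpAddress "<my-home-public-ip>" `
    -AddressPrefix "192.168.100.0/24" -Asn 65521 -BgpPeeringAddress "192.168.100.1"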

Now that Azure is provisioning all your necessary resources, it’s a good time to bounce over to pfSense. Note that pfSense doesn’t provide BGP support natively. For that you’ll need to add the OpenBGPD package. To do that, navigate to the System drop-down menu and choose Package Manager. Search for BGP and install the OpenBGPD package. Once complete you’ll see it as an installed package as seen below.

packagemanager

Once the VPN Gateway has been provisioned you can begin configuration of the connection. The connection is also represented in Azure as a logical resource. There isn’t much to configure when you create the connection through the Portal. If you configure it through PowerShell, CLI, or an ARM template, you’ll have the flexibility to tweak the configuration of the tunnel. This includes the ability to limit the encryption ciphers and hashing algorithms supported on the Azure end. Once the connection is provisioned, open up the resource blade for it, go to the Configuration menu item in the Settings section, and toggle BGP to Enabled.

connection

Before you bounce over to pfSense and configure that end, you’ll need a few pieces of information from the VPN Gateway. Within the Portal open up the VNG resource blade. Note the public IP address that has been assigned to the VNG; you’ll need this for the pfSense setup. Next click the Configuration menu item in the Settings section. Here you’ll want to check off the Configure BGP ASN check box and note the ASN (by default 65515) and the BGP peer IP address because you’ll need them later. Click Save once you’re done. This change will take around 5 minutes.

bgp

It’s now time to hop over to pfSense. From the main menu navigate to the VPN drop-down menu and choose the IPsec option. You’ll first need to create an IKE Phase 1 entry to establish the authentication for the tunnel. In the General Information section ensure the Key Exchange Version box is populated with IKEv2 and the Remote Gateway is populated with the public IP address of the VNG. In the Phase 1 Proposal (Authentication) section, choose the Mutual PSK (Pre-Shared Key) option, set My identifier to My IP Address, and set Peer identifier to Peer IP address. Plug in the PSK you set up in Azure.

In the Phase 1 Proposal (Encryption Algorithm) section pick your preferred encryption algorithm, key length, hashing algorithm, and Diffie-Hellman group. The Azure end supports a number of cryptographic combinations; just be aware you’ll need to configure a custom IPsec policy using the CLI, PowerShell, or an ARM template if you pick a combination that isn’t offered by default. I’m not sure what it supports by default because I couldn’t find any documentation on it. It seems like you’ll be forced to use DHGroup2 if you create the connection through the Azure Portal, which you really shouldn’t be using due to the small key length. If you want to nerd out a bit, take a read through this document. I wanted to bump this up to DHGroup24, so I opted to create the custom IPsec policy with the configuration below.

$ipsecpol = New-AzIpsecPolicy -IkeEncryption AES256 -IkeIntegrity SHA256 -DhGroup DHGroup24 -IpsecEncryption GCMAES256 -IpsecIntegrity GCMAES256 -PfsGroup None -SALifeTimeSeconds 28800
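To round that out, here’s roughly how the connection gets created with that policy attached and BGP enabled. Everything here is placeholder lab naming and assumes the VNG and local network gateway built earlier:

$vng = Get-AzVirtualNetworkGateway -Name "vng-lab" -ResourceGroupName "rg-lab"
$lng = Get-AzLocalNetworkGateway -Name "lng-homelab" -ResourceGroupName "rg-lab"
New-AzVirtualNetworkGatewayConnection -Name "cn-homelab" -ResourceGroupName "rg-lab" `
    -Location "eastus2" -VirtualNetworkGateway1 $vng -LocalNetworkGateway2 $lng `
    -ConnectionType IPsec -SharedKey "<your-psk>" -EnableBgp $true `
    -IpsecPolicies $ipsecpol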

Next up you need to configure a Phase 2 entry which will control how traffic is carried across the tunnel. Expand the Phase 1 entry you created and click the Add P2 button to add a phase 2 entry. In the General Information section you’ll want to set the Local Network option to Network with an address of 0.0.0.0/0. This will allow us to tunnel traffic to any address through the VPN tunnel, which will support our use case for the forced tunneling we’ll create later on. In the Remote Network section, set it to the CIDR block of the VNet. In the Phase 2 Proposal section, configure the settings to support whatever encryption setup you’re using. For my configuration, I set it up as seen in the screenshot below.

phase2

Once the phase 2 entry is configured, navigate to the Status drop-down menu and choose IPsec. Click the Connect button and, assuming you configured everything correctly, the status will shift from Disconnected, to Connecting, and end on Established as seen below.

ipsecstatus

Hurray, you have an established VPN tunnel. Now it’s time to configure BGP.

Since you’ve already toggled the appropriate options in Azure to support BGP, it’s now time to configure it in pfSense. You will first need to create a firewall rule to allow the BGP traffic to flow between Azure and the pfSense box. To do this you’ll select the Firewall drop-down menu and choose the Rules option. Create a new rule to allow TCP port 179 from the source of the Azure BGP peer IP you noted earlier to the pfSense interface IP for the network you’re connecting to Azure.

firewallrule1

Next you have to open the Services drop-down menu and choose OpenBGPD. In this section you have a few menu options, one of which allows you to modify the raw config. Like the idiot I am, I ignored the comment at the beginning of the raw config that says not to edit it. After editing it, I was unable to configure using the menu options. If you’re not an idiot like me, you should be able to configure it using the menus. My working config is illustrated below.
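Since a screenshot is hard to copy from, here’s the general shape of the raw config; treat it as a sketch rather than a verbatim copy of mine. The ASN lines up with the local network gateway from earlier, and the neighbor address is whatever BGP peer IP your VNG reports (mine below is just an example):

AS 65521
router-id 192.168.100.1
fib-update yes
network 192.168.100.0/24

neighbor 10.0.0.254 {
        remote-as 65515
        descr "azure-vng"
        announce all
}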

bgpconfig

Once you have your config set, save it and give it a minute. Then navigate to the Status section of the OpenBGPD service. Scroll to the bottom and check out the OpenBGPD Neighbors section. If you’ve misconfigured anything, you’ll receive an error that the log file can’t be written (useful, right?).

bgpstatus

Additionally, when I checked the effective routes for the network interface of the VM in Azure, I could see the routes propagating into the VM’s subnet.

routes

You can validate your connectivity at this point in any number of ways. I went the lazy route and used pfSense’s Test Port capability located in the Diagnostics drop-down menu. Make sure that you open the appropriate rules in any NSGs between you and the VM. Also consider the VM’s host firewall if you opt to use a non-standard port or protocol like ICMP. If you opt to test from Azure back on-premises, make sure to open the appropriate firewall rules in the pfSense firewall for the IPsec interface.

connectiontest
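If you’d rather kick off the test from the Azure side, plain old Test-NetConnection from the Windows VM works just as well; the IP and port below are just examples from my home lab range:

Test-NetConnection -ComputerName 192.168.100.10 -Port 22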

With that you have a working S2S VPN complete with BGP exchange of routes. That will wrap up this post. In the next post I’ll walk through the configuration of forced tunneling with Azure Firewall.

Continue the journey in the second post.