Azure Files and AD DS – Part 1

Welcome back folks.

I recently had a few customers reach out to me with questions around Azure Files integration with Windows Active Directory Domain Services (AD DS). Since I had never used it, I decided to build a small lab to test the functionality and better understand the service. In this series I’ll be walking through what the functionality provides, how I observed it working, and how to set it up.

In any enterprise you will have a number of Windows file shares hosting critical corporate data. If you’ve ever maintained or supported those file shares, you’re quite familiar with the absolute sh*t show that occurs across an organization when the coveted H drive is no longer accessible. Maintaining a large cluster of Windows Servers backing corporate file shares can be significantly complex: you have your upgrades, patches, needs to scale, and failures with DFS-R (Distributed File System Replication) or FRS (File Replication Service) for some of you more unfortunate souls. Wouldn’t it be wonderful if all that infrastructure could be abstracted away and managed by someone else besides you? That is the major value proposition of Azure Files.

Azure Files is a PaaS (platform-as-a-service) offering provided by Microsoft Azure that is built on top of Azure Storage. It provides fully managed file shares over a protocol you know and love, SMB (Server Message Block). You simply create an Azure File share within an Azure Storage account and connect to the file share using SMB from a Windows, Linux, or MacOS machine.
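
If you’re curious what that looks like programmatically, here’s a minimal sketch using the azure-storage-file-share Python SDK to create a share and drop a file into it. The connection string, share name, and file name are placeholders for your own environment.

# Minimal sketch: create an Azure File share and upload a file with the
# azure-storage-file-share SDK. Connection string and names are placeholders.
from azure.storage.fileshare import ShareClient

conn_str = "<storage-account-connection-string>"

# Create a client scoped to a share named corpdata and create the share
share = ShareClient.from_connection_string(conn_str=conn_str, share_name="corpdata")
share.create_share()  # raises ResourceExistsError if the share already exists

# Upload a file to the root of the new share
file_client = share.get_file_client("hello.txt")
with open("hello.txt", "rb") as data:
    file_client.upload_file(data)

Keep in mind this uses the storage account connection string (account key); the SMB mount from a domain-joined machine is where the AD DS authentication covered in this series comes into play.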

Magic right? Well what about authentication and authorization? How does Microsoft validate that you are who you say you are and that you’re authorized to connect to the file share? That my friends will be what we cover from this point on.

Azure File shares support three methods of authentication:

  1. Storage account access keys
  2. Azure AD Domain Services
  3. AD DS (Active Directory Domain Services)

Of the three methods, I’m going to cover authentication using AD DS (which I’ll refer to as Windows AD).

Support for Windows AD with Azure Files graduated to general availability last month. As much as we’d like it to not be true, Windows AD and traditional SMB file shares will be with us for many years to come. This is especially true for enterprises taking the hybrid approach to cloud, which is a large majority of the customer base I work with. The Windows AD integration allows organizations to leverage their existing Windows AD identities for authentication and protect the files using the Windows ACLs (access control lists) they’ve grown to love. This provides a number of benefits:

  • Single sign-on experience for users (via Kerberos)
  • Existing Windows ACLs can be preserved if moving files to an Azure File share integrated with Windows AD
  • Easier to supplement existing on-premises file servers without affecting the user experience
  • Better support for lift and shift workloads which may have dependencies on SMB
  • No infrastructure to manage
  • Support for Azure File Sync, which allows you to store shares in Azure Files and cache them on Windows Servers

There are a few key dependencies and limitations I want to call out. Keep in mind you’ll want to visit the official documentation as these will change over time.

  • The Windows AD identities you want to use to access the file shares must be synchronized to Azure AD (we’ll cover the why later)
  • The storage account hosting the Azure File share must be in the same tenant you’re syncing the identities to
  • Linux VMs are not supported at this time
  • Services using the computer account will not be able to access an Azure File share, so plan on using a traditional service account (aka a user account) instead
  • Clients accessing the file share must be Windows 7/Server 2008 R2 or above

Lab Environment

So I’ve given you the marketing pitch; now let’s take a look at the lab environment I’ll be using for this walkthrough.

For my lab I’ve provisioned three VMs (virtual machines) in a VNet (virtual network). I have a domain controller which provides the jogcloud.local Windows AD forest, an Azure AD Connect server which is synchronizing users to the jogcloud Azure AD tenant, and a member server which I’ll use to access the file share.

In addition to the virtual machines, I also have an Azure Storage account where I’ll create the shares. I’ve also configured a Private Link endpoint which will allow me to access the file share without having to traverse the Internet. Lastly, I have an Azure Private DNS zone hosting the necessary DNS namespace needed to handle resolution to my Private Endpoint. I won’t be covering the inner workings of Azure Private DNS and Private Endpoints, but you can read my series on how those two features work together here.

In my next post I’ll dive into how to set up the integration, walk through some Wireshark and Fiddler captures, and cover some of the challenges I ran into when running through this lab for this series.

See you next post!

Azure Key Vault, Certificates, and Python, oh my!

Welcome back folks! I recently had a few customers ask me about using certificates with Azure Key Vault and switching from using a client secret to a client certificate for their Azure AD (Active Directory) service principals. The questions put me on a path of diving deeper into the topics, which resulted in some great learning and an opportunity to create some Python code samples.

Azure Key Vault is Microsoft’s solution for secure secret, key, and credential management. If you’re coming from the AWS (Amazon Web Services) realm, you can think of it as AWS KMS (Key Management Service) with a little bit of AWS Secrets Manager and AWS Certificate Manager thrown in there. The use cases for secrets and keys are fairly well known and straightforward, so I’m instead going to focus time on the certificates use case.

In a world where passwordless is the newest buzzword, there is an increasing usage of secrets (or passwords) in the non-human world. These secrets are often used to programmatically interact with APIs. In the Microsoft world you have your service principals and client secrets, in the AWS world you have your IAM Users with secret access keys, and many more third parties out there have similar patterns that require the use of an access key. Vendors like Microsoft and AWS have worked to mitigate this growing problem in the scope of their APIs by introducing features such as Azure Managed Identities and AWS IAM Roles which use short-lived dynamic secrets. However, both of these solutions work only if your workload is running within the relevant public cloud and the service it’s running within supports the feature. What about third-party APIs, multi-cloud workloads, or on-premises workloads? In those instances you’re often forced to fall back to secret keys.

There is a better option than secret keys, and that is client certificates. While a secret falls into the “something you know” category, client certificates fall into the “something you have” category. They provide a higher assurance of identity (assuming you exercise good key management practices) and can have more flexibility in their secure storage and usage. Azure service principals support certificate-based authentication in addition to client secrets, and Azure Key Vault supports the secure storage of certificates. Used in combination, they can yield some pretty cool patterns.

Before I get into those patterns, I want to cover some of the basics of how Azure Key Vault stores certificates. There are some nuances to how it’s designed that are incredibly useful to understand. I’m not going to provide a deep dive on the inner workings of Key Vault, the public documentation does a decent enough job of that, but I am going to cover some of the basics which will help get you up and running.

Certificates can be both imported into and generated within Azure Key Vault. Generated certificates can be self-signed, issued from a selection of public CAs (certificate authorities) Key Vault is integrated with, or you can generate a CSR (certificate signing request) to fulfill with your own CA. These processes are well detailed in the documentation, so I won’t be touching further on them.
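
If you prefer code over portal clicks, here’s a minimal sketch using the azure-keyvault-certificates library to generate a self-signed certificate with the default policy. The vault URL and certificate name are placeholders.

# Minimal sketch: generate a self-signed certificate in Key Vault using the
# azure-keyvault-certificates SDK. Vault URL and certificate name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.certificates import CertificateClient, CertificatePolicy

credential = DefaultAzureCredential()
client = CertificateClient(vault_url="https://myvault.vault.azure.net", credential=credential)

# The default policy creates a self-signed certificate with an exportable private key
poller = client.begin_create_certificate(
    certificate_name="my-client-cert",
    policy=CertificatePolicy.get_default()
)
certificate = poller.result()
print(certificate.properties.version)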

Once you’ve imported or generated a certificate and private key into Key Vault, we get into the interesting stuff. The components of the certificate and private key are exposed in different ways through different interfaces as seen below.

Key Vault and certificates

Metadata about the certificate and the certificate itself are accessible via the certificates interface. This information includes the certificate itself provided in DER (Distinguished Encoding Rules) format, properties of the certificate such as the expiration date, and metadata about the private key. You’ll use this interface to get a copy of the certificate (minus the private key) or pull specific properties of the certificate such as the thumbprint.
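
Here’s a minimal sketch of pulling the public certificate and a few of those properties through the certificates interface; the vault URL and certificate name are placeholders.

# Minimal sketch: retrieve the public certificate (DER) and its properties through
# the Key Vault certificates interface. The private key is not returned here.
from azure.identity import DefaultAzureCredential
from azure.keyvault.certificates import CertificateClient

client = CertificateClient(vault_url="https://myvault.vault.azure.net",
                           credential=DefaultAzureCredential())

cert = client.get_certificate("my-client-cert")

# cert.cer holds the certificate in DER format (no private key)
with open("my-client-cert.cer", "wb") as f:
    f.write(cert.cer)

# Properties such as the expiration date and thumbprint are exposed as metadata
print(cert.properties.expires_on)
print(cert.properties.x509_thumbprint.hex())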

Operations using the private key such as sign, verify, encrypt, and decrypt, are made available through the key interface. Say you want to sign a JWT (JSON Web Token) to authenticate to an API, you would use this interface.
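
A minimal sketch of that using the CryptographyClient from the azure-keyvault-keys library looks like the following; the vault URL and certificate name are placeholders, and the digest is just a stand-in for whatever you’d actually be signing.

# Minimal sketch: sign a digest with the certificate's private key through the
# Key Vault key interface. The private key never leaves Key Vault.
import hashlib
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient
from azure.keyvault.keys.crypto import CryptographyClient, SignatureAlgorithm

credential = DefaultAzureCredential()
key_client = KeyClient(vault_url="https://myvault.vault.azure.net", credential=credential)

# A Key Vault certificate has a backing key addressable by the same name
key = key_client.get_key("my-client-cert")
crypto_client = CryptographyClient(key, credential=credential)

digest = hashlib.sha256(b"header.payload").digest()
result = crypto_client.sign(SignatureAlgorithm.rs256, digest)
print(result.signature.hex())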

Lastly, the private key is available through the secret interface. This is where you could retrieve the private key in PEM (privacy enhanced mail) or PKCS#12 (public key cryptography standards) format if you’ve set the private key to be exportable. Maybe you’re using a library like MSAL (Microsoft Authentication Library) which requires the private key as an input when obtaining an OAuth access token using a confidential client.
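
Here’s a minimal sketch of that using the azure-keyvault-secrets library; the vault URL and certificate name are placeholders, and the value comes back base64-encoded PKCS#12 unless the certificate was created with PEM as its content type.

# Minimal sketch: retrieve an exportable certificate and private key through the
# Key Vault secret interface.
import base64
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(vault_url="https://myvault.vault.azure.net",
                      credential=DefaultAzureCredential())

secret = client.get_secret("my-client-cert")
print(secret.properties.content_type)  # application/x-pkcs12 or application/x-pem-file

if secret.properties.content_type == "application/x-pkcs12":
    with open("my-client-cert.pfx", "wb") as f:
        f.write(base64.b64decode(secret.value))
else:
    with open("my-client-cert.pem", "w") as f:
        f.write(secret.value)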

Now that you understand those basics, let’s look at some patterns that you could leverage.

In the first pattern, consider that you have a CI/CD (continuous integration / continuous delivery) pipeline running on-premises that you wish to use to provision resources in Azure. You have a strict requirement from your security team that the infrastructure remain on-premises. In this scenario you could provision a service principal that is configured for certificate authentication and use the MSAL libraries to authenticate to Azure AD to obtain the access tokens needed to access the ARM (Azure Resource Manager) API. Here is Python sample code demonstrating this pattern.

On-premises certificate authentication with MSAL
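
If you just want the gist without jumping over to the full sample, here’s a minimal sketch of the pattern. The tenant ID, client ID, key file, and thumbprint are placeholders, and the full sample handles details like error checking that I’m skipping here.

# Minimal sketch: on-premises confidential client authenticating to Azure AD with a
# client certificate via MSAL, then calling the ARM API. All identifiers are placeholders.
import msal
import requests

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<service-principal-client-id>"

with open("my-client-cert.key", "r") as f:
    private_key = f.read()

app = msal.ConfidentialClientApplication(
    client_id=CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential={
        "private_key": private_key,
        "thumbprint": "<certificate-thumbprint>",  # hex SHA-1 thumbprint of the certificate
    },
)

# The .default scope requests whatever ARM permissions the service principal has been granted
token = app.acquire_token_for_client(scopes=["https://management.azure.com/.default"])

# Use the access token to call the ARM API, for example listing subscriptions
resp = requests.get(
    "https://management.azure.com/subscriptions?api-version=2020-01-01",
    headers={"Authorization": f"Bearer {token['access_token']}"},
)
print(resp.json())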

In the next pattern let’s consider you have a workload running in the Azure AD tenant you dedicate to internal enterprise workloads. You have a separate Azure AD tenant used for customer workloads. Within an Azure subscription associated with the customer tenant, there is an instance of Azure Event Hub you need to access from a workload running in the enterprise tenant. For this scenario you could use a pattern where the workload running in the enterprise tenant uses an Azure Managed Identity to retrieve a client certificate and private key from Key Vault to use with the MSAL library to obtain an access token for a service principal in the customer tenant which it will use to access the Event Hub.

Here is some sample Python code that could be used to demonstrate this pattern.
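
For the impatient, here’s a minimal sketch of the same flow rather than the full sample. The tenant IDs, vault URL, certificate name, and client ID are placeholders, and it assumes the certificate was stored in Key Vault with an exportable private key in PKCS#12 format.

# Minimal sketch: use a managed identity to pull a client certificate from Key Vault,
# then use MSAL to get a token for a service principal in the customer tenant and
# access Event Hubs. All identifiers are placeholders.
import base64
import msal
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.serialization import pkcs12

CUSTOMER_TENANT_ID = "<customer-tenant-id>"
CLIENT_ID = "<service-principal-in-customer-tenant>"

# 1. Retrieve the PKCS#12 blob from Key Vault using the workload's managed identity
secret_client = SecretClient(vault_url="https://myvault.vault.azure.net",
                             credential=ManagedIdentityCredential())
pfx_bytes = base64.b64decode(secret_client.get_secret("my-client-cert").value)

# 2. Extract the private key and certificate from the PKCS#12 blob
private_key, certificate, _ = pkcs12.load_key_and_certificates(pfx_bytes, password=None)
pem_key = private_key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption(),
).decode()
thumbprint = certificate.fingerprint(hashes.SHA1()).hex()

# 3. Authenticate to the customer tenant with the certificate and request a token for Event Hubs
app = msal.ConfidentialClientApplication(
    client_id=CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{CUSTOMER_TENANT_ID}",
    client_credential={"private_key": pem_key, "thumbprint": thumbprint},
)
token = app.acquire_token_for_client(scopes=["https://eventhubs.azure.net/.default"])
print(token.get("access_token", token))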

For the last pattern, let’s consider you have the same use case as above, but you are using the Premium SKU of Azure Key Vault because you have a regulatory requirement that the private key never leaves the HSM (hardware security module) and all cryptographic operations are performed on the HSM. This takes MSAL out of the picture because MSAL requires the private key be provided as a variable when using a client certificate for authentication of the OAuth client. In this scenario you can use the key interface of Key Vault to sign the JWT used to obtain the access token from Azure AD. This same pattern could be leveraged for other third-party APIs that support certificate-based authentication.

Here is a Python code sample of this pattern.
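
Again, here’s a minimal sketch of the general technique rather than the full sample: build the client assertion JWT by hand, sign the digest through the Key Vault key interface, and exchange the assertion for an access token. The tenant ID, client ID, vault URL, certificate name, and scope are placeholders.

# Minimal sketch: sign the client assertion JWT with the Key Vault key interface so the
# private key never leaves the HSM, then exchange it for an access token from Azure AD.
import base64
import hashlib
import json
import time
import uuid
import requests
from azure.identity import DefaultAzureCredential
from azure.keyvault.certificates import CertificateClient
from azure.keyvault.keys.crypto import CryptographyClient, SignatureAlgorithm

TENANT_ID = "<customer-tenant-id>"
CLIENT_ID = "<service-principal-client-id>"
VAULT_URL = "https://myvault.vault.azure.net"
CERT_NAME = "my-client-cert"
TOKEN_URL = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

credential = DefaultAzureCredential()
cert = CertificateClient(vault_url=VAULT_URL, credential=credential).get_certificate(CERT_NAME)
crypto = CryptographyClient(cert.key_id, credential=credential)

# The x5t header carries the certificate thumbprint so Azure AD can find the public key
header = {"alg": "RS256", "typ": "JWT", "x5t": b64url(cert.properties.x509_thumbprint)}
now = int(time.time())
claims = {
    "aud": TOKEN_URL,
    "iss": CLIENT_ID,
    "sub": CLIENT_ID,
    "jti": str(uuid.uuid4()),
    "nbf": now,
    "exp": now + 600,
}

signing_input = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(claims).encode())}"
digest = hashlib.sha256(signing_input.encode()).digest()
signature = crypto.sign(SignatureAlgorithm.rs256, digest).signature
client_assertion = f"{signing_input}.{b64url(signature)}"

# Exchange the signed assertion for an access token
resp = requests.post(TOKEN_URL, data={
    "grant_type": "client_credentials",
    "client_id": CLIENT_ID,
    "scope": "https://eventhubs.azure.net/.default",
    "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
    "client_assertion": client_assertion,
})
print(resp.json())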

Well folks I’m going to keep it short and sweet. Hopefully this brief blog post has helped to show you the value of Key Vault and provide some options to you for moving away from secret-based credentials for your non-human access to APIs. Additionally, I really hope you get some value out of the Python code samples. I know there is a fairly significant gap in Python sample code for these types of operations, so hopefully this begins filling it.

Thanks!

Python Sample Web App and API for Azure AD B2C

Hello again folks.

I’ve recently had a number of inquiries on Microsoft’s AAD (Azure Active Directory) B2C (Business-To-Consumer) offering. For those infrastructure folks who have had to manage customer identities in the past, you know the pain of managing these identities with legacy solutions such as LDAP (Lightweight Directory Access Protocol) servers or even a collection of Windows AD (Active Directory) forests. Developers have suffered along with us, carrying the burden of securely implementing the technologies into their code.

AAD B2C exists to make the process easier by providing a modern IDaaS (identity-as-a-service) offering complete with a modern directory accessible over a RESTful API, support for modern authentication and authorization protocols such as SAML, OpenID Connect, and OAuth, advanced features such as step-up authentication, and a ton of other bells and whistles. Along with these features, Microsoft also provides a great library in the form of the Microsoft Authentication Library (MSAL).

It had been just about 4 years since I last experimented with AAD B2C, so I was due for a refresher. Like many people, I learn best from reading and doing. For the doing step, I needed an application I could experiment with. My first stop was the samples Microsoft provides. The Python pickings are very slim. There is a basic web application Ray Lou put together which does a great job demonstrating basic authentication (and which I used as a base for my web app). However, I wanted to test additional features like step-up authentication and securing a custom-built API with AAD B2C.

So began my journey to create the web app and web API I’ll be walking through setting up in this post. Over the past few weeks I spent time diving into the Flask web framework and putting my subpar Python skills to work. After many late nights and long weekends spent reading documentation and troubleshooting with Fiddler, I finished the solution, which consists of a web app and a web API.

Solution Design

The solution is quite simple. It is intended to simulate a scenario where a financial services institution is providing a customer access to their insurance policy information. The insurance policy information is pulled from a legacy system that has been wrapped in a modern web-based API. The customer can view their account information and change the beneficiary on the policy.

AAD B2C provides the authentication to the application via Open ID Connect. Delegated access to the user’s policy information in the web API is handled through OAuth via an access token obtained by the web app from AAD B2C. Finally, when changing the beneficiary, the user is prompted for MFA which demonstrates the step-up authentication capabilities of AAD B2C.
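
To make that concrete, here’s a minimal sketch (not necessarily how the sample implements it) of a confidential client using MSAL to run the sign-up / sign-in user flow and request an access token for the web API. The tenant name, policy name, client ID and secret, and scope URIs are placeholders based on the registrations described later in this post.

# Minimal sketch: web app uses MSAL to run the B2C sign-in user flow and obtain an
# access token for the web API. All identifiers are placeholders.
import msal

TENANT = "myb2c"
POLICY = "B2C_1_signupsignin1"
AUTHORITY = f"https://{TENANT}.b2clogin.com/{TENANT}.onmicrosoft.com/{POLICY}"
SCOPES = [f"https://{TENANT}.onmicrosoft.com/api/Accounts.Read",
          f"https://{TENANT}.onmicrosoft.com/api/Accounts.Write"]

app = msal.ConfidentialClientApplication(
    client_id="<web-app-client-id>",
    client_credential="<web-app-client-secret>",
    authority=AUTHORITY,
)

# Step 1: build the authorization request and redirect the user to auth_flow["auth_uri"]
auth_flow = app.initiate_auth_code_flow(scopes=SCOPES,
                                        redirect_uri="http://localhost:5000/getAToken")

# Step 2: on the redirect back from B2C, exchange the authorization code for tokens.
# In Flask, auth_response would be request.args; the access token is then presented to
# the web API as a Bearer token.
# result = app.acquire_token_by_auth_code_flow(auth_flow, auth_response)
# access_token = result["access_token"]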

The solution uses four user flows. It has a profile editing user flow which allows the user to change information stored about them in the AAD B2C directory, a password reset flow to allow the user to change the password for their local AAD B2C identity, and two sign-up / sign-in flows. One sign-up / sign-in flow is enabled for MFA (multi-factor authentication) and one is not. The non-MFA enabled flow is kicked off at login to the web app, while the MFA enabled flow is used when the user attempts to change the beneficiary.

With the basics of the solution explained, let’s jump into how to set it up. Keep in mind I’ll be referring to public documentation where it makes sense to avoid reinventing the wheel. For prerequisites you’ll need to have Python 3 installed and running on your machine, a code editor (personally I use Visual Studio Code), and an Azure AD tenant and Azure subscription.

The first thing you’ll want to do is set up an AAD B2C directory as per these instructions. Once that is complete, you’ll want to register the web app and web API. To do this you’ll want to switch to your B2C directory and select the App registrations link. On the next screen select the New registration link.

In the Register an application screen you’ll need to fill in some information. You can name the application whatever you’d like. Leave the Who can use this application or access this API set to the option in the screenshot since this will be a B2C app. In the redirect URI put in http://localhost:5000/getAToken. This is the URI AAD B2C will redirect the user to after the user successfully authenticates and obtains an ID token and access token. Leave the Grant admin consent to openid and offline_access permissions option checked. Once complete hit the Register button. This process creates an identity for the application in the B2C directory and authorizes it to obtain ID tokens and access tokens from B2C.

Register an application

Next we need to gather some information from the newly registered application. If you navigate back to App registrations you’ll see your new app. Click on it to open up the application-specific settings. On the next screen you’ll see the Application ID (the client ID in OAuth terms). Record this because you’ll need it later.

App registration

So you have a client ID which identifies the app to B2C, but you need a credential to authenticate with. For that you’ll go to the Certificates & secrets link. Click on the New client secret button to generate a new credential and save it for later.

You will need to register one additional redirect URI. This redirect URI is used when the user authenticates with MFA during the step-up process. Go back to the Overview and click on the Redirect URIs menu item in the top section.

Register additional URI

Once the new page loads, add a redirect URI under the Web section. The URI you will need to add is http://localhost:5000/getATokenMFA. Make sure to save your changes by hitting the Save button.

Now that your web app is registered, you need to register the web API. Once again, click the New registration link under the App registrations view. You’ll want to select the same options, except you do not need to provide a redirect URI. Click the Register button to complete the registration.

Once the app is registered, go into the application settings and click on the Expose an API link. Here you need to define two scopes which will be included in the token and authorize the app to perform specific functions, such as reading account information and writing account information, in the API. Click the Add a scope button and you’ll be prompted to set an Application ID URI, which you need to set to api. Once you’ve set it, hit the Save and continue button.

Application ID URI

The screen will refresh and you’ll be able to add your first scope. I have defined two scopes within the web API: one called Accounts.Read which grants access to read account information, and one called Accounts.Write which grants access to edit account information. Create the scope for Accounts.Read and repeat the process for Accounts.Write.

By default B2C grants applications registered with it the offline_access and openid permissions for Microsoft Graph. Since our API won’t be authenticating the user and will simply be verifying the access token passed by the web app, we could remove those permissions if we wanted. You can do this through the API permissions link which is located on the application settings page.
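
As an aside, here’s a minimal sketch (using PyJWT, which isn’t necessarily what the sample uses) of how a web API can validate a B2C-issued access token by checking its signature against the policy’s published signing keys and verifying the audience. The tenant name, policy name, and client ID are placeholders.

# Minimal sketch: validate a B2C-issued access token in the web API using PyJWT.
# All identifiers are placeholders.
import jwt
from jwt import PyJWKClient

TENANT = "myb2c"
POLICY = "B2C_1_signupsignin1"
API_CLIENT_ID = "<web-api-client-id>"

JWKS_URL = (f"https://{TENANT}.b2clogin.com/{TENANT}.onmicrosoft.com/"
            f"{POLICY}/discovery/v2.0/keys")

def validate_token(access_token: str) -> dict:
    # Look up the signing key referenced by the token's kid header from the JWKS endpoint
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(access_token)
    # Verify the signature, expiration, and that the token was issued for this API
    return jwt.decode(access_token, signing_key.key,
                      algorithms=["RS256"], audience=API_CLIENT_ID)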

The last step you have in the B2C portion of Azure is to grant your web app the permission to request an access token for the Accounts.Read and Accounts.Write scopes used by the web API. To do this you need to go back into the application settings for the web app and go to the API permissions link. Click the Add a permission button. In the Request API permissions window, select My APIs option and select the web API you registered. From there select the two permissions and click the Add permissions button.

To finish up with the permissions piece, you’ll grant admin consent to the permissions. At the API permissions window, click the Grant admin consent for <YOUR TENANT NAME> button.

Admin consent to permissions

At this point we’ve registered the web app and web API with Azure B2C. We now need to enable some user flows. Azure B2C has an insanely powerful policy framework that drives the behavior of B2C behind the scenes and allows you to do pretty much whatever you can think of. User flows are predefined policies that handle common tasks without you having to go and learn a new policy framework.

For this solution you will need to set up four user flows. You’ll need to create a sign-up and sign-in flow with MFA disabled, another with MFA enabled, a password reset flow, and a profile editing flow. Name the flows as listed in the image below.

User flow list

The sign-up and sign-in flow can be configured to collect and return specific claim values in the user’s ID token. You configure this by clicking on the user flow you want to modify and navigating to the user attributes and application claims sections as seen in the image below.

Sign in / Sign up claims/attributes

For this solution, ensure you are collecting and returning at least the Display Name, Email Address, Given Name, and Surname claims. Configure this in both the MFA and non-MFA enabled sign-up and sign-in policies.

Now it’s time to get the apps up and running. For this section I’ll be assuming you’re using Visual Studio Code.

Open up an instance of Visual Studio Code and clone the repository https://github.com/mattfeltonma/python-b2c-sample. For lazy people like myself, use CTRL+SHIFT+P to open up the command palette in Visual Studio Code, search for the Git: Clone command, and provide the repository URL.

Once the repo has cloned you’ll want to open two terminal instances. You can do this with the CTRL+SHIFT+` hotkey. In your first terminal navigate to the python-b2c-web-app directory. Create a new directory named flask_session. This directory will serve as the store for server-side sessions for the Flask-Session module.

In the same directory create a new directory named env and create a Python virtual environment. Repeat the process in the other terminal under the python-simple-web-api directory.

You do this using the command:

python -m venv env

Once the virtual environments are created, you will need to activate them. You will need to do this for both the web app and web API. Do this using the following command while you are in the relevant python-b2c-web-app or python-simple-web-api directory:

env\Scripts\activate

Next, load the required libraries. Do this using the command below. Repeat it for both the web app and the web API.

pip install -r requirements.txt

The environments are now ready to go. Next up you need to set some user variables. Within the terminal for the b2c-web-app create user variables for the following:

  • CLIENT_ID set to the client ID (application ID) of the web app. This is available in the B2C portal
  • CLIENT_SECRET set to the secret of the web app. If you didn’t record this when you generated it you will need to create a new credential.
  • B2C_DIR set to the name of your B2C tenant. This should be a single label name such as myb2c
  • FLASK_APP set to app.py. This tells Flask what file to execute.

Within the terminal for the simple-web-api create user variables for the following:

  • TENANT_NAME set to the name of your B2C tenant. This should be a single label name such as myb2c
  • TENANT_ID set to the tenant ID of your B2C tenant.
  • B2C_POLICY set to the name of your non-MFA B2C policy. For example, B2C_1_signupsignin1.
  • CLIENT_ID set to the client ID of the web API. This is available in the B2C portal
  • FLASK_APP set to app.py. This tells Flask what file to execute.

In Windows you can set these variables by using the following command:

set FLASK_APP=app.py

The last step you need to take is to modify the accounts.json file in the python-simple-web-api directory. You will need to add a sample record for each user identity you want to test. The email value of the account object must match the email address presented in the access token.

Now you need to start up the web servers. To do this you’ll use the flask command. In the terminal you set up for the python-b2c-web-app, run the following command:

flask run -h localhost -p 5000

Then in the terminal for the python-simple-web-api, run the following command:

flask run -h localhost -p 5001

You’re now ready to test the app! Open up a web browser and go to http://localhost:5000. You’ll be redirected to the login page seen below.

Clicking the Sign-In button will open up the B2C sign-in page. Here you can sign-in with an existing B2C account or create a new one.

After successfully logging in you’ll be presented with a simple home page. The Test API link will bring you to the public endpoint of the web API validating that the API is reachable and running. The Edit Profile link will redirect you to the B2C Edit Profile experience. Clicking the My Claims link will display the claims in your ID token as seen below.

Clicking the My Account calls the web API and queries the accounts.json file for a record for the user. The contents of the record are then displayed as seen below.

Clicking on the Change Beneficiary button will kick off the second MFA-enabled sign-in and sign-up user flow prompting the user for MFA. After successful MFA, the user is redirected to a page where they make the change to the record. Clicking the submit button modifies the accounts.json file with the new value and the user is redirected to the My Account page.

Well folks that’s it! Experiment, enjoy, and improve it. Over the next few weeks I’m planning on putting some newly practiced Docker skills to the test by containerizing the applications.

You can get the solution here.

Thanks everyone!

Force Tunneling Azure Firewall to pfSense – Part 2

Welcome back to my series on forced tunneling Azure Firewall using pfSense.  In my last post I covered the background of the problem I wanted to solve, the lab makeup I’m using, and the process to set up the S2S (site-to-site) VPN with pfSense and the exchange of routes over BGP.  Take a read through that post before jumping into this one.

At this point you should have a working S2S VPN from your Azure VNet to your pfSense router, and the two should be exchanging a few routes over BGP.  If you didn’t complete all the steps in the first post, go back and do them now.

Now that connectivity is established, it’s time to incorporate Azure Firewall.  Azure Firewall was introduced back in 2018 as a managed stateful firewall that can act as an alternative to rolling your own NVAs (network virtual appliances) like a Palo Alto or Check Point firewall.  Now I’m not going to lie to you and tell you it has all the bells and whistles that a 3rd-party NVA has, but it can provide a reasonable alternative depending on what your needs are.  The major benefits are that it’s a managed service, so Microsoft owns the responsibility of managing the health of the service and its high availability and failover; it’s closely integrated with the Azure platform; and it’s more than likely cheaper than what you’d pay for a 3rd-party NVA license.

Recently, Microsoft introduced support for forced tunneling into public preview.  This provides you with the ability to send all of the traffic received by Azure Firewall on to another security stack that may exist within Azure, on-premises, or in another cloud.  It helps to address some of the capability gaps such as the lack of support for DPI (deep packet inspection) for Internet-bound traffic.  You can leverage Azure Firewall to transitively route and mediate traffic between on-premises and Azure, hub and spoke, and spoke to spoke, while passing Internet-bound traffic on to another security stack with DPI capabilities.

With that out of the way, let’s continue with the lab.

The first thing you’ll want to do is deploy an instance of Azure Firewall.  To support forced tunneling, you’ll need to toggle the forced tunneling option to enabled.  You then need to provide another public IP address.  What’s happening here is the nodes are being created with two NICs (network interface cards).  One NIC will live in the AzureFirewallSubnet and one will live in the AzureFirewallManagementSubnet.  Traffic dedicated to Microsoft’s management of the nodes will go out to the Internet (but remains on Microsoft’s backbone) through the NIC in the AzureFirewallManagementSubnet.  Traffic from your VMs will exit through the NIC in the AzureFirewallSubnet.  This split also means you can now attach a UDR (user defined route) to the AzureFirewallSubnet to route that traffic to your own security stack.

azfwsetup

The Azure Firewall instance will take about 10-20 minutes to provision.  While you’re waiting you need to prepare the Virtual Network Gateway for forced tunneling.

Now if you go Googling, you’re going to come across this Microsoft article which describes setting a GatewayDefaultSite for the VPN Gateway.  While you can do it this way, if you opt for an active/active configuration both on-premises and for the VPN Gateway, you’ll need to flip this setting to the other local network gateway (your other router) in the event of a failover.

As an alternative solution you can propagate a default route via BGP from your on-premises router into Azure.  ECMP will be used by default and will spread the traffic across all available tunnels.  If one of your on-premises routers goes down, traffic will still be able to flow back on-premises without requiring you to fail anything over on the Azure end.  Note that if you want to make one of your routers preferred, you’ll have to try your luck with AS path prepending.

For this lab scenario, I opted to broadcast a default route via BGP.  My OpenBGPD config file is pictured below.  Notice I’ve added a default route to be propagated.

openbgpd-config

Hopping over to Azure and enumerating the effective routes shows the new routes being propagated into the VNet via the VPN Gateway.

vnetroutes

With this configuration, all traffic without a more specific route (like all our Internet traffic) will be routed back to the VPN Gateway.  Since this lab calls for this traffic to be sent to Azure Firewall first, you’ll need to configure a UDR (user defined route).  As described in this link, when multiple routes exist for the same prefix, Azure picks from UDRs first, then BGP, and finally system routes.

For this you’re going to need to set up three route tables.

One route table will be applied to the primary subnet the VM is living in.  This will contain a UDR for the default route (0.0.0.0/0) with a next hop type of Virtual appliance and a next hop address of the Azure Firewall instance’s NIC in the AzureFirewallSubnet.  By order of precedence, this UDR will win out over the BGP-propagated default route, so the VM’s Internet-bound traffic goes to Azure Firewall rather than straight to the Virtual Network Gateway.

udrprimary

The second route table will be applied to the AzureFirewallSubnet.  This will contain a UDR for the default route with a next hop of the Virtual network gateway.  This forces Azure Firewall to send all the VM traffic bound for networks outside the VNet to the Virtual Network Gateway, which will then pass it through the VPN tunnel.

routefirewall

Last but not least, there is an optional route table you can add.  This route table will be applied to the AzureFirewallManagementSubnet and will be configured with Virtual Network Gateway route propagation disabled.  It will have a single UDR with a default route and a next hop type of Internet.  The reason I like adding this route table is it avoids the risk of someone propagating a default route from on-premises.  If that route were propagated to the AzureFirewallManagementSubnet, the management plane would lose connectivity to the nodes and may deallocate the instance.

routemgmt
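
I built these through the portal, but if you’d rather script it, here’s a minimal sketch using the azure-mgmt-network Python SDK to create the three route tables described above.  The subscription, resource group, region, and the firewall’s private IP are placeholders for your own environment.

# Minimal sketch: create the three route tables with the azure-mgmt-network SDK.
# Subscription, resource group, region, and the firewall's private IP are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import Route, RouteTable

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
RG, LOCATION = "lab-rg", "eastus2"
FIREWALL_PRIVATE_IP = "10.0.1.4"  # Azure Firewall's NIC in the AzureFirewallSubnet

# 1. Primary subnet: default route to the Azure Firewall instance
client.route_tables.begin_create_or_update(RG, "rt-primary", RouteTable(
    location=LOCATION,
    routes=[Route(name="default-to-fw", address_prefix="0.0.0.0/0",
                  next_hop_type="VirtualAppliance",
                  next_hop_ip_address=FIREWALL_PRIVATE_IP)],
)).result()

# 2. AzureFirewallSubnet: default route to the Virtual Network Gateway
client.route_tables.begin_create_or_update(RG, "rt-azfw", RouteTable(
    location=LOCATION,
    routes=[Route(name="default-to-vng", address_prefix="0.0.0.0/0",
                  next_hop_type="VirtualNetworkGateway")],
)).result()

# 3. AzureFirewallManagementSubnet: default to Internet, BGP propagation disabled
client.route_tables.begin_create_or_update(RG, "rt-azfw-mgmt", RouteTable(
    location=LOCATION,
    disable_bgp_route_propagation=True,
    routes=[Route(name="default-to-internet", address_prefix="0.0.0.0/0",
                  next_hop_type="Internet")],
)).result()

You’d still need to associate each route table with its subnet, which you can do through the portal or the subnets API.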

The last thing you need to do in Azure is create a rule in Azure Firewall to allow traffic to the web.  For this I created a very simple application rule allowing all HTTP and HTTPS traffic to any domain.

azfirewallrule

At this point the Azure end of the configuration is complete.  We now need to hop over to pfSense and finish that configuration.

Remember back in the last post when I had you configure the phase 2 entry with a local network of 0.0.0.0/0?  That was the traffic selector which allows traffic destined for any network from the VNet to flow through our VPN tunnel.

Now you have a requirement to NAT traffic from the VNet out the WAN interface on the pfSense box.  For that you have to navigate to the Firewall drop-down menu and choose the NAT menu item.  From there you’ll navigate to the Outbound option and ensure your Outbound NAT Mode is set to Hybrid Outbound NAT rule generation since we’ll continue to leverage the automatic rules pfSense creates as well as this new custom rule.

Add a new mapping by clicking the Add button.  For this you’ll want to configure it as seen in the screenshot below.  Once complete save the new rule and new mappings.

nat

Last but not least, we need to open flows within the pfSense firewall to allow the traffic to go out to the Internet over HTTP and HTTPS as seen below.

pfsensefw

You’re done!  Now it’s time to test the configuration.  For this you’ll want to RDP into your VM, open up a web browser, and try to hit a website.

google

Excellent, so you made it out to the web, but how do you know you were force tunneled through?  Simple!  Just hit a website like https://whatismyipaddress.com and validate the IP returned is the IP associated with your pfSense WAN interface.

One thing to note is that if you deallocate and reallocate your Azure Firewall or delete and recreate your Azure Firewall after everything is in place, you may run into an issue where forced tunneling doesn’t seem to work.  All you need to do is bring down the VPN tunnel and bring it back up again.  There is some type of dependency there, but what that is, I don’t know.

Well that’s it folks.  Hope you enjoyed the series and got some value out of it.  Azure Firewall is a solid alternative to a self-managed NVA.  Sure, you don’t get all the bells and whistles, but you get key capabilities such as transitive routing and features that build on NSGs, such as filtering traffic via FQDN, centralized rule management, and centralized logging of what’s being allowed and denied through your network.  As an added bonus, you can always leverage the forced tunneling feature you learned about today to tunnel traffic to a security stack which can perform functions Azure Firewall can’t, such as deep packet inspection.

Stay healthy!