Azure Private Link and DNS – Part 1

Hi there fellow geeks!

Azure Private Link is becoming a frequent topic of discussion among peers and my customers.  One of the often discussed topics is how to handle DNS with Private Link Endpoints.  I spent the past few days deep diving into the documentation and doing some labbing to better understand what the patterns and gotchas were.  There seemed to be enough value in the findings to share them with you all.

Before I dive into the guts of Private Link Endpoints, I want to spend a post walking through how Private Link came to be.

Last September Microsoft released the Azure Private Link service.  One of the primary drivers behind the introduction of the service was to address the customer demand for secure and private connectivity to Azure services such as Azure SQL and Azure Storage as well as third-party services.  Azure PaaS services used to be accessible only via public IP addresses, which required a path out to the Internet.  From a network security perspective, your only option was to use the firewall feature built into many of the services to filter the IPs allowed to communicate with the service.  While technically feasible, there had to be something better.

The first attempt at something better was Service Endpoints, which started to be introduced into general availability in February 2018.  For you AWS folk, Service Endpoints are probably closest to VPC Gateway Endpoints.  Service Endpoints attempted to improve the experience of accessing services from a VNet (virtual network) by giving resources in a VNet a direct route to Azure services in order to optimize routing.  To mitigate the risk of the service being accessible over a public IP, Service Endpoints also added an identity to the VNet.  This allowed customers to expand the context of the filtering done by the service firewall beyond IP addresses to the identity of the VNet containing the resources that need to access the relevant service.

Service Endpoints

While Service Endpoints made some great improvements, there was more work to be done.  Service Endpoints did nothing to mitigate the risk of data exfiltration.  If an attacker was able to compromise a VM (virtual machine) in your VNet, that attacker could use that optimized route to their advantage, piping whatever data they could access out to an attacker-controlled instance of the resource, such as an Azure Storage account.  Service Endpoint policies were later introduced to help address this risk.

Well that’s great and all, but Service Endpoints did nothing to address accessing Azure services from outside the VNet, such as from an on-premises data center or another public cloud.  Customers were still stuck accessing the services over the Internet or over an ExpressRoute connection using Microsoft Peering.  Wouldn’t it be great if there were a service with all of those features?

In comes Azure Private Link to the rescue.  Azure Private Link includes the concepts of an Azure Private Link Service and a Private Link Endpoint.  Those of you coming from AWS, yeah, I’ll let you guess which AWS service this is like :-).  I won’t be covering Private Link Services in this series beyond saying it’s a way to build your own third-party services and make them directly accessible from a customer VNet.  Instead we’ll keep our focus on Private Link Endpoints, specifically in the context of Microsoft-provided services.

The Private Link service introduces capabilities that address the gaps Service Endpoints left open while retaining the features of Service Endpoints that were beneficial.  Specifically, the service:

  • Provides private access to services running on the Azure platform through the provisioning of a virtual network interface within the customer VNet that is assigned an IP address from the VNet’s RFC 1918 address space.
  • Makes the services accessible over private IP space to resources running outside of Azure, such as machines running in an on-premises data center or virtual machines running in other clouds.
  • Protects against data exfiltration by having the endpoint provide access to only a specific instance of a PaaS service.

Azure Private Link

As you can see from the above, the service solves a lot of problems and is going to be a necessary component of any Azure footprint.  Now when it comes to design and implementation, there are some options as to how you use DNS to resolve the name of the service resource being exposed by the endpoint to the private IP address of the Private Link Endpoint.  This is what I’ll be focusing on for this series.
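
To make this concrete, here’s a rough sketch of what provisioning a Private Link Endpoint for a storage account looks like with the Azure SDK for Python (azure-mgmt-network).  Treat it as illustrative only: the subscription ID, resource group, VNet/subnet, and storage account names are placeholders I made up, and depending on your SDK version the operation may be create_or_update rather than begin_create_or_update.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    PrivateEndpoint,
    PrivateLinkServiceConnection,
    Subnet,
)

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RG = "rg-demo"  # placeholder resource group

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# The endpoint gets a NIC with a private IP from this subnet's range.
subnet_id = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RG}"
    "/providers/Microsoft.Network/virtualNetworks/vnet-app1"
    "/subnets/snet-endpoints"
)

# The connection targets one specific storage account; the 'blob' group ID
# scopes the endpoint to that account's blob service.
storage_id = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RG}"
    "/providers/Microsoft.Storage/storageAccounts/stdemo001"
)

poller = client.private_endpoints.begin_create_or_update(
    RG,
    "pe-storage-blob",
    PrivateEndpoint(
        location="eastus",
        subnet=Subnet(id=subnet_id),
        private_link_service_connections=[
            PrivateLinkServiceConnection(
                name="storage-connection",
                private_link_service_id=storage_id,
                group_ids=["blob"],
            )
        ],
    ),
)
print(poller.result().id)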

In the next post I’ll walk you through what happens within Azure DNS when you create a Private Link endpoint, some patterns you can use for DNS resolution, and some of the gotchas.

The series is continued in my second post.

DNS in Microsoft Azure – Part 3

Today I’ll be continuing my series on DNS in Microsoft Azure.  In my first post I covered fundamental concepts of DNS resolution in Azure such as the 168.63.129.16 virtual IP and Azure-provided DNS.  In the second post I went over the Azure Private DNS service, its benefits, limitations, and the patterns available when you use Azure Private DNS alone.  In this post I’ll be exploring how, when combined with bring your own DNS (BYODNS), Azure Private DNS begins to really shine and opens up opportunities for some very cool self-service/delegation models.

If an enterprise has any degree of technical footprint, it will have a DNS infrastructure providing DNS resolution to intranet and Internet resources.  These existing services are often very mature and deeply embedded into the technology stack.  This means you’re unlikely to ditch your existing DNS service for a cloud-based DNS service right out of the gate (if at all).  This leaves you with the question of extending your existing DNS infrastructure into Azure as is, or hooking it into cloud-native DNS services such as Azure Private DNS.  I’m not going to give you the typical sales pitch stating how easy it is to do the latter, because it can be challenging depending on how complex your DNS infrastructure is and what your internal policies and operational models are.  Instead I’m going to show you how you can make these two services coexist and complement each other.

As I covered in my first post, you can configure VMs to use either the Azure DNS servers or your own DNS servers.  This configuration is available at both the VNet level and the VM network interface level.  Avoid setting the DNS server settings directly on the VM’s network interface if possible because it will introduce more management overhead.  There are always exceptions to the rule, but make sure you establish what those exceptions are and have a way of tracking them.

So you’ve decided you’re going to BYODNS.  Common reasons for doing this are:

  1. Hybrid workloads that require access to on-premises services
  2. Advanced capabilities of existing DNS services
  3. Requirements for Windows Active Directory for centralized identity, authentication, and optionally configuration management services
  4. Maintaining a singular management plane for all DNS services across an organization

Since the requirement for Windows Active Directory services is the most common reason in my experience, I’m going to cover that use case.  Keep in mind that you could easily sub in your favorite DNS infrastructure service for the DNS patterns I demonstrate in this post.  Yes, this means you could toss in a BIND server or an Infoblox NVA.

With that settled, let’s cover the basics.

In the BYODNS scenario, you’ll want to configure your own DNS servers as seen in the screenshot below (note that you should include at least two DNS servers for redundancy):

Custom DNS servers configured on the VNet
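
If you’d rather drive this setting through code than the portal, here’s a minimal sketch using the Azure SDK for Python (azure-mgmt-network).  The resource group and VNet names are placeholders, 10.100.4.10 is the dc1 address from my lab, and the second DNS server is a hypothetical addition for redundancy.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import DhcpOptions

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Fetch the existing VNet, set the custom DNS servers, and push the update.
vnet = client.virtual_networks.get("rg-shared", "vnet-app1")
vnet.dhcp_options = DhcpOptions(dns_servers=["10.100.4.10", "10.100.4.11"])
client.virtual_networks.begin_create_or_update("rg-shared", "vnet-app1", vnet).result()

# Note: running VMs only pick up the new DNS servers after a reboot or a
# DHCP lease renewal (e.g., ipconfig /renew).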

When configured to use a specific set of DNS servers, a few things happen at the VM.  The screenshot below shows the results of an ipconfig /all on a domain-joined Windows Server 2016 VM.  First you’ll notice that the DNS server being pushed to the VM is the 10.100.4.10 address, which is the DNS server setting I’m pushing at the VNet level.  The other thing to take note of is the Connection-specific DNS Suffix being pushed by the Azure DHCP service is no longer the Azure-provided one (xxx.xxx.internal.cloudapp.net).  It’s now reddog.microsoft.com, which is a non-functioning placeholder.  This is pushed to avoid interfering with DNS resolution through BYODNS, such as the domain-joined scenario I’m demonstrating.

Results of ipconfig /all on a domain-joined VM

The lab environment I’m using for this post looks like the diagram below.

Lab environment

It has three VNets in a hub-and-spoke architecture where the shared VNet is peered to both the app1 and app2 VNets.  The shared VNet contains a single VM named dc1 acting as a domain controller for a Windows Active Directory forest named journeyofthegeek.com.  Each spoke VNet is configured to push the IP of dc1 (10.100.4.10) to the VMs within the VNet as the DNS server.  The VMs in each spoke are domain-joined.  I’ve also created multiple Azure Private DNS zones as seen in the table in the diagram.  The shared VNet has been linked to all the zones for resolution.  Each spoke VNet is linked to a zone for registration and resolution.

The DNS Server service running on dc1 has been configured to forward all queries for zones outside of its domain to Google’s public DNS servers.  It also has multiple conditional forwarders configured to send queries for any of the Azure Private DNS zones to the 168.63.129.16 virtual IP.  I’ve created a single A record named www in the app1zone.com zone and assigned it the IP of the app1 server (10.102.0.10) in the app1 VNet.
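
Those conditional forwarders can be scripted too.  Here’s a minimal sketch that shells out to the built-in dnscmd utility on the Windows DNS server itself; the zone names mirror my lab, so adjust to taste.

```python
import subprocess

# Azure Private DNS zones that dc1 should hand off to the Azure resolver.
PRIVATE_ZONES = ["app1zone.com", "app2zone.com"]
AZURE_DNS_VIP = "168.63.129.16"

for zone in PRIVATE_ZONES:
    # dnscmd /ZoneAdd <zone> /Forwarder <ip> creates a conditional
    # forwarder zone on the local Windows DNS Server.
    subprocess.run(
        ["dnscmd", "/ZoneAdd", zone, "/Forwarder", AZURE_DNS_VIP],
        check=True,
    )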

If you take a look below at the Azure Private DNS zones assigned to the spokes, you can see that the VMs in each spoke have automatically registered an A record for themselves with their associated zone.  Take note that this happened even though each VM is configured to use dc1 as its DNS server.  This is the magic of the cloud platform, where the platform itself takes care of registering the records.

app1zone.com Private DNS Zone

app2zone.com Private DNS Zone

When a VM needs to perform DNS resolution, it sends the DNS query to dc1.  dc1 then sends a DNS query to the Azure DNS service via the 168.63.129.16 virtual IP for resolution of the Azure Private DNS zones it has been linked to (red line).  Queries for records in other domains are sent out to the Internet (blue line).  The traffic flow is illustrated in the diagram below:

DNS resolution flow with BYODNS
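
If you want to verify the flow end to end from one of the spoke VMs, a quick script does the trick.  This sketch uses the third-party dnspython package (2.x); the names and IPs come from my lab, so swap in your own.

```python
import dns.resolver

# Point directly at dc1 (10.100.4.10), the same path the spoke VMs take.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["10.100.4.10"]

# www.app1zone.com lives in an Azure Private DNS zone; dc1's conditional
# forwarder sends this query to the 168.63.129.16 virtual IP (red line).
for rr in resolver.resolve("www.app1zone.com", "A"):
    print("private zone answer:", rr.address)  # expect 10.102.0.10

# A public name takes the other path: dc1 forwards it to its Internet
# forwarders, Google's public DNS servers in this lab (blue line).
for rr in resolver.resolve("www.microsoft.com", "A"):
    print("public answer:", rr.address)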

There are a few benefits this pattern introduces.  One benefit is that it addresses a few of the gaps in Azure Private DNS, namely the lack of conditional forwarding and query logging.

With no support for conditional forwarding, any VMs you set to use the Azure DNS servers through the 168.63.129.16 virtual IP will only be able to resolve namespaces Azure DNS is aware of.  Since Azure DNS has no awareness of DNS zones running on the domain controller, we’d be out of luck if we needed to use any domain services.  This problem extends to any DNS zone you’re running on DNS equipment that isn’t resolvable from the Internet.  Yep, this means no hybrid workloads over your private connection back to your on-prem or colo data center.  The conditional forwarder capability of the BYODNS service allows us to solve that problem and still get queries to Azure DNS when it’s called for.

The other limitation is DNS query logging.  As I’ve mentioned before, DNS query logs are excellent inputs to any organization’s behavior analytics to help detect threats in the environment.  That log data is that much more important when you move into the cloud, because it helps mitigate the risks of the additional freedoms you’ll be giving application owners and developers to spin up their own resources.  By introducing a BYODNS service, we capture that log data.

I fully expect both of these features to eventually make their way into the service.  Until that time, the BYODNS pattern demonstrated above can help address the gaps.

You may be asking yourself, “If I have to BYODNS, what does Azure Private DNS get me?”  Excellent question.  The answer is it can provide self-service, increase agility, reduce overhead, and mitigate risk.  How does it do these things?  Let me count the ways:

  1. In most organizations, DNS is managed by a central IT group.  This means application owners and developers have to submit requests and wait for those requests to be completed.  Wouldn’t it be great to let them perform the updates themselves on a zone they own?
  2. Azure Private DNS is available over a modern REST API.  Yes yes, I know you are a scripting ninja and have a hundred PowerShell and Bash scripts available at your fingertips, but show me a developer in 2019 who wants to write anything in those languages when a REST option is available (see the sketch after this list).
  3. Managing multiple DNS zones and associated records on BYODNS equipment can require significant overhead in both staff and hardware.  This sometimes drives organizations to support fewer zones, which increases the risk of changes to a zone affecting applications.  By incorporating Azure Private DNS into the mix, you can reduce the overhead on your BYODNS equipment (think how much more so once query logging and conditional forwarders are introduced) by letting each business unit own a zone (i.e. marketing.journeyofthegeek.com, hr.journeyofthegeek.com, etc.).
  4. Show me someone who’s been in operations that hasn’t had a major outage caused by what should have been a simple DNS change.  No?  I didn’t think so.  By giving each BU its own Azure Private DNS zone, you limit the blast radius of a bad change to BU1 affecting BU2.  Since each zone is a different resource in Azure, you can additionally wrap an authorization boundary around that resource, limiting employees to only the zones they need to administer.
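
As a taste of that API surface, here’s a minimal sketch using the azure-mgmt-privatedns Python package to create an A record in a BU-owned zone.  The resource group, zone, record name, and IP are placeholders for illustration.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.privatedns import PrivateDnsManagementClient
from azure.mgmt.privatedns.models import ARecord, RecordSet

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

client = PrivateDnsManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Create (or update) www in the marketing BU's zone without a ticket to
# central IT.
client.record_sets.create_or_update(
    resource_group_name="rg-marketing-dns",  # placeholder
    private_zone_name="marketing.journeyofthegeek.com",
    record_type="A",
    relative_record_set_name="www",
    parameters=RecordSet(ttl=3600, a_records=[ARecord(ipv4_address="10.102.0.50")]),
)
```

Wrap that in an RBAC role assignment scoped to the zone resource and you have the delegation model from item four.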

Once you have the above pattern in place, you can easily expand upon it to provide DNS resolution from on-premises VMs to Azure and vice versa.  Set up the appropriate connection between Azure and your on-premises environment (S2S VPN, ExpressRoute), put the appropriate conditional forwarders in place on both ends, and you’re good to go!  Again, expect this to be easier as the service matures if conditional forwarders and a Private Link endpoint for the service are introduced.

Well folks, that will wrap up the series.  The key thing I want you to take away from this is that Azure Private DNS isn’t yet in a state where it can replace a mature DNS implementation (I fully expect that to change over time).  Instead, you will want to use it to supplement your existing DNS implementation to reduce overhead, increase the agility of application owners and developers, and yes, even mitigate a bit of risk in the process.

For those of you who will be stuffing themselves with turkey, stuffing, and mashed potatoes this week, have a wonderful Thanksgiving!


DNS in Microsoft Azure – Part 2

Welcome back fellow geek to part two of my series on DNS in Azure.  In the first post I covered some core concepts behind the DNS offerings in Azure.  One of the core concepts was the 168.63.129.16 virtual IP address, which acts as the communication point when Azure services within a VNet need to talk to the Azure DNS resolver.  If you’re unfamiliar with it, circle back and read that portion of the post.  I also covered the basic DNS offering, Azure-provided DNS.  For this post I’m going to cover the newly minted Azure Private DNS service.

As I covered in my last post, Azure-provided DNS is a decent service if you’re doing some very basic proof-of-concept testing, but not of much use beyond that.  The limited support for record types, the challenges of resolution across multiple VNets, and the lack of reverse DNS support have typically required an enterprise BYODNS solution.  This meant organizations were stuck purchasing expensive NVAs or rolling their own VMs running BIND or Windows DNS Server.  Beyond the overhead of having to manage all aspects of the VM that we’re all familiar with, it also brings along legacy request and change management processes.  In most enterprises, application owners have to submit requests to central IT to have DNS entries created or modified.  This is counter to the goal of empowering application owners to be more agile.

Thankfully, Microsoft heard the cries of application owners and central IT and introduced Azure Private DNS into public preview back in early 2018.  After a few iterations and improvements, the service officially reached general availability just last month.  The service addresses many of the gaps in Azure-provided DNS, such as:

  • Support for custom DNS namespaces
  • Support for all common DNS record types such as A, MX, CNAME, PTR
  • Support for reverse DNS
  • Automatic lifecycle management of VM DNS records
  • Resolution across multiple VNets

Before we jump into the weeds, we’ll first want to cover the basic concepts of the service.  Azure Private DNS zones are an Azure resource under the namespace of /providers/Microsoft.Network/privateDnsZones/.  Each DNS zone you want to create is represented as a separate resource.  Zones created in one subscription can be consumed in another subscription as long as they’re within the same Azure AD tenant.  VNets can resolve and register DNS records with the zones you create after you “link” the VNet to the zone.  Each zone can be linked to multiple VNets for registration and resolution.  On the other hand, VNets can be linked to multiple zones for resolution but only one zone for registration.  Once a zone is linked to a VNet, VMs within the VNet resolve and/or register DNS records for those zones through the 168.63.129.16 virtual IP.
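
Here’s a rough sketch of those concepts in code using the azure-mgmt-privatedns Python package.  The resource group and VNet names are placeholders, and depending on your SDK version the operations may be create_or_update rather than begin_create_or_update.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.privatedns import PrivateDnsManagementClient
from azure.mgmt.privatedns.models import (
    PrivateZone,
    SubResource,
    VirtualNetworkLink,
)

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

client = PrivateDnsManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Private DNS zones are global resources, hence location='global'.
client.private_zones.begin_create_or_update(
    "rg-dns", "app1zone.com", PrivateZone(location="global")
).result()

# Link a VNet to the zone.  registration_enabled=True makes this the one
# registration link the VNet is allowed; resolution-only links set it False.
vnet_id = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/rg-network"
    "/providers/Microsoft.Network/virtualNetworks/vnet-app1"
)
client.virtual_network_links.begin_create_or_update(
    "rg-dns",
    "app1zone.com",
    "vnet-app1-link",
    VirtualNetworkLink(
        location="global",
        virtual_network=SubResource(id=vnet_id),
        registration_enabled=True,
    ),
).result()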

I’ll quickly cover the reverse lookup zone capability that comes along with using the service.  When a VNet is linked to a zone for registration, a reverse lookup zone is created for the VNet.  VMs created in subnets within that VNet will register a PTR record for their FQDN in the private zone as well as a PTR record for their FQDN in the internal.cloudapp.net zone.  Take note that records in the reverse lookup zone will only be resolvable by VMs within that VNet when queries are sent through the 168.63.129.16 virtual IP.

In the image below, VNet1 is linked to an Azure Private DNS zone for both resolution and registration.  VNet2 is linked to a different Azure Private DNS zone for both resolution and registration.  Both VNets are configured to use the Azure DNS servers.  In this scenario, Server1 will be able to perform a reverse lookup for the IP address of Server2 because it is within the same VNet.  However, Server3 will not be able to perform a reverse lookup for Server2 because it is in a different VNet.

Reverse DNS Lookups with Azure Private DNS
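
If you want to see this behavior for yourself, here’s a small sketch using the third-party dnspython package (2.x), run from a VM inside VNet1.  Server2’s 10.1.0.5 address is a placeholder from made-up lab addressing.

```python
import dns.resolver
import dns.reversename

# Query the 168.63.129.16 virtual IP directly, as VMs using the Azure DNS
# servers do.  This only works from inside the VNet.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["168.63.129.16"]

# Reverse Server2's IP into its in-addr.arpa name and ask for the PTR.
rev_name = dns.reversename.from_address("10.1.0.5")  # placeholder IP
for rr in resolver.resolve(rev_name, "PTR"):
    print(rr.target)  # expect Server2's FQDN in the private zone

# The same query issued from Server3 in VNet2 would fail, since reverse
# records are only resolvable from within the registering VNet.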

In addition to PTR records, the VMs also register A records for the private zone and the Azure-provided DNS zone.  There are a few things to note about the A records automatically created in the private zone:

  • Each record has a property called isAutoRegistered which has a boolean value of true for any records created through the auto-registration process.
  • Auto-registered records have an extremely short TTL of 10 seconds.  If you have plans to perform DNS scavenging, take note of this; also note that these records are automatically deleted when the VM is deleted.

Portal View of Private DNS Zone

The Azure-provided DNS zone dynamically created for the VNet is still created even when an Azure Private DNS zone is linked to the VNet.  Additionally, if you try to resolve a single-label hostname, you’ll get back the A record from the Azure-provided DNS zone.  This is by design and is a result of the DNS suffix automatically appended by your VMs.  It also means you need to use the FQDN in any application configuration to ensure the record is resolved correctly.

DNS query results
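
A quick way to see this from a VM is the sketch below; server2 and app1zone.com are placeholder names standing in for a VM and a linked private zone in your environment.

```python
import socket

# Run from a VM in the VNet.  The single-label lookup gets the
# connection-specific DNS suffix (*.internal.cloudapp.net) appended by the
# OS resolver, so the answer comes from the Azure-provided zone.
print(socket.gethostbyname("server2"))

# The FQDN is answered from the linked Azure Private DNS zone, which is
# why application configurations should always reference the FQDN.
print(socket.gethostbyname("server2.app1zone.com"))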

Let’s now look at resolution between two VNets.  In this scenario we again have VNet1 and VNet2.  VNet1 is linked for both registration and resolution to the Azure Private DNS zone app1zone.com.  VNet2 is linked for resolution only to app1zone.com.  VMs in VNet2 are able to resolve queries for the fully-qualified domain names of VMs in VNet1, as illustrated in the diagram below.

DNS Resolution between two VNets

Beyond the auto-registration of records, you can also manually create a variety of record types as I mentioned above. There isn’t anything special or different in the way Azure is handling these records.  The only thing worth noting is the records have a standard 1 hour TTL.

There are two significant limitations in the service right now.  One of those limitations is no support for query logging.  Given how important DNS query logs can be for identifying threats in the environment, your organization may require this capability.  If so, you’ll need to insert some BYODNS into the mix (I’ll cover that pattern next post).  The other, more critical limitation is the lack of support for conditional forwarding.  As of today, you can’t create conditional forwarders in the service, which prevents you from forwarding queries from the 168.63.129.16 virtual IP to other DNS services you may have running, such as those resolving on-premises resources.  Again, the workaround here is BYODNS.  Expect both of these limitations to be addressed in time as the service matures.

Azure Private DNS alone is a great service if your organization is completely in the cloud and has basic DNS resolution needs.  Some patterns you could leverage here:

  • Separate private DNS zone for each application –  In this scenario you could grant your application owners full control of the zone letting them manage the records as they see fit.  This would improve the application team’s agility while reducing operational burden on central IT.
  • Separate private DNS zones for each environment (Dev/QA/Prod) – In this scenario you could avoid having to do any BYODNS if there are no dependencies on on-premises infrastructure.  You also get full lifecycle management of VM records cutting back on operational overhead.

Summing up the service:

  • Positives
    • Managed service where you don’t have to worry about managing the underlying infrastructure
    • Scalability and availability are baked into the service
    • Use of custom DNS namespaces
    • VMs spread across multiple VNets can resolve each other’s addresses
    • Reverse DNS is supported within a VNet
    • Lifecycle of VM DNS records is automatically managed by the platform
    • Applications could be assigned their own DNS zones and application owners delegated full control over that zone
  • Negatives
    • No support for conditional forwarders at this time
    • No support for DNS query logging at this time
    • No support for WINS or NETBIOS (although I call this a positive 🙂 )

In my next post I’ll cover how the service works with BYODNS and will discuss some neat patterns that are available when you take advantage of the service.