DNS in Microsoft Azure Part 5 – PrivateLink Endpoints and Private DNS

Updates:

7/2025 – Updated to remove mentions of DNS query logging limitations with Private DNS Resolver due to introduction of DNS Security Policies

This is part of my series on DNS in Microsoft Azure.

Hello again!

In this post I’ll be continuing my series on DNS in Microsoft Azure with my fifth entry, which covers Azure Private Link and DNS.  In my last post I gave some background on Private Link, how it came to be, and what it offers.  For this post I’ll be covering how the DNS integration works for Azure native PaaS services behind Private Link Private Endpoints.

Before we get into the details of how it all works let’s first look at the components that make up an Azure Private Endpoint created for an Azure native service that is integrated with Azure Private DNS. These components include (going from left to right):

  1. Virtual Network Interface – The virtual network interface (VNI) is deployed into the customer’s virtual network and reserves a private IP address that is used as the path to the Private Endpoint.
  2. Private Endpoint – The Azure resource that represents the private connection to the service and establishes the relationships to the other components.
  3. Azure PaaS Service Instance – This could be a customer’s instance of an Azure SQL Server, the blob endpoint of a storage account, or any other Microsoft PaaS service that supports Private Endpoints. The key thing to understand is that a Private Endpoint facilitates connectivity to a single instance of the service.
  4. Private DNS Zone Group – The Private DNS Zone Group resource establishes a relationship between the Private Endpoint and an Azure Private DNS Zone, automating the lifecycle of the A record(s) registered within the zone. You may not be familiar with this resource if you’ve only used the Azure Portal; see the sketch after the storage account example below for how to create one programmatically.
  5. Azure Private DNS Zone – Each Azure PaaS service has a specific namespace or namespaces it uses for Private Endpoints.
Azure Private Endpoint and DNS integration components

An example of the components involved with a Private Endpoint for the blob endpoint of an Azure Storage Account is pictured below.

Example of components for blob endpoint of Azure Storage Account
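
If you’ve only ever created Private Endpoints through the Portal, here’s what the Private DNS Zone Group looks like when created programmatically. This is a minimal sketch using the Python SDK (azure-mgmt-network); all of the resource names and IDs are hypothetical, and you should verify the operation and model names against the SDK version you’re using.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import PrivateDnsZoneConfig, PrivateDnsZoneGroup

subscription_id = "00000000-0000-0000-0000-000000000000"  # hypothetical
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Hypothetical resource ID of the privatelink zone for blob endpoints.
zone_id = (
    "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-dns"
    "/providers/Microsoft.Network/privateDnsZones/privatelink.blob.core.windows.net"
)

# The zone group ties the Private Endpoint to the Private DNS Zone so the
# platform manages the A record lifecycle automatically.
poller = client.private_dns_zone_groups.begin_create_or_update(
    resource_group_name="rg-workload",        # hypothetical
    private_endpoint_name="pe-storage-blob",  # hypothetical
    private_dns_zone_group_name="default",
    parameters=PrivateDnsZoneGroup(
        private_dns_zone_configs=[
            PrivateDnsZoneConfig(
                name="privatelink-blob-core-windows-net",
                private_dns_zone_id=zone_id,
            )
        ]
    ),
)
poller.result()
```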

I’ll now walk through some scenarios to understand how these components work together.

Scenario 1 – Default DNS Pattern Without a Private Link Endpoint with a single virtual network

DNS resolution without a Private Endpoint

In this example an Azure virtual machine needs to resolve the name of an Azure SQL instance named db1.database.windows.net. No Private Endpoint has been configured for the Azure SQL instance and the VNet is configured to use the 168.63.129.16 virtual IP and Azure-provided DNS. 

The query resolution is as follows:

  1. VM1 creates a DNS query for db1.database.windows.net. VM1 does not have a cached entry for it so the query is passed on to the DNS Server configured for the operating system. The virtual network DNS Server settings have been left at the default of the 168.63.129.16 virtual IP, which is pushed to the VNI by the Azure DHCP Service. The recursive query is sent to the virtual IP and passed on to the Azure-provided DNS service.
  2. The Azure-provided DNS service checks to see if there is an Azure Private DNS Zone named database.windows.net linked to the virtual network. After validating there is not, the recursive query is resolved against the public DNS namespace and the public IP 55.55.55.55 of the Azure SQL instance is returned.

Scenario 2 – DNS Pattern with Private Link Endpoint with a single virtual network

DNS Resolution with a Private Endpoint

In this example an Azure virtual machine needs to resolve the name of an Azure SQL instance named db1.database.windows.net. A Private Endpoint has been configured for the Azure SQL instance and the VNet is configured to use the 168.63.129.16 virtual IP, which means queries go to Azure-provided DNS. An Azure Private DNS Zone named privatelink.database.windows.net has been created and linked to the machine’s virtual network. Notice that a new CNAME record named db1.privatelink.database.windows.net has been created in public DNS.

The query resolution is as follows:

  1. VM1 creates a DNS query for db1.database.windows.net. VM1 does not have a cached entry for it so the query is passed on to the DNS Server configured for the operating system. The virtual network DNS Server settings have been left at the default of the 168.63.129.16 virtual IP, which is pushed to the VNI by the Azure DHCP Service. The recursive query is sent to the virtual IP and passed on to the Azure-provided DNS service.
  2. The Azure-provided DNS service checks to see if there is an Azure Private DNS Zone named database.windows.net linked to the virtual network. After validating there is not, the recursive query is resolved against the public DNS namespace. During resolution the CNAME of db1.privatelink.database.windows.net is returned. The Azure-provided DNS service checks to see if there is an Azure Private DNS Zone named privatelink.database.windows.net linked to the virtual network and determines there is. The query is resolved to the private IP address of 10.0.2.4 of the Private Endpoint.
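
You can watch this CNAME chain happen yourself from a VM in the linked virtual network. Here’s a quick sketch using the dnspython library (pip install dnspython) with the example hostname from this scenario:

```python
import dns.resolver  # pip install dnspython

name = "db1.database.windows.net"  # example hostname from this scenario

# From inside the linked VNet, the chain terminates at the Private Endpoint's
# private IP; from anywhere else, the same query returns the public IP.
answer = dns.resolver.resolve(name, "A")
for rrset in answer.response.answer:
    # Prints the CNAME to db1.privatelink.database.windows.net followed by
    # the terminating A record.
    print(rrset)
```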

Scenario 2 Key Takeaway

The key takeaway from this scenario is that the Azure-provided DNS service is able to resolve the query to the private IP address because a virtual network link is established between the virtual network and the Azure Private DNS Zone. The virtual network link MUST be created between the Azure Private DNS Zone and the virtual network where the query is passed to the 168.63.129.16 virtual IP. If that link does not exist, or the query hits the Azure-provided DNS service through another virtual network, the query will resolve to the public IP of the Azure PaaS instance.
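
If you’re automating this, the link is its own child resource of the zone. Below is a minimal sketch of creating the virtual network link with the Python SDK (azure-mgmt-privatedns); the names and IDs are hypothetical and the operation names should be verified against your SDK version.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.privatedns import PrivateDnsManagementClient

client = PrivateDnsManagementClient(
    DefaultAzureCredential(),
    "00000000-0000-0000-0000-000000000000",  # hypothetical subscription ID
)

# Hypothetical resource ID of the virtual network to link to the zone.
vnet_id = (
    "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-net"
    "/providers/Microsoft.Network/virtualNetworks/vnet-workload"
)

poller = client.virtual_network_links.begin_create_or_update(
    resource_group_name="rg-dns",  # hypothetical
    private_zone_name="privatelink.database.windows.net",
    virtual_network_link_name="link-vnet-workload",
    parameters={
        "location": "global",  # Private DNS Zones and their links are global
        "virtual_network": {"id": vnet_id},
        # Registration stays off; Private Endpoint records are managed by
        # the Private DNS Zone Group, not by auto-registration.
        "registration_enabled": False,
    },
)
poller.result()
```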

Great, you understand the basics. Let’s apply that knowledge to enterprise scenarios.

Scenario 3 – Azure-to-Azure resolution of Azure Private Endpoints

First up I will cover resolution of Private Endpoints within Azure when it is one Azure service talking to another in a typical enterprise Azure environment with a centralized DNS service.

Scenario 3a – Azure-to-Azure resolution of Azure Private Endpoints with a customer-managed DNS service

Azure resolution of Azure Private Endpoints using customer-managed DNS service

First I will cover how to handle this resolution using a customer-managed DNS service running in Azure. Customers may choose this over the Private DNS Resolver pattern because they have an existing 3rd-party DNS service (Infoblox, BlueCat, etc.) they already have experience with.

In this scenario the Azure environment has a traditional hub and spoke where there is a transit network such as a VWAN Hub or a traditional virtual network with some type of network virtual appliance handling transitive routing. The customer-managed DNS service is deployed to a virtual network peered with the transit network. The customer-managed DNS service virtual network has a virtual network link to the Private DNS Zone for the privatelink.database.windows.net namespace. An Azure SQL instance named db1.database.windows.net has been deployed with a Private Endpoint in a spoke virtual network. An Azure VM has been deployed to another spoke virtual network and the DNS server settings of the virtual network have been configured with the IP address of the customer-managed DNS service.

Here, the VM running in the spoke is resolving the IP address of the Azure SQL instance private endpoint.

The query resolution path is as follows:

  1. VM1 creates a DNS query for db1.database.windows.net. VM1 does not have a cached entry for it so the query is passed on to the DNS Server configured for the operating system. The virtual network DNS Server settings have been set to 10.1.0.4, which is the IP address of the customer-managed DNS service, and pushed to the virtual network interface by the Azure DHCP Service. The recursive query is passed to the customer-managed DNS service over the virtual network peerings.
  2. The customer-managed DNS service receives the query, validates it does not have a cached entry and that it is not authoritative for the database.windows.net namespace. It then forwards the query to its standard forwarder, which has been configured to be the 168.63.129.16 virtual IP address for the virtual network, in order to pass the query to the Azure-provided DNS service.
  3. The Azure-provided DNS service checks to see if there is an Azure Private DNS Zone named database.windows.net linked to the virtual network. After validating there is not, the recursive query is resolved against the public DNS namespace. During resolution the CNAME of db1.privatelink.database.windows.net is returned. The Azure-provided DNS service checks to see if there is an Azure Private DNS Zone named privatelink.database.windows.net linked to the virtual network and determines there is. The query is resolved to the private IP address of 10.0.2.4 of the Private Endpoint.

Scenario 3b – Azure-to-Azure resolution of Azure Private Endpoints with the Azure Private DNS Resolver

Azure resolution of Azure Private Endpoints using Azure Private DNS Resolver

In this scenario the Azure environment has a traditional hub and spoke where there is a transit network such as a VWAN Hub or a traditional virtual network with some type of network virtual appliance handling transitive routing. Azure Private DNS Resolver inbound and outbound endpoints have been deployed into a shared services virtual network that is peered with the transit network. The shared services virtual network has a virtual network link to the Private DNS Zone for the privatelink.database.windows.net namespace. An Azure SQL instance named db1.database.windows.net has been deployed with a Private Endpoint in a spoke virtual network. An Azure VM has been deployed to another spoke virtual network and the DNS server settings of the virtual network have been configured with the IP address of the Azure Private DNS Resolver inbound endpoint.

Here, the VM running in the spoke is resolving the IP address of the Azure SQL instance private endpoint.

  1. VM1 creates a DNS query for db1.database.windows.net. VM1 does not have a cached entry for it so the query is passed on to the DNS Server configured for the operating system. The virtual network DNS Server settings have been set to 10.1.0.4, which is the IP address of the Azure Private DNS Resolver inbound endpoint, and pushed to the virtual network interface by the Azure DHCP Service. The recursive query is passed to the Azure Private DNS Resolver inbound endpoint via the virtual network peerings.
  2. The inbound endpoint receives the query and passes it into the virtual network through the outbound endpoint which passes it on to the Azure-provided DNS service through the 168.63.129.16 virtual IP.
  3. The Azure-provided DNS service checks to see if there is an Azure Private DNS Zone named database.windows.net linked to the virtual network. After validating there is not, the recursive query is resolved against the public DNS namespace. During resolution the CNAME of db1.privatelink.database.windows.net is returned. The Azure-provided DNS service checks to see if there is an Azure Private DNS Zone named privatelink.database.windows.net linked to the virtual network and determines there is. The query is resolved to the private IP address of 10.0.2.4 of the Private Endpoint.

Scenario 3 Key Takeaways

  1. When using the Azure Private DNS Resolver, there are a number of architectural patterns for both the centralized model outlined here and a distributed model. You can reference this post for those details.
  2. It’s not necessary to link the Azure Private DNS Zone to each spoke virtual network as long as you have configured the DNS Server settings of the virtual network with the IP address of your centralized DNS service, which should be running in a virtual network that has virtual network links to all of the Azure Private DNS Zones used for Private Link.
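
A quick way to validate the second takeaway from a VM in a spoke is to check whether the PaaS hostname resolves to a private IP. This sketch uses only the Python standard library; the hostname is the example from this scenario:

```python
import ipaddress
import socket

name = "db1.database.windows.net"  # example hostname from this scenario

# Resolves using whatever DNS server the OS was handed via the virtual
# network DNS Server settings (the centralized DNS service in this model).
addresses = {info[4][0] for info in socket.getaddrinfo(name, 443)}
for addr in sorted(addresses):
    kind = "private" if ipaddress.ip_address(addr).is_private else "PUBLIC"
    print(f"{name} -> {addr} ({kind})")

# A PUBLIC result here usually means the privatelink zone isn't linked to
# the virtual network your centralized DNS service resolves through.
```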

Scenario 4 – On-premises resolution of Azure Private Endpoints

Let’s now take a look at DNS resolution of Azure Private Endpoints from on-premises machines. As I’ve covered in past posts, Azure Private DNS Zones are only resolvable using the Azure-provided DNS service, which is only accessible through the 168.63.129.16 virtual IP, and that virtual IP is not reachable from outside the virtual network. To solve this challenge you will need an endpoint within Azure to proxy the DNS queries to the Azure-provided DNS service and connectivity from on-premises into Azure using Azure ExpressRoute or a VPN.

Today you have two options for the DNS proxy which include bringing your own DNS service or using the Azure Private DNS Resolver. I’ll cover both for this scenario.

Scenario 4a – On-premises resolution of Azure Private Endpoints using a customer-managed DNS Service

On-premises resolution of Azure Private Endpoints using customer-managed DNS service

In this scenario the Azure environment has a traditional hub and spoke where there is a transit network such as a VWAN Hub or a traditional virtual network with some type of network virtual appliance handling transitive routing. The customer-managed DNS service is deployed to a virtual network peered with the transit network. The customer-managed DNS service virtual network has a virtual network link to the Private DNS Zone for the privatelink.database.windows.net namespace. An Azure SQL instance named db1.database.windows.net has been deployed with a Private Endpoint in a spoke virtual network.

An on-premises environment is connected to Azure using an ExpressRoute or VPN. The on-premises DNS service has been configured with a conditional forwarder for database.windows.net which points to the customer-managed DNS service running in Azure.

The query resolution path is as follows:

  1. The on-premises machine creates a DNS query for db1.database.windows.net. After validating it does not have a cached entry it sends the DNS query to the on-premises DNS server which is configured as its DNS server.
  2. The on-premises DNS server receives the query, validates it does not have a cached entry and that it is not authoritative for the database.windows.net namespace. It determines it has a conditional forwarder for database.windows.net pointing to 10.1.0.4 which is the IP address of the customer-managed DNS service running in Azure. The query is recursively passed on to the customer-managed DNS service via the ExpressRoute or Site-to-Site VPN connection.
  3. The customer-managed DNS service receives the query, validates it does not have a cached entry and that it is not authoritative for the database.windows.net namespace. It then forwards the query to its standard forwarder, which has been configured to be the 168.63.129.16 virtual IP address for the virtual network, in order to pass the query to the Azure-provided DNS service.
  4. The Azure-provided DNS service checks to see if there is an Azure Private DNS Zone named database.windows.net linked to the virtual network. After validating there is not, the recursive query is resolved against the public DNS namespace. During resolution the CNAME of db1.privatelink.database.windows.net is returned. The Azure-provided DNS service checks to see if there is an Azure Private DNS Zone named privatelink.database.windows.net linked to the virtual network and determines there is. The query is resolved to the private IP address of 10.0.2.4 of the Private Endpoint.
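
When troubleshooting this path, it helps to aim the same query at different resolvers and compare the answers. Here’s a sketch using dnspython; the IP addresses are the examples from this scenario, and the public resolver is just there to show the split-horizon contrast:

```python
import dns.resolver  # pip install dnspython

def query(server_ip: str, name: str) -> list[str]:
    """Send an A query for name directly to a specific DNS server."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server_ip]
    return [rr.address for rr in resolver.resolve(name, "A")]

name = "db1.database.windows.net"

# 10.1.0.4 is the customer-managed DNS service from this scenario; it should
# return the Private Endpoint's IP (10.0.2.4). A public resolver returns the
# public IP, which confirms the split-horizon behavior.
print("via 10.1.0.4 :", query("10.1.0.4", name))
print("via 8.8.8.8  :", query("8.8.8.8", name))
```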

Scenario 4b – On-premises resolution of Azure Private Endpoints using Azure Private DNS Resolver

On-premises resolution of Azure Private Endpoints using Azure Private DNS Resolver

Now let me cover this pattern when using the Azure Private DNS Resolver. I’m going to assume you have some basic knowledge of how the Azure Private DNS Resolver works and I’m going to focus on the centralized model. If you don’t have baseline knowledge of the Azure Private DNS Resolver, or you’re interested in the distributed model and its pluses and minuses, you can reference this post.

In this scenario the Azure environment has a traditional hub and spoke where there is a transit network such as a VWAN Hub or a traditional virtual network with some type of network virtual appliance handling transitive routing. The Private DNS Resolver is deployed to a virtual network peered with the transit network. The Private DNS Resolver virtual network has a virtual network link to the Private DNS Zone for the privatelink.database.windows.net namespace. An Azure SQL instance named db1.database.windows.net has been deployed with a Private Endpoint in a spoke virtual network.

An on-premises environment is connected to Azure using an ExpressRoute or VPN. The on-premises DNS service has been configured with a conditional forwarder for database.windows.net which points to the Private DNS Resolver inbound endpoint.

The query resolution path is as follows:

  1. The on-premises machine creates a DNS query for db1.database.windows.net. After validating it does not have a cached entry it sends the DNS query to the on-premises DNS server which is configured as its DNS server.
  2. The on-premises DNS server receives the query, validates it does not have a cached entry and that it is not authoritative for the database.windows.net namespace. It determines it has a conditional forwarder for database.windows.net pointing to 10.1.0.4 which is the IP address of the inbound endpoint for the Azure Private DNS Resolver running in Azure. The query is recursively passed on to the inbound endpoint over the ExpressRoute or Site-to-Site VPN connection.
  3. The inbound endpoint receives the query and passes it into the virtual network through the outbound endpoint which passes it on to the Azure-provided DNS service through the 168.63.129.16 virtual IP.
  4. The Azure-provided DNS service checks to see if there is an Azure Private DNS Zone named database.windows.net linked to the virtual network. After validating there is not, the recursive query is resolved against the public DNS namespace. During resolution the CNAME of db1.privatelink.database.windows.net is returned. The Azure-provided DNS service checks to see if there is an Azure Private DNS Zone named privatelink.database.windows.net linked to the virtual network and determines there is. The query is resolved to the private IP address of 10.0.2.4 of the Private Endpoint.

Scenario 4 Key Takeaways

The key takeaways from this scenario are:

  1. You must set up a conditional forwarder on the on-premises DNS server for the PUBLIC namespace of the service. While using the privatelink namespace may work with your specific DNS service depending on how the vendor has implemented it, Microsoft recommends using the public namespace.
  2. Understand the risk you’re accepting with this setup. All DNS resolution for the public namespace will now be sent up to the Azure Private DNS Resolver or customer-managed DNS service. If your connectivity to Azure goes down, or those DNS components are unavailable, your on-premises endpoints may start having failures accessing websites that are using Azure services (think images being pulled from an Azure storage account).
  3. If your on-premises DNS servers use non-RFC1918 address space, you will not be able to use scenario 4b. The Azure Private DNS Resolver inbound endpoint DOES NOT support traffic received from non-RFC1918 address space.

Other Gotchas

Throughout these scenarios you have likely observed me using the public namespace when referencing the resources behind a Private Endpoint (for example, using db1.database.windows.net instead of db1.privatelink.database.windows.net). The reason for this is that the certificates for Azure PaaS services do not include the privatelink namespace in the certificate provisioned to the instance of the service. There are exceptions to this, but they are few and far between. You should always use the public namespace when referencing a Private Endpoint unless the documentation specifically tells you not to.

Let me take a moment to demonstrate what occurs when an application tries to access a service behind a Private Endpoint using the PrivateLink namespace. In this scenario there is a virtual machine which has been configured with proper resolution to resolve Private Endpoints to the appropriate Azure Private DNS Zone.

Resolution of Private Endpoint to private IP address

Now I’ll attempt to make an HTTPS connection to the Azure Key Vault instance using the PrivateLink namespace of privatelink.vaultcore.azure.net. In the image below you can see the error returned states the PrivateLink namespace is not included in the subject alternative name field of the certificate presented by the Azure Key Vault instance. What this means is the client can’t verify the identity of the server because the identities presented in the certificate don’t match the identity that was requested. You’ll often see this error as a certificate name mismatch in most browsers or SDKs.

Certificate name mismatch error
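
You can reproduce this failure with nothing but the Python standard library. The sketch below attempts a TLS handshake against the PrivateLink hostname; the vault name is hypothetical, so substitute your own instance:

```python
import socket
import ssl

# Hypothetical vault name; substitute your own Key Vault instance.
host = "kv-demo.privatelink.vaultcore.azure.net"

context = ssl.create_default_context()
try:
    with socket.create_connection((host, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host):
            print("handshake succeeded (unexpected)")
except ssl.SSLCertVerificationError as e:
    # The certificate's subject alternative names cover the public namespace
    # (e.g., *.vaultcore.azure.net) but not the privatelink namespace, so
    # hostname verification fails.
    print(f"certificate verification failed: {e}")
```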

Final Thoughts

There are some key takeaways for you with this post:

  1. Know your DNS resolution path. This is absolutely critical when troubleshooting Private Endpoint DNS resolution.
  2. Always check your zone links. 99% of the time you’re going to be using the centralized model for DNS described in this post. After you verify your DNS resolution path, validate that you’ve linked the Private DNS Zone to your DNS Server / Azure Private DNS Resolver virtual network.
  3. Create your on-premises conditional forwarders for the PUBLIC namespaces for Azure PaaS services, not the Private Link namespace.
  4. Call your services behind Private Endpoints using the public hostname not the Private Link hostname. Using the Private Link hostname will result in certificate mismatches when trying to establish secure sessions.
  5. Don’t go and link your Private DNS Zone to every single virtual network. You don’t need to do this if you’re using the centralized model. There are rare instances where the zone must be linked to the virtual network for some deployment check the product group has instituted, but those are the exception.
  6. Centralize your Azure Private DNS Zones in a single subscription and use a single zone for each PrivateLink service across your environments (prod, non-prod, test, etc). If you try to do different zones for different environments you’re going to run into challenges when providing on-premises resolution to those zones because you now have two authorities for the same namespace.

Before I close out I want to plug a few other blog posts I’ve assembled for Private Endpoints which are helpful in understanding the interesting way they work.

  • This post walks through the interesting routes Private Endpoints inject in subnet route tables. This one is important if you have a requirement to inspect traffic headed toward a service behind a Private Endpoint.
  • This post covers how Network Security Groups work with Private Endpoints and some of the routing improvements that were recently released to help with inspection requirements around Private Endpoints.

Thanks!

DNS in Microsoft Azure Part 4 – Private Link DNS

Updated October 2023

This is part of my series on DNS in Microsoft Azure.

Hi there geeks!

Azure Private Link is a common topic for customers. It provides a service provider with the ability to inject a path to the instance of their service into a customer’s virtual network. It should come as no surprise that Microsoft makes full use of this service to provide its customers with the same capability for its PaaS services. While the DNS configuration is straightforward for 3rd-party-provided Private Link services, there is some complexity to DNS when using it for access to Microsoft PaaS services behind a Private Link Private Endpoint.

Before I dive into the complexities of DNS, I want to cover how and why the service came to be.

The service was introduced in September 2019.  One of the primary drivers behind the introduction of the service was to address the customer demand for secure and private connectivity to native Azure services and 3rd-party services.  Native Azure PaaS services used to be accessible only via public IP addresses which required traffic to traverse the public Internet. If the customer wanted the traffic to take a known path to that public IP and have some assurance of consistency in the latency, the customer was forced to implement ExpressRoute with Microsoft Peering (formerly known as ExpressRoute Public Peering) which can be complex and comes with lots of considerations.

Microsoft first tried to address this technical challenge with Service Endpoints, which were introduced in February 2018.  For you AWS folk, Service Endpoints are probably closest to VPC Gateway Endpoints.  Service Endpoints provide a means for services deployed within the virtual network to access an Azure PaaS service directly over the Microsoft backbone while also tagging the traffic egressing the Service Endpoint with the virtual network identity. The customer’s instance of that PaaS service could then be locked down to access from a specific virtual network.

Service Endpoints
Access to public IP of Azure PaaS services

Service Endpoints came with a few gaps. One gap was that Service Endpoints are not routable from on-premises, so machines coming from on-premises can’t benefit from their deployment. The other gap was that a Service Endpoint creates a data exfiltration risk because it is a more efficient route to ALL instances of the Azure PaaS service. Customers were often implementing Service Endpoints directly on the compute subnets, which would cause traffic to bypass an organization’s security appliance (think traditional hub-and-spoke), making that exfiltration risk that much greater.

Microsoft made an attempt to mitigate the exfiltration risk with Service Endpoint Policies, which are similar to VPC Gateway Endpoint Policies in that controls can be applied via the policy to limit which instances of a PaaS service the Service Endpoint will allow traffic to. Unfortunately, Service Endpoint Policies never seemed to catch on and they are limited to Azure Storage.

Microsoft needed a solution to these two gaps and that’s how Azure Private Link came to be. Azure Private Link includes the concept of an Azure Private Link Service and Private Link Endpoint. I won’t be digging into the details of the Private Link Service component, because the focus of this two-part series is on DNS and its role in providing name resolution for Microsoft native PaaS Private Link Private Endpoints. I do want to cover the benefits Private Link brings to the table because it will reinforce why it’s so important to understand the DNS integration.

Private Link addresses the major gaps in Service Endpoints by doing the following:

  • Provides private access to services running on the Azure platform through the provisioning of a virtual network interface within the customer VNet that is assigned one of the VNet IP addresses from the RFC1918 address space.
  • Makes the services routable and accessible over private IP space to resources running outside of Azure such as machines running in an on-premises data center or virtual machines running in other clouds.
  • Protects against data exfiltration because a Private Endpoint provides access to only a specific instance of a PaaS service.
Azure Private Link
Azure Private Link architecture

Now that you understand what Private Link brings to the table, I’ll focus on the DNS integration required to support Azure native PaaS services deployed behind a Private Link Private Endpoint.

The series is continued in my second post.

DNS in Microsoft Azure Part 3 – Azure Private DNS Resolver

Updates:

7/2025 – Updated post with DNS Security Policy support for DNS query logging. Tagged scenario 4 as deprecated. Corrected image and description of flow in 3b

This is part of my series on DNS in Microsoft Azure.

Today I’ll be continuing my series on DNS in Microsoft Azure.  In my first post I covered fundamental concepts of DNS resolution in Azure such as the 168.63.129.16 virtual IP and Azure-provided DNS.  In the second post I went over the Azure Private DNS service and its benefits over the default virtual network namespaces. In this post I’m going to cover the Azure Private DNS Resolver.

A majority of organizations have an existing on-premises IT footprint. During the move into Azure, these organizations need to support communication between users and applications on-premises or in other clouds with services deployed into Azure. An important piece of this communication includes name resolution of DNS namespaces hosted on-premises and DNS namespaces hosted in Azure Private DNS Zones. This is where the Azure Private DNS Resolver comes in.

The Azure Private DNS Resolver was introduced into general availability in October 2022. It was developed to address two gaps of the Azure-provided DNS service. As I’ve covered in my prior posts, the Azure-provided DNS service does not support resolution of DNS namespaces hosted in on-premises or other cloud DNS services when the DNS query is sourced from Azure. Neither does it support resolution of DNS namespaces hosted in Azure Private DNS Zones from machines on-premises or in another cloud without the customer implementing a 3rd-party DNS proxy. The Azure Private DNS Resolver fills both gaps without the customer having to implement a 3rd-party DNS proxy.

The Resolver (Azure Private DNS Resolver) consists of three different components which include inbound endpoints, outbound endpoints, and DNS forwarding rulesets. Inbound endpoints provide a routable IP address that services running on-premises, in another cloud, or even in Azure can communicate with to resolve DNS namespaces hosted in Azure Private DNS Zones. Outbound endpoints provide a network egress point for DNS traffic to external DNS services running on-premises or within Azure. Forwarding rulesets are groups of DNS forwarding rules (conditional forwarders) that give direction to DNS traffic leaving a virtual network through the 168.63.129.16 virtual IP.
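
To make the components a bit more concrete, here’s a minimal sketch of adding a conditional forwarding rule to an existing ruleset with the Python SDK. I’m assuming the azure-mgmt-dnsresolver package here; all of the resource names are hypothetical, and the operation names and payload shape should be verified against the SDK version you’re using.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.dnsresolver import DnsResolverManagementClient

client = DnsResolverManagementClient(
    DefaultAzureCredential(),
    "00000000-0000-0000-0000-000000000000",  # hypothetical subscription ID
)

# Conditional forwarder: anything under onpremises.com is sent to the
# on-premises DNS server. The trailing dot on the domain name is required.
client.forwarding_rules.create_or_update(
    resource_group_name="rg-dns",                  # hypothetical
    dns_forwarding_ruleset_name="ruleset-shared",  # hypothetical
    forwarding_rule_name="onpremises-com",
    parameters={
        "domain_name": "onpremises.com.",
        "target_dns_servers": [{"ip_address": "192.168.0.10", "port": 53}],
        "forwarding_rule_state": "Enabled",
    },
)
```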

Let’s take a look at a few different scenarios and how this all works together.

Scenario 1 – On-premises machine needs to resolve an Azure Virtual Machine IP address where the DNS namespace is hosted in an Azure Private DNS Zone.

In this scenario an Azure Private DNS Resolver instance has been deployed to a shared services virtual network. An Azure Private DNS Zone named mydomain.com has been linked to the virtual network. Connectivity to on-premises has been implemented using either an ExpressRoute or VPN connection. The on-premises DNS service has been configured with a conditional forwarder for mydomain.com with queries being sent to the inbound endpoint IP address at 10.1.0.4.

Let’s look at the steps that are taken for an on-premises machine to resolve the IP address of vm1.mydomain.com.

  1. The on-premises machine creates a DNS query for vm1.mydomain.com after validating it does not have a cached entry. The machine has been configured to use the on-premises DNS server at 192.168.0.10 as its DNS server. The DNS query is passed to the on-premises DNS server.
  2. The on-premises DNS server receives the query, validates it does not have a cached entry and that it is not authoritative for the mydomain.com namespace. It determines it has a conditional forwarder for mydomain.com pointing to 10.1.0.4 which is the IP address of the inbound endpoint for the Azure Private DNS Resolver running in Azure. The query is recursively passed on to the inbound endpoint over the ExpressRoute or Site-to-Site VPN connection.
  3. The inbound endpoint receives the query and recursively passes it into the virtual network through the outbound endpoint which passes it on to the Azure-provided DNS service through the 168.63.129.16 virtual IP.
  4. The Azure-provided DNS service determines it has an Azure Private DNS Zone linked to the shared services virtual network for mydomain.com and resolves the hostname to its IP address of 10.0.0.4.

Scenario 2 – Azure virtual machine needs to resolve an on-premises service IP address where the DNS namespace is hosted on an on-premises DNS server

There are two approaches to using the Resolver as a DNS resolution service for Azure services. There is a centralized architecture and a distributed architecture. My peer Adam Stuart has done a wonderful analysis of the benefits and considerations of these two patterns. You will almost always use the centralized architecture. The exceptions will be for workloads that have a high number of DNS queries such as VDI. Both the inbound and outbound endpoints have a limit on queries per second (QPS), so by using a decentralized architecture for resolution of on-premises namespaces you can mitigate the risk of hitting the QPS limit on the inbound endpoint. I suggest reading Adam’s post; it has some great details.

Scenario 2a – Centralized architecture for connected virtual networks

Let me first cover the centralized architecture because it’s the more common architecture and will work for most use cases.

Centralized architecture for resolution of on-premises DNS namespaces

In the centralized architecture all virtual networks in the environment have network connectivity to a shared services virtual network through direct or indirect (such as Azure Virtual WAN or a traditional hub and spoke) virtual network peering. The Resolver and its endpoints are deployed to the shared services virtual network, a DNS Forwarding Rule Set is linked to the shared services virtual network, and it has connectivity back on-premises through an ExpressRoute or VPN connection. This architecture centralizes all DNS queries across your Azure environment pushing them through the inbound endpoint (beware of those QPS limits!).

In the example scenario above, a rule has been configured in the DNS Forwarding Rule Set to forward traffic destined for the onpremises.com domain to the on-premises DNS service at 192.168.0.10. The on-premises DNS service is authoritative for the onpremises.com domain.

Let’s look at the query path for this scenario where a VM1 in Azure is trying to resolve the IP address for an application running on-premises named service.onpremises.com where the namespace onpremises.com is hosted in an on-premises DNS server.

  1. VM1 creates a DNS query for service.onpremises.com. VM1 does not have a cached entry for it so the query is passed on to the DNS Server configured for the VM’s virtual network interface (VNIC). The DNS Server has been configured by the Azure DHCP Service to the Resolver inbound endpoint with IP address 10.1.0.4 via the DNS Server settings of the virtual network. The query is passed on to the inbound endpoint.
  2. The inbound endpoint receives the query and recursively passes it into the virtual network through the outbound endpoint and on to the 168.63.129.16 virtual IP address and on to the Azure-provided DNS service. The Azure-provided DNS service checks to see if there is an Azure Private DNS Zone linked to the virtual network the resolver endpoints are in with the name of onpremises.com. Since there is not, the DNS Forwarding Rule Set linked to the virtual network is processed and the rule for onpremises.com is matched and triggered causing the recursive DNS query to be sent out of the outbound endpoint and over the ExpressRoute or VPN connection to the on-premises DNS server at 192.168.0.10.
  3. The on-premises DNS server resolves the hostname to the IP address and returns the result.

Scenario 2b – Distributed architecture for connected virtual networks

Let’s now cover the distributed architecture, which as Adam notes in his blog may be a pattern required if you’re hitting the QPS limits on the inbound endpoint.

Distributed architecture for resolution of on-premises DNS namespaces

In this distributed architecture all virtual networks in the environment have network connectivity to a shared services virtual network through direct or indirect (such as Azure Virtual WAN or a traditional hub and spoke) virtual network peering. The Resolver endpoints are deployed to the shared services virtual network which has connectivity back on-premises through an ExpressRoute or VPN connection. The workload virtual network DNS Server settings are set to the 168.63.129.16 virtual IP to use Azure-provided DNS. DNS Forwarding Rulesets are linked directly to the workload virtual networks and are configured with the necessary rules to direct DNS queries to the on-premises destinations.

In the above example, there is one rule in the DNS Forwarding Ruleset which is configured to forward DNS queries for onpremises.com to the DNS Server on-premises at 192.168.0.10.

Let’s look at the query path for this scenario where a VM1 in Azure is trying to resolve the IP address for an application running on-premises named service.onpremises.com where the namespace onpremises.com is hosted in an on-premises DNS server.

  1. VM1 creates a DNS query for service.onpremises.com. VM1 does not have a cached entry for service.onpremises.com so the query is passed on to the DNS Server configured for the VM’s virtual network interface (VNIC). The DNS Server has been configured by the Azure DHCP Service to the 168.63.129.16 virtual IP, which is the default configuration for virtual network DNS Server settings, and the query is passed on to the Azure-provided DNS service.
  2. The recursive query is received by the Azure-provided DNS service and the rule for onpremises.com in the linked DNS Forwarding Ruleset is triggered passing the recursive query out of the outbound endpoint to the on-premises DNS server at 192.168.0.10. The recursive query is passed on over the ExpressRoute or Site-to-site VPN connection to the on-premises DNS server.
  3. The on-premises DNS server resolves the hostname to the IP address and returns the result.

Scenario 2c – Distributed architecture for isolated virtual networks

There is another pattern for the use of the distributed architecture which could be used for isolated virtual networks. Say for example you have a workload that needs to exist in an isolated virtual network due to a compliance requirement, but you still have a requirement to centrally manage DNS and log all queries.

Distributed architecture for isolated virtual networks for resolution of on-premises DNS namespaces

In this variation of the distributed architecture the virtual network does not have any direct connectivity to the shared service virtual network through direct or indirect peering. The DNS Forwarding Ruleset is linked to the isolated virtual network and it contains a single rule for “.” which tells the Azure-provided DNS service to forward all DNS queries to the configured IP address.

Let’s look at the resolution of a virtual machine in the isolated virtual network trying to resolve the IP address of a publicly-facing API.

  1. VM1 creates a DNS query for my-public-api.com. VM1 does not have a cached entry for my-public-api.com so the query is passed on to the DNS Server configured for the VM’s virtual network interface (VNIC). The DNS Server has been configured by the Azure DHCP Service to the 168.63.129.16 virtual IP, which is the default configuration for virtual network DNS Server settings, and the query is passed on to the Azure-provided DNS service.
  2. The recursive query is received by the Azure-provided DNS service and the rule for “.” in the linked DNS Forwarding Ruleset is triggered passing the recursive query out of the outbound endpoint to the on-premises DNS server at 192.168.0.10. The recursive query is passed on over the ExpressRoute or Site-to-site VPN connection to the on-premises DNS server.
  3. The on-premises DNS server checks its own cache, validates it’s not authoritative for the zone, and then recursively passes the query to its standard forwarder.
  4. The public DNS service resolves the hostname to an IP address.

One thing to note about this architecture is there are some reserved Microsoft namespaces that are not included in the wildcard. This means that these zones will be resolved directly by the Azure-provided DNS service in this configuration.

Scenario 3 – Azure virtual machine needs to resolve a record in an Azure Private DNS Zone.

I’ll now cover DNS resolution of namespaces hosted in Azure Private DNS Zones by compute services running in Azure. There are a number of ways to do this, so I’ll walk through a few of them.

Scenario 3a – Distributed architecture where Azure Private DNS Zones are directly linked to each virtual network

One architecture a lot of customers attempt for Azure-to-Azure resolution is one where the Azure Private DNS Zones are linked directly to each virtual network instead of a common centralized virtual network.

Distributed architecture for resolution of Azure Private DNS Zones for Azure compute with direct links

In this architecture each virtual network is configured to use the Azure-provided DNS service via a virtual network DNS Server setting of the 168.63.129.16 virtual IP. The Azure Private DNS Zones are individually linked to each virtual network. Before I get into the reasons why I don’t like this pattern, let me walk through how a query is resolved in this pattern.

  1. VM2 creates a DNS query for vm1.mydomain.prod.com. VM2 does not have a cached entry for vm1.mydomain.prod.com so the query is passed on to the DNS Server configured for the VM’s virtual network interface (VNIC). The DNS Server has been configured by the Azure DHCP Service to the 168.63.129.16 virtual IP, which is the default configuration for virtual network DNS Server settings, and the query is passed on to the Azure-provided DNS service.
  2. The recursive query is received by the Azure-provided DNS service and the service determines there is a linked Azure Private DNS Zone for the namespace mydomain.prod.com. The service resolves the hostname and returns the IP.

Alright, why don’t I like this architecture? Well, for multiple reasons:

  1. There is a limit to the number of virtual networks an Azure Private DNS Zone can be linked to. As I described in my last post, Azure Private DNS Zones should be treated as global resources and you should be using one zone per namespace and linking that zone to virtual networks in multiple regions. If you begin expanding into multiple Azure regions you could run into the limit.
  2. DNS resolution in this model gets confusing to troubleshoot because you have many links.

So yeah, be aware of those considerations if you end up going this route.

Scenario 3b – Distributed architecture where Azure Private DNS Zones are linked to a central DNS resolution virtual network

This is another alternative for the distributed architecture that can be used if you want to centralize queries to address the considerations of the prior pattern (note that query logging isn’t addressed in the visual below unless you insert a customer-managed DNS service or Azure Firewall instance as I discuss later in this post).

Distributed architecture for resolution of Azure Private DNS Zones for Azure compute with central resolution

This architecture is somewhat of a combination of a distributed and centralized architecture. All virtual networks in the environment have network connectivity to a shared services virtual network through direct or indirect (such as Azure Virtual WAN or a traditional hub and spoke) virtual network peering. The Resolver and its endpoints are deployed to the shared services virtual network. All Azure Private DNS Zones are linked to the shared services virtual network. A DNS Forwarding Ruleset is linked to each workload virtual network with a single rule forwarding all DNS traffic (except the reserved namespaces covered earlier) to the Resolver inbound endpoint.

Let me walk through a scenario with this architecture where VM1 wants to resolve the IP address for the hostname vm2.mydomain.nonprod.com.

  1. VM1 creates a DNS query for vm2.mydomain.nonprod.com. VM1 does not have a cached entry for vm2.mydomain.nonprod.com so the query is passed on to the DNS Server configured for the VM’s virtual network interface (VNIC). The DNS Server has been configured by the Azure DHCP Service to the 168.63.129.16 virtual IP, which is the default configuration for virtual network DNS Server settings, and the query is passed on to the Azure-provided DNS service.
  2. The Azure-provided DNS service checks to see if there is an Azure Private DNS Zone linked to the workload virtual network for the mydomain.nonprod.com domain and validates there is not. The service then checks the DNS Forwarding Rule Set linked to the virtual network and finds a rule matching the domain pointing queries to the inbound endpoint IP address. The query is passed through the Resolver outbound endpoint to the inbound endpoint and back out the outbound endpoint to the 168.63.129.16 virtual IP address, passing the query to the Azure-provided DNS service.

    The Azure-provided DNS service checks the shared services virtual network and determines there is a linked Azure Private DNS Zone with the name mydomain.nonprod.com. The service resolves the hostname to the IP address and returns the results.

Scenario 3c – Centralized architecture where Azure Private DNS Zones are linked to a central DNS resolution virtual network

This is the more common of the Azure-to-Azure resolution architectures that I come across. Here queries are sent directly to the inbound resolver IP address via the direct or transitive connectivity to the shared services virtual network.

Centralized architecture for resolution of Azure Private DNS Zones

This is a centralized architecture where all virtual networks in the environment have network connectivity to a shared services virtual network through direct or indirect (such as Azure Virtual WAN or a traditional hub and spoke) virtual network peering. The Resolver and its endpoints are deployed to the shared services virtual network. All Azure Private DNS Zones are linked to the shared services virtual network. Each workload virtual network is configured with its DNS Server settings to point to the resolver’s inbound endpoint.

Let me walk through the resolution.

  1. VM1 creates a DNS query for vm2.mydomain.nonprod.com. VM1 does not have a cached entry for vm2.mydomain.nonprod.com so the query is passed on to the DNS Server configured for the VM’s virtual network interface (VNIC). The DNS Server has been configured by the Azure DHCP Service to the resolver’s inbound endpoint at 10.0.0.4. The query is routed over the virtual network peering to the resolver’s inbound endpoint.
  2. The inbound endpoint passes the query to the 168.63.129.16 virtual IP and on to the Azure-provided DNS service. The service determines that an Azure Private DNS Zone named mydomain.nonprod.com is linked to the shared services virtual network. The service then resolves the hostname to the IP address and returns the results.

I’m a fan of this pattern because it’s very simple and easy to understand, which is exactly what DNS should be.

Scenario 4 – Centralized architecture that supports adding DNS query logging (Deprecated)

Now that you have an understanding of the benefits of the Azure Private DNS Resolver, let’s talk about some of the gaps. Prior to July 2025 you couldn’t achieve DNS query logging when using Azure Private DNS Resolver and Azure-provided DNS without the use of an additional server in the middle.

Centralized architecture for supporting DNS query logging

The architecture below was a common way customers addressed the gap when they had plans to eventually move fully into the Private Resolver pattern when it was introduced. DNS Security Policy was introduced in July of 2025 and closed this gap, making this architecture unnecessary if the additional DNS server was solely providing DNS query logging. In this architecture all Azure Private DNS Zones and DNS Forwarding Rule Sets were linked to the shared services virtual network and a customer-managed DNS service was also deployed. All workload virtual networks are connected to the shared services virtual network through direct or indirect (Azure Virtual WAN or traditional hub and spoke) virtual network peering. Each workload virtual network has its DNS Server settings configured to use the customer-managed DNS service IP address.

Let me walk through a scenario where VM1 wants to resolve the IP address for a record in an on-premises DNS namespace.

  1. VM1 creates a DNS query for service.onpremises.com. VM1 does not have a cached entry for service.onpremises.com so the query is passed on to the DNS Server configured for the VM’s virtual network interface (VNIC). The DNS Server has been configured by the Azure DHCP Service to the IP address of the customer-managed DNS service at 10.1.2.4. The query is passed over the virtual network peering to the customer-managed DNS service.
  2. The customer-managed DNS service checks its local cache, validates it’s not authoritative for the zone, and passes the query on to its standard forwarder which has been configured to the resolver’s inbound endpoint at 10.1.0.4.
  3. The inbound endpoint passes the query into the virtual network and out the outbound endpoint, which passes it on to the 168.63.129.16 virtual IP and on to the Azure-provided DNS service. The Azure-provided DNS service checks the Azure Private DNS Zones linked to the shared services virtual network and determines that no Azure Private DNS Zone named onpremises.com is linked to it. The DNS Forwarding Rule Set linked to the virtual network is then processed and the matching rule is triggered, passing the query out of the outbound endpoint, over the ExpressRoute or VPN connection, and to the on-premises DNS service.
  4. The on-premises DNS service checks its local cache and doesn’t find a cached entry. It then checks to see if it is authoritative for the zone, which it is, and resolves the hostname to an IP address and returns the results.

An alternative to using a customer-managed DNS service was using the Azure Firewall DNS proxy service using the pattern documented here.

The primary reason I didn’t remove this architecture completely in my July 2025 update is you’ll still see this architecture in the wild for customers that may not have fully transitioned to DNS Security Policy. Additionally, there may be use cases for using a 3rd-party DNS server to supplement gaps in Azure Private DNS Resolver such as acting as DNS cache or providing advanced DNS features such as virtualized DNS zones.

Summing it up

So yeah, there are a lot of patterns and each one has its own benefits and considerations. My recommendation is for customers to centralize DNS wherever possible because it makes for a fairly simple integration, unless you have concerns over hitting QPS limits. If you have an edge use case for an isolated virtual network, consider the patterns referenced above.

It’s critically important to understand how DNS resolution works from a processing perspective when you have linked Azure Private DNS Zones and DNS Forwarding Rule Sets. The detail is here.

Alexis Plantin put together a great write-up with fancy animated diagrams that put my diagrams to shame. Definitely take a read through his write-up if anything to give him some traffic for creating the animated diagrams. I’m jealous!

There is some good guidance here which talks about considerations for forwarding timeouts when using a third-party DNS server that is forwarding queries to the Azure Private DNS Resolver or to Azure-provided DNS.

Lastly, let me end this with some benefits and considerations of the product.

  • Benefits
    • No infrastructure to manage. Microsoft is responsible for management of the compute powering the service.
    • Unlike Azure-provided DNS alone, it supports conditional forwarding to on-premises.
    • Unlike Azure-provided DNS alone, it supports resolution from on-premises to Azure without the need for a DNS proxy.
    • Supports multiple patterns for its implementation including centralized and decentralized architectures, even supporting isolated virtual networks.
    • DNS query logging can be achieved using DNS Security Policy as of 7/2025.
  • Considerations
    • The Private DNS Resolver MAY NOT support requests from non-RFC 1918 IP addresses to the inbound endpoint. This was a documented limitation, but has since been removed. However, customers of mine still report it does not work. If you have this use case, your best bet is to try it and open a support ticket if you have issues.
    • The Private DNS Resolver DOES NOT support iterative DNS queries. It only supports recursive DNS queries.
    • In high volume DNS environments, such as very large VDI deployments, query per second limits could be an issue.
    • The Private DNS Resolver does not support authenticated dynamic DNS updates. If you have this use case for a VDI deployment, you will need to use a DNS service that does support it.

DNS in Microsoft Azure Part 1 – Azure-provided DNS

Updates:

  • 7/2025 – Removed bullet point highlighting lack of DNS query logging and added that this can be achieved with DNS Security Policy

This is part of my series on DNS in Microsoft Azure.

Hi everyone,

In this series of posts I’m going to talk about a technology, that while old, still provides a critical foundational service.  Yes folks, we’re going to cover the Domain Name System (DNS).  Specifically, we’re going to look at how internal DNS (non-public) works in Microsoft Azure and what the positives and negatives are of each pattern.  I’m going to go into this assuming you have a basic knowledge of DNS and understand the namespaces, different record types, forward and reverse lookup zones, recursive and iterative queries, DNS forwarding and conditional forwarding, and other core DNS concepts. If those topics are unfamiliar to you, I’d suggest reading through DNS 101 by Red Hat.

I’m a big fan of establishing a shared vocabulary. Below I’m going to define some terms I’ll be using throughout the series.

  • A record – Resolves a hostname to an IP address, such as www.journeyofthegeek.com to 5.5.5.5.
  • PTR Record – Resolves an IP address to a hostname.
  • CNAME record – Alias record where you can point one fully qualified domain name (FQDN) to another, whether to give a service a name a human can remember or to support high availability configurations.
  •  Recursive Name Resolution – A DNS query where the client asks for a definitive answer to a query and relies on the upstream DNS server to resolve it to completion.  Forwarders such as Google DNS function as recursive resolvers.
  • Iterative Name Resolution – A DNS query where a client receives back a referral to another server in place of resolving the query to completion.  Querying root hints often involves iterative name resolution.
  • Standard DNS Forwarder – Forwards all queries the DNS service can’t resolve to an upstream DNS service.  These upstream servers are typically configured to perform recursive or iterative name resolution.
  • Conditional Forwarder – Forward queries for a specific DNS namespace to an upstream DNS service for resolution. This is referred to as a forward zone in BIND.
  • Split-brain / Split Horizon DNS – A DNS configuration where a DNS namespace exists authoritatively across one or more DNS implementations.  A common use case is to have a single DNS namespace defined on Internet-resolvable public facing DNS servers and also on Intranet private facing DNS servers.  This allows trusted clients to reach the service via a private IP address and untrusted clients to reach the service via a public IP address.
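
To make the recursive versus iterative distinction concrete, here’s a sketch using dnspython that sends the same question twice to a DNS server, once with the recursion desired (RD) flag set and once with it cleared. The server IP is hypothetical:

```python
import dns.flags
import dns.message
import dns.query  # pip install dnspython

server = "192.168.0.10"  # hypothetical DNS server IP
name = "www.journeyofthegeek.com"

# Recursive query: the RD (recursion desired) flag is set by default and the
# server chases the answer to completion on the client's behalf.
recursive = dns.message.make_query(name, "A")
print(dns.query.udp(recursive, server).answer)

# Iterative-style query: clear RD. A server honoring this returns a referral
# (in the authority section) or a cached answer instead of full resolution.
iterative = dns.message.make_query(name, "A")
iterative.flags &= ~dns.flags.RD
response = dns.query.udp(iterative, server)
print(response.answer or response.authority)
```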

Now that I’ve established our vocabulary for DNS, I want to cover the 168.63.129.16 address.  If you’ve ever done anything even basic in Azure, you’ve probably run into this address or used it without knowing it.  This public IP address is owned by Microsoft and is presented as a virtual IP address serving as a communication channel to the host node for a number of platform resources.  It provides functionality such as virtual machine (VM) agent communication of the VM’s ready state and health state, enables the VM to obtain an IP address via DHCP, and, you guessed it, enables the VM to leverage Azure DNS services.  The address is static and is the same for every VNet you create in every Azure region.

Traffic is routed to and from this virtual IP address through the subnet gateway.  If you run a route print on a Windows machine, you can see this route defined in the routing table of the VM.

Output of route print on Azure VM

The IP address is also included in the VirtualNetwork service tag, meaning the default rules within a network security group (NSG) allow this traffic to and from the VM.  DNS traffic to the IP address is not filtered by NSGs by default, but you can block it with an NSG if you wish, using the instructions outlined here.  You might do this if you do not want clients using the Azure platform for DNS and instead want all of these lookups to occur through another DNS mechanism such as the Azure Private DNS Resolver or a 3rd-party DNS service you have deployed.

Now that you understand what the 168.63.129.16 virtual IP address is, let’s cover the very basics of DNS in Azure.  You can configure Azure’s DHCP service to push a custom set of DNS servers to Azure resources within the virtual network or leave the default.  The default DNS Server setting for a virtual network is the 168.63.129.16 IP address, which provides access to the Azure-provided DNS service.  DNS Server settings pushed through the Azure DHCP Service can be configured at the virtual network or at the virtual network interface (VNI).  Best practice is to configure this at the virtual network; I’ve never come across a use case for configuring it at the VNI.  If you prefer automation over the portal, see the sketch after the screenshot below.

Configure DNS Server DHCP option on VNet
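
For those scripting this instead of clicking through the portal, here’s a hedged sketch of setting the virtual network DNS Server settings with the Python SDK. It assumes recent versions of the azure-identity and azure-mgmt-network packages; the subscription ID, resource group, VNet name, and server addresses are placeholders.

```python
# Sketch: set custom DNS servers at the virtual network level.
# Placeholders: subscription ID, resource group, VNet name, DNS server IPs.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import DhcpOptions

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Fetch the existing VNet, replace its DHCP DNS settings, and push the update.
vnet = client.virtual_networks.get("rg-dns-demo", "vnet1")
vnet.dhcp_options = DhcpOptions(dns_servers=["10.101.0.10", "10.101.0.11"])
client.virtual_networks.begin_create_or_update("rg-dns-demo", "vnet1", vnet).result()

# Note: running VMs typically pick up the new servers only after a DHCP
# lease renewal or a restart.
```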

This brings us to the first option for DNS resolution in Azure, Azure-provided name resolution.  Each time you spin up a virtual network, Azure assigns it a unique private DNS namespace using the format <randomly generated>.internal.cloudapp.net.  This namespace is pushed to any virtual machine with a VNI in the virtual network via DHCP option 15.  An A record is automatically registered for each VM deployed in the virtual network, which gives each VM the built-in ability to resolve the names of virtual machines within the same virtual network.  The platform also creates PTR records in reverse lookup zones created for each subnet in the virtual network that contains VM network interfaces.

Let’s look at an example with a single VNet.  I’ve created a single VNet named vnet1.  I’ve assigned the CIDR block of 10.101.0.0/16 and created a single subnet assigned the 10.101.0.0/24 block.  Two Windows Server 2016 VMs have been created named azuredns and azuredns1 with the IP addresses 10.101.0.4 and 10.101.0.5.  Azure has assigned the namespace r0b5mqxog0hu5nbrf150v3iuuh.bx.internal.cloudapp.net to the VNet.  Note the DHCP Server and DNS Server settings in the ipconfig output of the azuredns VM shown below.

IPConfig output of Azure VM

If azuredns1 is pinged from azuredns, you can see in the Wireshark capture below that prior to executing the ping, azuredns performs a DNS query to the 168.63.129.16 virtual IP and gets back a query response with the IP address of azuredns1.  Pinging the single-label name of the virtual machine works as well because the Azure-provided virtual network DNS namespace is automatically appended to the label by the operating system due to DHCP option 15 (assuming you haven’t configured the operating system to do anything different).  The sketch after the capture demonstrates this behavior.

Wireshark packet capture of DNS query
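
Here’s a small sketch of that behavior from Python, run on the azuredns VM. The single-label lookup succeeds because the operating system appends the DHCP option 15 suffix before querying; the FQDN lookup needs no suffix logic. The suffix shown is the one assigned to my VNet and will differ in yours.

```python
# Run from an Azure VM: single-label vs. fully qualified name resolution.
import socket

# Single label: the OS resolver appends the DHCP option 15 search suffix
# (here r0b5mqxog0hu5nbrf150v3iuuh.bx.internal.cloudapp.net) before querying.
print(socket.gethostbyname("azuredns1"))  # e.g. 10.101.0.5

# Fully qualified name resolves identically, with no suffix logic involved.
print(socket.gethostbyname(
    "azuredns1.r0b5mqxog0hu5nbrf150v3iuuh.bx.internal.cloudapp.net"))
```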

An example of the resolution path is diagrammed below.

In this example, the following happens:

  1. VM1 creates a DNS query for vm2 and the DNS suffix configured for the virtual network is automatically appended to the single label, resulting in a query for vm2.random.internal.cloudapp.net. VM1 does not have a cached entry for vm2.random.internal.cloudapp.net so the query is passed on to the DNS Server configured for the VM’s virtual network interface (VNI). The DNS Server has been configured by the Azure DHCP Service to the 168.63.129.16 virtual IP, which is the default configuration for virtual network DNS Server settings.
  2. The DNS query is passed through the virtual IP and on to the Azure-provided DNS Service. The Azure-provided DNS Service resolves the query against the virtual network’s namespace and returns the IP address for vm2 (the sketch after the diagram replays this query from code).
Azure-provided DNS resolution within a virtual network
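
To replay step 2 from code, you can skip the OS resolver entirely and send the query straight to the virtual IP. This is a hedged sketch using dnspython; the "random" label stands in for the generated namespace, and it only works from a VM inside the VNet.

```python
# Sketch: send the A query directly to the 168.63.129.16 virtual IP,
# mirroring what the Wireshark capture showed. Works only from inside
# the VNet; "random" stands in for the generated namespace label.
import dns.message
import dns.query

query = dns.message.make_query("vm2.random.internal.cloudapp.net", "A")
response = dns.query.udp(query, "168.63.129.16", timeout=3)
print(response.answer)  # expect a single A record with vm2's private IP
```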

That’s all well and good for very basic DNS resolution, but who the heck has a single VNet in anything but a test environment?  So can we expand Azure-provided DNS to multiple VNets?  The answer is yes.  Recall that each VNet has its own private DNS namespace.  The only way to resolve names contained within that namespace is for a VM in that VNet to send the query to the 168.63.129.16 address.  Yes folks, this means you would need to drop a DNS server in each VNet in order for VMs in one VNet to resolve the Azure-provided DNS host names assigned to VMs in another VNet, as illustrated in the diagram below.

In this example, the following happens:

  1. VM1 creates a DNS query for vm3.random2.internal.cloudapp.net. The fully qualified domain name (FQDN) must be provided because each virtual network uses a different randomly generated namespace. VM1 does not have a cached entry for vm3.random2.internal.cloudapp.net so the query is passed on to the DNS Server configured for the VM’s virtual network interface (VNI). Here the virtual network DNS Server settings have been configured to 10.0.2.4, which has been passed down to VM1 through the Azure DHCP Service.
  2. The DNS query arrives at DNS Server 1. DNS Server 1 does not have a cached entry. It determines it is not authoritative for the random2.internal.cloudapp.net namespace but has a conditional forwarder configured for the zone pointing to 10.100.2.4. The DNS query is recursively passed on to 10.100.2.4 over the virtual network peering.
  3. The DNS query arrives at DNS Server 2. DNS Server 2 does not have a cached entry. It determines it is not authoritative for the random2.internal.cloudapp.net namespace and it does not have a conditional forwarder configured. DNS Server 2 has been configured with a standard forwarder pointing to the 168.63.129.16 virtual IP. The query is passed on to the virtual IP and to Azure-provided DNS, which resolves the query for the requested record and returns the response. The sketch after the diagram shows this chain from VM1’s perspective.
Azure-provided DNS with multiple virtual networks
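
From VM1’s point of view the whole chain collapses to a single recursive query against DNS Server 1. Here’s a hedged sketch with dnspython, using the 10.0.2.4 address and the "random2" label from the diagram as placeholders:

```python
# Sketch: VM1's query as seen from the client. DNS Server 1 (10.0.2.4)
# walks the conditional/standard forwarder chain described above.
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)  # ignore OS DNS settings
resolver.nameservers = ["10.0.2.4"]                # DNS Server 1

answer = resolver.resolve("vm3.random2.internal.cloudapp.net", "A")
for rr in answer:
    print(rr.address)
```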

You can see that as the number of VNets increases, the scalability of this solution quickly breaks down, because who the heck wants a DNS server deployed to every virtual network?  Take note that if you wanted to resolve these host names from on-premises, you could use a similar conditional forwarder pattern where you would pass the query to the DNS Server in Azure and on to Azure-provided DNS.

Let’s sum up the positives and negatives of Azure-provided DNS with the default virtual network namespaces.

  • Positives
    • No need to provision your own DNS servers and worry about high availability or scalability
    • DNS service provided by Azure automatically scales
    • VMs within a VNet can resolve each other’s IP addresses out of the box
    • VMs within a VNet can perform reverse lookups to get the IP address of another VM
    • DNS Query Logging is supported with use of DNS Security Policy
  • Negatives
    • Solution doesn’t scale with multiple VNets
    • You’re stuck with the namespace assigned to the VNet
    • WINS and NetBIOS are not supported
    • Only A records and PTR records that are automatically registered by the service are supported (no manual registration of records)

As you can see from the above, the negatives far outweigh the positives for using the default virtual network namespaces, and you’ll likely never use them.  The important thing to take away from this post is an understanding of how DNS Server settings are configured and how you can configure a DNS Server to communicate with the Azure-provided DNS service.  This will be relevant for everything we talk about moving forward.

In the next post I’ll cover Azure’s new offering in the DNS space, Azure Private DNS Zones.  I’ll walk through how it works and how we can combine it with BYO DNS to create some pretty neat patterns.

See you then!