Today I’ll be continuing my series on DNS in Microsoft Azure. In my first post I covered fundamental concepts of DNS resolution in Azure, such as the 168.63.129.16 virtual IP and Azure-provided DNS. In the second post I went over the Azure Private DNS service, its benefits, limitations, and the patterns available when you use Azure Private DNS alone. In this post I’ll explore how, when combined with bring your own DNS (BYODNS), Azure Private DNS begins to really shine and opens up some very cool self-service/delegation models.
If an enterprise has any degree of technical footprint, it will have a DNS infrastructure providing resolution for intranet and Internet resources. These existing services are often very mature and deeply embedded in the technology stack, which means you’re unlikely to ditch your existing DNS service for a cloud-based one out of the gate (if at all). That leaves you with a choice: extend your existing DNS infrastructure into Azure as is, or hook it into cloud-native DNS services such as Azure Private DNS. I’m not going to give you the typical sales pitch about how easy the latter is, because it can be challenging depending on how complex your DNS infrastructure is and what your internal policies and operational models look like. Instead, I’m going to show you how you can make these two services coexist and complement each other.
As I covered in my first post, you can configure VMs to use either the Azure DNS servers or your own DNS servers. This configuration is available at both the VNet level and the VM network interface level. Avoid setting DNS servers directly on a VM’s network interface if possible, because it introduces more management overhead. There are always exceptions to the rule, but make sure to establish what those exceptions are and have a way of tracking them.
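As a sketch of the VNet-level approach, here’s how you might push custom DNS servers with the Azure CLI and spot-check that a NIC isn’t overriding the setting (the resource group, VNet, and NIC names here are placeholders for your own):

```shell
# Push custom DNS servers at the VNet level; every VM in the VNet inherits them.
# Resource group and VNet names are placeholders.
az network vnet update \
  --resource-group rg-network \
  --name vnet-app1 \
  --dns-servers 10.100.4.10 10.100.4.11

# Spot-check that a NIC isn't overriding the VNet setting (an empty list
# means the NIC inherits the VNet's DNS servers).
az network nic show \
  --resource-group rg-network \
  --name vm1-nic \
  --query "dnsSettings.dnsServers"
```

Keep in mind that running VMs only pick up a DNS server change after a restart or a DHCP lease renewal (ipconfig /renew on Windows).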
So you’ve decided you’re going to BYODNS. Common reasons for doing this are:
- Hybrid workloads that require access to on-premises services
- Advanced capabilities of existing DNS services
- Requirements for Windows Active Directory for centralized identity, authentication, and optionally configuration management services
- Maintaining a singular management plane for all DNS services across an organization
Since the requirement for Windows Active Directory services is the most common reason in my experience, I’m going to cover that use case. Keep in mind that you could easily sub in your favorite DNS infrastructure service for the DNS patterns I demonstrate in this post. Yes, this means you could toss in a BIND server or an Infoblox NVA.
With that settled, let’s cover the basics.
In the BYODNS scenario, you’ll want to configure your own DNS servers as seen in the screenshot below (note that you should include at least two DNS servers for redundancy):

When a VM is configured to use a specific set of DNS servers, a few things happen. The screenshot below shows the results of an ipconfig /all on a domain-joined Windows Server 2016 VM. First, you’ll notice that the DNS server being pushed to the VM is the 10.100.4.10 address, which is the DNS server setting I’m pushing at the VNet. The other thing to take note of is that the connection-specific DNS suffix being pushed by the Azure DHCP service is no longer the Azure-provided one (xxx.xxx.internal.cloudapp.net). It’s now reddog.microsoft.com, which is a non-functioning placeholder. This is pushed to avoid interfering with DNS resolution through BYODNS, such as in the domain-joined scenario I’m demonstrating.
The lab environment I’m using for this post looks like the below.

It has three VNets in a hub and spoke architecture, where the shared VNet is peered to both the app1 and app2 VNets. The shared VNet contains a single VM named dc1 acting as a domain controller for a Windows Active Directory forest named journeyofthegeek.com. Each spoke VNet is configured to push the IP of dc1 (10.100.4.10) to the VMs within the VNet as the DNS server, and the VMs in each spoke are domain-joined. I’ve also created multiple Azure Private DNS zones, as seen in the table in the diagram. The shared VNet is linked to all the zones for resolution; each spoke VNet is linked to a zone for registration and resolution.
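If you want to reproduce the zone links in this lab, the Azure CLI commands look roughly like the following (the resource group and VNet names are placeholders; if a VNet lives in a different resource group than the zone, pass its full resource ID to --virtual-network):

```shell
# Create the spoke's private zone.
az network private-dns zone create \
  --resource-group rg-app1 \
  --name app1zone.com

# Link the spoke VNet with auto-registration so its VMs register A records.
az network private-dns link vnet create \
  --resource-group rg-app1 \
  --zone-name app1zone.com \
  --name link-app1 \
  --virtual-network vnet-app1 \
  --registration-enabled true

# Link the shared (hub) VNet for resolution only.
az network private-dns link vnet create \
  --resource-group rg-app1 \
  --zone-name app1zone.com \
  --name link-shared \
  --virtual-network vnet-shared \
  --registration-enabled false
```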
The DNS Server service running on dc1 has been configured to forward all traffic outside of its domain to Google’s public DNS servers. It also has multiple conditional forwarders configured to send traffic for any of the Azure Private DNS zones to the 168.63.129.16 virtual IP. I’ve created a single A record named www in the app1zone.com zone and assigned it the IP of the app1 server (10.102.0.10) in the app1 VNet.
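As a sketch, the conditional forwarder on dc1 and the www record could be created like this (run dnscmd on dc1 itself; the resource group name is a placeholder):

```shell
# On dc1: forward queries for the private zone to the Azure virtual IP.
dnscmd /ZoneAdd app1zone.com /Forwarder 168.63.129.16

# From a management host: create the www A record in the zone.
az network private-dns record-set a add-record \
  --resource-group rg-app1 \
  --zone-name app1zone.com \
  --record-set-name www \
  --ipv4-address 10.102.0.10
```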
If you take a look below at the Azure Private DNS zones assigned to the spokes, you can see that the VMs in each spoke have automatically registered an A record for themselves in the associated zone. Take note that this happened even though each VM is configured to use dc1 as its DNS server. This is the magic of the cloud: the platform itself took care of registering the records.

app1zone.com Private DNS Zone

app2zone.com Private DNS Zone
When a VM needs to perform DNS resolution, it sends its DNS query to dc1. dc1 then forwards queries for the Azure Private DNS zones to Azure DNS via the 168.63.129.16 virtual IP, using the conditional forwarders described earlier (red line). Queries for records in other domains are sent out to the Internet (blue line). The traffic flow is illustrated in the diagram below:

This pattern introduces a few benefits. One is that it addresses a couple of the gaps in Azure Private DNS, namely the lack of conditional forwarding and the lack of query logging.
With no support for conditional forwarding, any VMs you set to use the Azure DNS servers through the 168.63.129.16 virtual IP will only be able to resolve namespaces Azure DNS is aware of. Since Azure DNS has no awareness of the DNS zones running on the domain controller, we’d be out of luck if we needed to use any domain services. This problem extends to any DNS zone you’re running on DNS equipment that isn’t resolvable from the Internet. Yep, this means no hybrid workloads over your private connection back to your on-prem or colo datacenter. The conditional forwarder capability of the BYODNS service allows us to resolve this problem and additionally get queries to Azure DNS when it’s called for.
The other limitation is DNS query logging. As I’ve mentioned before, DNS query logs are excellent inputs to any organization’s behavior analytics to help detect threats in the environment. That log data is that much more important when you move into the cloud, because it helps mitigate the risks of the additional freedoms you’ll be giving application owners and developers to spin up their own resources. By introducing a BYODNS service, we capture that log data.
I fully expect both of these features to eventually make their way into the service. Until that time, the BYODNS pattern demonstrated above can help address the gaps.
You may be asking yourself, “If I have to BYODNS, what does Azure Private DNS get me?” Excellent question. The answer is it can provide self-service and agility, reduce overhead, and mitigate risk. How does it do these things? Let me count the ways:
- In most organizations, DNS is managed by a central IT group. This means application owners and developers have to submit requests and wait for them to be completed. Wouldn’t it be great to let them perform the updates themselves on a zone they own?
- Azure Private DNS is available over a modern REST API. Yes yes, I know you are a scripting ninja with a hundred PowerShell and Bash scripts at your fingertips, but show me a developer in 2019 who wants to write anything in those languages when a REST option is available.
- Managing multiple DNS zones and associated records on BYODNS equipment can require significant overhead in both staff and hardware. This sometimes drives organizations to support fewer zones, which increases the risk of changes to a zone affecting applications. By incorporating Azure Private DNS into the mix, you can reduce the overhead of BYODNS (think how much more once query logging and conditional forwarders are introduced to the service) by letting each business unit own a zone (e.g. marketing.journeyofthegeek.com, hr.journeyofthegeek.com, etc.).
- Show me someone who’s been in operations and hasn’t had a major outage caused by what should have been a simple DNS change. No? I didn’t think so. By giving each BU its own Azure Private DNS zone, you limit the blast radius of a bad change in BU1’s zone affecting BU2. Since each zone is a separate resource in Azure, you can additionally wrap an authorization boundary around that resource, limiting employees to only the zones they need to administer.
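As a sketch of that authorization boundary, you can scope a role assignment to a single zone using the built-in Private DNS Zone Contributor role (the resource group, zone, and group names here are placeholders):

```shell
# Resolve the zone's resource ID so the assignment is scoped to just that zone.
ZONE_ID=$(az network private-dns zone show \
  --resource-group rg-marketing \
  --name marketing.journeyofthegeek.com \
  --query id --output tsv)

# Grant the marketing team rights over only their own zone.
az role assignment create \
  --assignee marketing-dns-admins@journeyofthegeek.com \
  --role "Private DNS Zone Contributor" \
  --scope "$ZONE_ID"
```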
Once you have the above pattern in place, you can easily expand upon it to provide DNS resolution from on-premises VMs to Azure and vice versa. Set up the appropriate connection between Azure and your on-premises environment (S2S VPN or ExpressRoute), put the appropriate conditional forwarders in place on both ends, and you’re good to go! Again, expect this to get easier as the service matures if conditional forwarders and a Private Link endpoint for the service are introduced.
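One wrinkle worth noting: the 168.63.129.16 virtual IP is only reachable from inside Azure, so on-premises DNS servers must forward to a DNS server running in Azure (dc1 in this lab) rather than to the virtual IP directly. A sketch of the forwarders on both ends, with hypothetical zone names and IPs:

```shell
# On the on-premises DNS server: forward the Azure private zones to dc1
# over the S2S VPN or ExpressRoute connection.
dnscmd /ZoneAdd app1zone.com /Forwarder 10.100.4.10

# On dc1: forward the on-premises namespace (hypothetical name) to the
# on-premises DNS server (hypothetical IP).
dnscmd /ZoneAdd corp.example.com /Forwarder 10.50.0.10
```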
Well folks, that wraps up the series. The key thing I want you to take away is that Azure Private DNS isn’t yet in a state where it can replace a mature DNS implementation (I fully expect that to change over time). Instead, you’ll want to use it to supplement your existing DNS implementation to reduce overhead, increase the agility of application owners and developers, and yes, even mitigate a bit of risk in the process.
For those of you who will be stuffing themselves with turkey, stuffing, and mashed potatoes this week, have a wonderful Thanksgiving!