DNS in Microsoft Azure – Part 3


Today I’ll be continuing my series on DNS in Microsoft Azure.  In my first post I covered fundamental concepts of DNS resolution in Azure such as the 168.63.129.16 virtual IP and Azure-provided DNS.  In the second post I went over the Azure Private DNS service, its benefits, its limitations, and the patterns available when you use Azure Private DNS alone.  In this post I’ll be exploring how, when combined with bring your own DNS (BYODNS), Azure Private DNS begins to really shine and introduces opportunities for some very cool self-service/delegation models.

If an enterprise has any degree of technical footprint, it will have a DNS infrastructure providing DNS resolution to intranet and Internet resources.  These existing services are often very mature and deeply embedded into the technology stack.  This means ditching your existing DNS service for a cloud-based DNS service isn’t going to happen out of the gates (if at all).  This leaves you with the question of extending your existing DNS infrastructure into Azure as is, or hooking it into cloud native DNS services such as Azure Private DNS.  I’m not going to give you the typical sales pitch stating how easy it is to do the latter, because it can be challenging depending on how complex your DNS infrastructure is and what your internal policies and operational models are.  Instead I’m going to show you how you can make these two services coexist and complement each other.

As I covered in my first post, you can configure VMs to use either the Azure DNS servers or your own DNS servers.  This configuration is available at both the VNet level and the VM network interface level.  Avoid setting the DNS server settings directly on the VM’s network interface if possible because it will introduce more management overhead.  There are always exceptions to the rule, but make sure you establish what those exceptions are and have a way of tracking them.

So you’ve decided you’re going to BYODNS.  Common reasons for doing this are:

  1. Hybrid workloads that require access to on-premises services
  2. Advanced capabilities of existing DNS services
  3. Requirements for Windows Active Directory for centralized identity, authentication, and optionally configuration management services
  4. Maintaining a singular management plane for all DNS services across an organization

Since the requirement for Windows Active Directory services is the most common reason in my experience, I’m going to cover that use case.  Keep in mind that you could easily sub in your favorite DNS infrastructure service for the DNS patterns I demonstrate in this post.  Yes, this means you could toss in a BIND server or an Infoblox NVA.

With that settled, let’s cover the basics.

In the BYODNS scenario, you’ll want to configure your own DNS servers as seen in the screenshot below (note that you should include at least two DNS servers for redundancy):

dnservers.PNG
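If you’d rather script this setting than click through the portal, the same change can be made with the Azure SDK for Python.  This is a minimal sketch assuming the azure-identity and azure-mgmt-network packages; the subscription ID, resource group, and VNet names are placeholders, and the second DNS server (10.100.4.11) is a hypothetical partner to the 10.100.4.10 server used throughout this post.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import DhcpOptions

subscription_id = "<subscription-id>"  # placeholder
network_client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Fetch the existing VNet, swap its DHCP options for the BYODNS servers,
# and push the update back.  The setting is inherited by every NIC in the VNet.
vnet = network_client.virtual_networks.get("rg-network", "vnet-app1")
vnet.dhcp_options = DhcpOptions(dns_servers=["10.100.4.10", "10.100.4.11"])
network_client.virtual_networks.begin_create_or_update(
    "rg-network", "vnet-app1", vnet
).result()
```

Keep in mind that VMs only pick up a DNS server change at their next DHCP lease renewal, so plan on restarting them (or running ipconfig /renew) after the update.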

When configured to use a specific set of DNS servers, a few things happen at the VM.  The screenshot below shows the results of an ipconfig /all on a domain-joined Windows Server 2016 VM.  First you’ll notice that the DNS server being pushed to the VM is the 10.100.4.10 address, which is the DNS server setting I’m pushing at the VNet.  The other thing to take note of is that the Connection-specific DNS suffix being pushed by the Azure DHCP service is no longer the Azure-provided one (xxx.xxx.internal.cloudapp.net).  It’s now reddog.microsoft.com, which is a non-functioning placeholder.  This is pushed to avoid interfering with DNS resolution through BYODNS, such as the domain-joined scenario I’m demonstrating.

ipconfig.png

The lab environment I’m using for this post looks like the diagram below.

labenv.PNG

It has three VNets in a hub and spoke architecture where the shared VNet is peered to both the app1 and app2 VNets.  The shared VNet contains a single VM named dc1 acting as a domain controller for a Windows Active Directory forest named journeyofthegeek.com.  Each spoke VNet is configured to push the IP of dc1 (10.100.4.10) to the VMs within the VNet as the DNS server.  The VMs in each spoke are domain-joined.  I’ve also created multiple Azure Private DNS zones as seen in the table in the diagram.  The shared VNet has been linked to all the zones for resolution.  Each spoke VNet is linked to a zone for registration and resolution.
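For reference, here’s roughly how one of those spoke zones and its registration link could be stood up with the Azure SDK for Python.  This is a sketch assuming the azure-mgmt-privatedns package; the resource group and link names are placeholders, while the zone and VNet mirror the lab above.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.privatedns import PrivateDnsManagementClient
from azure.mgmt.privatedns.models import PrivateZone, SubResource, VirtualNetworkLink

subscription_id = "<subscription-id>"  # placeholder
client = PrivateDnsManagementClient(DefaultAzureCredential(), subscription_id)

# Private DNS zones are global resources, so the location is always "global".
client.private_zones.begin_create_or_update(
    "rg-dns", "app1zone.com", PrivateZone(location="global")
).result()

# Link the zone to the app1 VNet.  registration_enabled=True is what produces
# the automatic A record registration shown later in this post.
vnet_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-network"
    "/providers/Microsoft.Network/virtualNetworks/vnet-app1"
)
client.virtual_network_links.begin_create_or_update(
    "rg-dns",
    "app1zone.com",
    "app1-link",
    VirtualNetworkLink(
        location="global",
        virtual_network=SubResource(id=vnet_id),
        registration_enabled=True,
    ),
).result()
```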

The DNS Server service running on dc1 has been configured to forward all traffic outside of its domain to Google’s public DNS servers.  It also has multiple conditional forwarders configured to send traffic for any of the Azure Private DNS zones to the 168.63.129.16 virtual IP.  I’ve created a single A record named www in the app1zone.com zone and assigned it the IP of the app1 server (10.102.0.10) in the app1 VNet.
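The www record itself can also be added through the SDK rather than the portal.  A minimal sketch, again assuming the azure-mgmt-privatedns package and a placeholder resource group; the one-hour TTL is an arbitrary choice.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.privatedns import PrivateDnsManagementClient
from azure.mgmt.privatedns.models import ARecord, RecordSet

client = PrivateDnsManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Record sets are addressed by zone, record type, and relative name ("www"),
# matching the structure of the underlying REST API.
client.record_sets.create_or_update(
    "rg-dns",
    "app1zone.com",
    "A",
    "www",
    RecordSet(ttl=3600, a_records=[ARecord(ipv4_address="10.102.0.10")]),
)
```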

If you take a look below at each of the Azure Private DNS zones assigned to the spokes, you can see that the VMs in each spoke have automatically registered an A record for themselves with their associated zone.  Take note that this happened even though each VM is configured to use dc1 as a DNS server.  This is the magic of the cloud platform, where the platform itself takes care of registering the records.

app1zone

app1zone.com Private DNS Zone

app2zone

app2zone.com Private DNS Zone
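If you prefer to verify this outside of the portal, listing the record sets in each spoke zone shows the same auto-registered A records.  A sketch using the azure-mgmt-privatedns package with a placeholder resource group name:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.privatedns import PrivateDnsManagementClient

client = PrivateDnsManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Each spoke zone should contain its SOA record plus an auto-registered
# A record for the VM in that spoke.
for zone in ("app1zone.com", "app2zone.com"):
    for record_set in client.record_sets.list("rg-dns", zone):
        print(zone, record_set.name, record_set.type)
```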

When a VM needs to perform DNS resolution, it sends its DNS query to dc1.  dc1 then sends a DNS query to Azure DNS via the 168.63.129.16 virtual IP for any of the Azure Private DNS zones it has conditional forwarders for (red line); because the shared VNet is linked to all of the zones for resolution, Azure DNS can answer those queries.  Queries for records in other domains are sent out to the Internet (blue line).  The traffic flows are illustrated in the diagram below:

stddnsreso.PNG
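A quick way to see both flows from a VM in one of the spokes is to point a DNS client directly at dc1.  The sketch below uses the dnspython package (my choice, not something from the lab; nslookup would show the same thing) and an arbitrary Internet name to illustrate the blue path.

```python
import dns.resolver

# Query dc1 directly rather than whatever the NIC is configured with.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["10.100.4.10"]  # dc1

# Answered via dc1's conditional forwarder to 168.63.129.16 (red line).
for rr in resolver.resolve("www.app1zone.com", "A"):
    print("private zone:", rr.address)  # 10.102.0.10

# Answered via dc1's forwarders out to the Internet (blue line).
for rr in resolver.resolve("www.bing.com", "A"):
    print("internet:", rr.address)
```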

There are a few benefits this pattern introduces.  One is that it addresses a few of the gaps in Azure Private DNS, namely no conditional forwarding and no query logging.

With no support for conditional forwarding, any VMs you set to use the Azure DNS servers through the 168.63.129.16 virtual IP will only be able to resolve namespaces Azure DNS is aware of.  Since Azure DNS has no awareness of the DNS zones running on the domain controller, we’d be out of luck if we needed to use any domain services.  This problem extends to any DNS zone you’re running on DNS equipment that isn’t resolvable from the Internet.  Yep, this means no hybrid workloads over your private connection back to your on-premises or colo datacenter.  The conditional forwarder capability of the BYODNS service allows us to resolve this problem and additionally get the queries to Azure DNS when it’s called for.

The other limitation is DNS query logging.  As I’ve mentioned before, DNS query logs are excellent inputs to any organization’s behavior analytics to help detect threats in the environment.  That log data is that much more important when you move into the cloud, because it helps mitigate the risks of the additional freedoms you’ll be giving application owners and developers to spin up their own resources.  By introducing a BYODNS service, we capture that log data.

I fully expect both of these features to eventually make their way into the service.  Until that time, the BYODNS pattern demonstrated above can help address the gaps.

You may be asking yourself, “If I have to BYODNS, what does Azure Private DNS get me?”  Excellent question.  The answer is that it can provide self-service and agility, reduce overhead, and mitigate risk.  How does it do these things?  Let me count the ways:

  1. In most organizations, DNS is managed by a central IT group.  This means application owners and developers have to submit requests and wait for those requests to be completed.  Wouldn’t it be great to let them perform the updates themselves on a zone they own?
  2. Azure Private DNS is available over a modern REST API.  Yes yes, I know you are a scripting ninja and have a hundred PowerShell and Bash scripts available at your fingertips, but show me a developer in 2019 who wants to write anything in those languages when a REST option is available (see the sketch after this list).
  3. Managing multiple DNS zones and associated records on BYODNS equipment can require significant overhead in both staff and hardware.  This sometimes drives organizations to support fewer zones, which increases the risk of changes to a zone affecting applications.  By incorporating Azure Private DNS into the mix, you can reduce the overhead of BYODNS (think of how much more so when logging and conditional forwarders are introduced) by letting each business unit own a zone (e.g. marketing.journeyofthegeek.com, hr.journeyofthegeek.com, etc.).
  4. Show me someone who has been in operations who hasn’t had a major outage caused by what should have been a simple DNS change.  No?  I didn’t think so.  By giving each BU its own Azure Private DNS zone, you limit the blast radius so a bad change by BU1 doesn’t affect BU2.  Since each zone is a separate resource in Azure, you can additionally wrap an authorization boundary around that resource, limiting employees to only the zones they need to administer.

Once you have the above pattern in place, you can easily expand upon it to provide DNS resolution from on-premises VMs to Azure and vice versa.  Set up the appropriate connection between Azure and your on-premises network (S2S VPN, ExpressRoute), put in the appropriate conditional forwarders on both ends, and you’re good to go!  Again, expect this to get easier as the service matures if conditional forwarders and a PrivateLink endpoint for the service are introduced.

Well folks, that will wrap up the series.  The key thing I want you to take away from this is that Azure Private DNS isn’t yet in a state where it can replace a mature DNS implementation (I fully expect that to change over time).  Instead, you will want to use it to supplement your existing DNS implementation to reduce overhead, increase the agility of application owners and developers, and yes, even mitigate a bit of risk in the process.

For those of you who will be stuffing themselves with turkey, stuffing, and mashed potatoes this week, have a wonderful Thanksgiving!

 

DNS in Microsoft Azure – Part 1


Hi everyone,

In this series of posts I’m going to talk about a technology that, while old, still provides a critical foundational service.  Yes folks, we’re going to cover the Domain Name System (DNS).  Specifically, we’re going to look at the options for private DNS in Microsoft Azure and what the positives and negatives are of each pattern.  I’m going to go into this assuming you have a basic knowledge of DNS and understand namespaces, the various record types, forward and reverse lookup zones, recursive and iterative queries, DNS forwarding and conditional forwarding, and other core DNS concepts.  If any of those are unfamiliar to you, take some time to review the basics then come back to this post.

Before we jump into the DNS options in Azure, I first want to cover the 168.63.129.16 address.  If you’ve ever done anything even basic in Azure, you’ve probably run into this address or used it without knowing it.  This public IP address is owned by Microsoft and is presented as a virtual IP address serving as a communication channel to the host node for a number of platform resources.  It provides functionality such as virtual machine (VM) agent communication of the VM’s ready state and health state, enables the VM to obtain an IP address via DHCP, and, you guessed it, enables the VM to leverage Azure DNS services.  The address is static and is the same for any VNet you create in any Azure region.  Fun fact: some geolocation services will report this IP as being based out of Hong Kong, and I’m sure you can imagine how that works out when something like a WAF is in place with regional IP restrictions.  Fun times. 🙂

Traffic is routed to and from this virtual IP address through the subnet gateway.  If you run a route print on a Windows machine, you can see this route defined in the routing table of the VM.

route

Output of route print on Azure VM

The IP address is also included in the VirtualNetwork service tag, meaning the default rules within a network security group (NSG) allow this traffic to and from the VM.  Given the criticality of the functions the IP plays, Microsoft recommends you allow inbound and outbound communication with it (it’s a requirement for using any of the Azure DNS services we’ll discuss in these posts).

Now that you understand what the 168.63.129.16 virtual IP address is, let’s first cover the very basics of DNS in Azure.  You can configure Azure’s DHCP service to push a custom set of DNS servers to Azure VMs or optionally leave the default, which is for VMs to use Azure’s DNS services (through the 168.63.129.16 virtual IP address).  This can be configured at the VNet level and then inherited by all virtual network interfaces (VNIs) associated with the VNet, or optionally configured directly on the VNI associated with the VM.

Configure DNS on VNet

Configure DNS on VNet

This brings us to the first option for DNS resolution in Azure: Azure-provided name resolution.  Each time you spin up a virtual network, Azure assigns it a unique private DNS namespace using the format <randomly generated>.internal.cloudapp.net.  This namespace is pushed to the machine via DHCP Option 15, thus each VM has a fully qualified domain name of <vm_host_name>.<randomly generated>.internal.cloudapp.net, and each VM in the VNet can resolve the IP addresses of the others.

Let’s look at an example with a single VNet.  I’ve created a single VNet named vnet1.  I’ve assigned it the CIDR block of 10.101.0.0/16 and created a single subnet assigned the 10.101.0.0/24 block.  Two Windows Server 2016 VMs have been created named azuredns and azuredns1 with the IP addresses 10.101.0.4 and 10.101.0.5.  Azure has assigned a namespace of r0b5mqxog0hu5nbrf150v3iuuh.bx.internal.cloudapp.net to the VNet.  Note the DHCP Server and DNS Server settings in the ipconfig output of the azuredns VM shown below.

ipconfig

IPConfig output of Azure VM

If we ping azuredns1 from azuredns, we can see in the Wireshark capture below that prior to executing the ping, azuredns performs a DNS query to the 168.63.129.16 virtual IP and gets back a query response with the IP address of azuredns1.

wireshark

Wireshark packet capture of DNS query
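You can reproduce that same lookup from one of the VMs in the VNet by pointing a DNS client directly at the virtual IP.  A sketch using the dnspython package (any DNS client such as nslookup or Resolve-DnsName would show the same thing):

```python
import dns.resolver

# Send the query straight to the Azure-provided DNS virtual IP.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["168.63.129.16"]

# The FQDN combines the VM host name with the VNet's assigned namespace.
fqdn = "azuredns1.r0b5mqxog0hu5nbrf150v3iuuh.bx.internal.cloudapp.net"
for rr in resolver.resolve(fqdn, "A"):
    print(rr.address)  # 10.101.0.5
```

On the VM itself the short name azuredns1 also resolves, because the namespace is pushed as the connection-specific DNS suffix via DHCP Option 15.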

The resolution process is very simple as seen in the diagram below.

simple_reso

DNS Resolution within single VNet

Well, that’s all well and good for very basic DNS resolution, but who the heck has a single VNet in anything but a test environment?  So can we expand Azure-provided DNS to multiple VNets?  The answer is yes, but it’s ugly.  Recall that each VNet has its own private DNS namespace.  The only way to resolve names contained within that namespace is for a VM in that VNet to send the query to the 168.63.129.16 address.  Yes folks, this means you would need to drop a DNS server in each VNet in order for VMs in other VNets to resolve the Azure-provided DNS host names assigned to the VMs within that VNet, as illustrated in the diagram below.

multi_vnet_reso

Multiple VNet resolution

You can see that as the number of VNets increases, the scalability of this solution quickly breaks down.  Take note that if you wanted to resolve these host names from on-premises, you could use a similar conditional forwarder pattern.

Let’s sum up the positives and negatives of Azure-provided DNS.

  • Positives
    • No need to provision your own DNS servers and worry about high availability or scalability
    • DNS service provided by Azure automatically scales
    • VMs within a VNet can resolve each other’s IP addresses out of the box
  • Negatives
    • Solution doesn’t scale with multiple VNets
    • You’re stuck with the namespace assigned to the VNet
    • WINS and NetBIOS are not supported
    • Only A records that are automatically registered by the service are supported (no manual registration of records)
    • No reverse DNS support
    • No query logging

As you can see from the above, the negatives far outweigh the positives.  Personally, I see Azure-provided DNS only being useful for bare bones test environments with a single VNet.  If anyone has any other scenarios where it comes in handy, I’d love to hear them.

In the next post I’ll cover Azure’s new offering in the DNS space, Azure Private DNS Zones.  I’ll walk through how it works and how we can combine it with BYODNS to create some pretty neat patterns.

See you then!