Azure Files and AD DS – Part 2

Hi there and welcome to the second post in my series about Azure Files integration with AD DS. In the first post I gave an overview of the service, the value proposition, its current limitations, and described the lab I’ll be using for this post. For this post I’ll be walking through the setup, examining some packet captures and Fiddler captures, and touching on a few of the gotchas I ran into.

Before I jump into the technical gooey goodness, I’m going to cover some prerequisites.

Prerequisites

One obvious requirement is you'll need a Windows AD domain up and running, and the machine you connect to the share from will need to be joined to that domain. One disclaimer to keep in mind: if you have a multiple Windows AD forest scenario, such as an account and resource forest, you'll need to be aware of which domain you're integrating the Azure File share with. If you integrate it with a resource forest but have your user accounts in the account forest, you'll need to use name suffix routing. I'm not going to go into the details of name suffix routing, but if you're curious you can read through this article. The short of it is that the service principal name associated with the computer or service account representing the Azure Storage account uses the domain suffix of file.core.windows.net. When performing Kerberos authentication, the domain controller in the account forest wouldn't know to direct the request to the resource forest because that suffix is not associated with your resource forest. For this lab I created a Windows AD domain with the namespace jogcloud.local.

You will also need to ensure that you are synchronizing to Azure AD the users and groups from your Windows AD domain that you want to be able to access the Azure File share. The tenant you synchronize to must be the same tenant the Azure subscription containing the Azure Storage account is associated with. Don't worry about why right now, I'll cover that later. For this lab I'll be using my jogcloud.com tenant.

Logical Layout

To store those wonderful files you'll need an Azure Storage account. The storage account should be created in the same region as the clients that will access the file share (or the closest region, if your clients are on-premises). This will ensure optimal performance and avoid cross-region costs if your clients are in Azure. You can use either a storage account with the standard GPv2 (General Purpose v2) SKU or the Premium FileStorage SKU if you need better performance and scale. For this lab I'll be using the GPv2 SKU.

Lastly, networking requirements. Like all Azure PaaS offerings, Azure Storage is by default available over the public Internet. Since no sane human being wants to send SMB traffic over the public Internet, you have the option of using a private endpoint. For this lab I’ll be going the private endpoint route.

So prerequisites are now set, let’s jump into the setup.

Integrating Azure Storage account with Windows AD

The first step in the process to get this integration working is to get the Azure Storage account you’ll be using setup with an identity in Windows AD. A kind human being over at Microsoft wrote a wonderful Azure PowerShell module that makes what I’m about to do a hundred times easier and is the recommended way to go about this. I’m not going to use it for this demonstration because I want to walk through each of the steps in the process to better your understanding of the magic within the module.

Before you run any commands you'll need to ensure you have the necessary PowerShell modules installed. You can validate this by running Get-Module -ListAvailable to display the PowerShell modules installed on the machine.
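For this walkthrough that means the Az modules for the Azure cmdlets and the ActiveDirectory module that ships with RSAT; the module names below are my assumption of the minimum set. A quick check might look like this:

# Verify the modules used in this walkthrough are installed
# (Az.Accounts and Az.Storage for Azure, ActiveDirectory for the AD cmdlets)
Get-Module -ListAvailable -Name Az.Accounts, Az.Storage, ActiveDirectory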

Now we need to create the security principal that is going to represent the Azure Storage account in Windows AD. You have the option of either using a traditional service account (user account) or a computer account. As of August 28th, 2020, there are some limitations you'll run into if you use a service account over a computer account. My recommendation is to use a computer account for now. I'll cover the limitation later on in this entry. For the purposes of this blog post, though, I'll be using a service account.

One important thing to note here is you need to treat this just like you would a traditional service account. By this I mean you will want to create the account with a non-expiring password and put in appropriate controls to perform a controlled rotation of the credential to avoid service disruption.

Here I’ve created a service account with the name azurestorage and have set it with a non-expiring password.

Service account for Azure Storage account

Next up I'm going to create an SPN (service principal name) for the service account. The SPN identifies the Azure Storage account to Windows AD and tells the user's system which service it needs to obtain a Kerberos ticket for. The SPN uses the CIFS service class and includes the FQDN of the Azure Files endpoint on your storage account. It will look like cifs/stjogfileshare.file.core.windows.net. You can register the SPN using the setspn -S <SPN> <ACCOUNT_NAME> command. The -S switch will validate that the SPN is not already registered to another security principal in the domain.

Setting the SPN
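Using the lab names from above, the registration and a quick verification look like this:

# Register the CIFS SPN for the Azure Files endpoint on the azurestorage account
setspn -S cifs/stjogfileshare.file.core.windows.net azurestorage

# List the SPNs on the account to confirm the registration
setspn -L azurestorage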

So you have a service account and an SPN. Now you need to create a credential in Azure Storage and associate that credential with the service account. To create that credential you’ll need to hop over to PowerShell and connect to Azure. Once connected you’ll use the New-AzStorageAccountKey and Get-AzStorageAccountKey cmdlets to create and retrieve the storage account key used for the integration. It’s important to note that this key (named kerb1 or kerb2) is only used to setup this integration and can’t be used for any control or data plane operations against the storage account.

Configure the service account with this key as its password.

Creating the account key
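In case it helps, here's a minimal sketch of that step in Azure PowerShell; rg-storage is a placeholder resource group name and the storage account name is the one from my lab:

# Create the Kerberos key used only for the AD DS integration
New-AzStorageAccountKey -ResourceGroupName "rg-storage" -Name "stjogfileshare" -KeyName kerb1

# Retrieve it; the -ListKerbKey switch is required to return the kerb keys
$kerbKey = (Get-AzStorageAccountKey -ResourceGroupName "rg-storage" -Name "stjogfileshare" -ListKerbKey |
    Where-Object { $_.KeyName -eq "kerb1" }).Value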

The last piece in this step of the process is to enable the AD DS feature support for the storage account. To do this you’ll use the Set-AzStorageAccount cmdlet using the syntax below.

Set-AzStorageAccount syntax
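For reference, the call looks roughly like the below. The domain values are from my jogcloud.local lab, the resource group name is a placeholder, and the bracketed values are placeholders we'll gather in the next step.

# Enable AD DS authentication for Azure Files on the storage account
Set-AzStorageAccount -ResourceGroupName "rg-storage" -Name "stjogfileshare" `
    -EnableActiveDirectoryDomainServicesForFile $true `
    -ActiveDirectoryDomainName "jogcloud.local" `
    -ActiveDirectoryNetBiosDomainName "jogcloud" `
    -ActiveDirectoryForestName "jogcloud.local" `
    -ActiveDirectoryDomainGuid "<domain-guid>" `
    -ActiveDirectoryDomainSid "<domain-sid>" `
    -ActiveDirectoryAzureStorageSid "<service-account-sid>"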

All of the inputs between the Name and ActiveDirectoryAzureStorageSid parameters can be obtained using the Get-ADDomain cmdlet as seen below.

Get-ADDomain Output

The ActiveDirectoryAzureStorageSid parameter can be obtained by using the Get-ADUser cmdlet as seen below.

Get-ADUser Output
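If you'd rather script the gathering of those inputs, a quick sketch using the account name from earlier:

# Domain name, NetBIOS name, forest, GUID, and SID all come from Get-ADDomain
$domain = Get-ADDomain
$domain.DNSRoot
$domain.NetBIOSName
$domain.Forest
$domain.ObjectGUID
$domain.DomainSID.Value

# The SID of the security principal representing the storage account
(Get-ADUser -Identity "azurestorage").SID.Value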

Once you have the inputs, you'll plug them into the Set-AzStorageAccount cmdlet. If successful you'll get output similar to below.

Set-AzStorageAccount Output

If you'd like to confirm the feature is enabled, you can do that with the steps documented here. The AzFilesHybrid module I mentioned earlier also has some great debugging tools as outlined here.

Now that the integration is complete, I need to create a file share and configure authorization at the management plane. There are two separate layers of authorization occurring, one for access to the file share itself and the other for access to the files and folders within the file share. Access to the share itself is controlled by Azure RBAC and thus controlled by the Azure management plane. There are three built-in roles provided that should service most use cases:

  1. Storage File Data SMB Share Reader, which allows read access to the file share over SMB
  2. Storage File Data SMB Share Contributor, which allows read, write, and delete access to the file share over SMB
  3. Storage File Data SMB Share Elevated Contributor, which allows read, write, delete, and modification of the Windows ACLs of the file share over SMB

You are free to design your own custom roles, but those three built-in roles map pretty closely to the share-level permissions you'd see on a typical Windows file share.

The second layer of authorization is controlled by the Windows ACLs (access control lists) associated with the share, files, and folders. These are your classic Windows ACLs you know and love and will be enforced by the Windows OS.

Just like on a traditional Windows file share, the most restrictive of the controls will apply. This means if you've only been granted the Storage File Data SMB Share Reader role, it won't matter if you have full permissions in the Windows ACLs; you will only be able to read and will not be able to write.

Let me demonstrate this.

Here I have assigned the Bob Gray user the Storage File Data SMB Share Reader role on the stjogfileshare storage account. Bob Gray is a Domain Administrator in the jogcloud.local Windows AD domain and a local administrator on the member server.

Role Assignment on Storage Account

As seen below I'm able to successfully map the shared folder, but I'm unable to create a folder on it because the management plane is restricting my access.

Unable to create a folder when holding the Reader RBAC role

Running the klist command on the machine shows I successfully obtained a Kerberos ticket for the file share.

klist Output
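If you want to check for the ticket on your own machine, klist will show the ticket cache or request a ticket for the SPN directly:

# Show cached Kerberos tickets, or explicitly request one for the file share SPN
klist
klist get cifs/stjogfileshare.file.core.windows.net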

The packet capture I ran when I mapped the share shows in the SMB conversation that the client and server are using the negotiate protocol (which includes the Kerberos protocol).

Session Negotiation

After the Kerberos ticket is obtained from the domain controller the client sets up the session with the storage account.

Session Setup

From this point forward, the encryption capabilities of SMB 3 are used to encrypt the session between the client and Azure storage account.

Remember the limitation around using a service account as the security principal representing Azure Files I mentioned earlier? Well that limitation is around the encryption algorithm used to encrypt the Kerberos tickets passed to Azure Files. Out of the gates, the Azure Files with AD DS feature only supported the RC4-HMAC encryption algorithm. This is a deprecated algorithm according to the IETF (Internet Engineering Task Force) and should not be used. Many organizations in regulated environments have disabled this encryption algorithm in Active Directory due to its deprecation.

As of August 28th, 2020, the integration now supports AES256 encryption for Kerberos. However, this is only supported if you’re using a Computer account. This means that if you are already using the service with a service account and you want to move to AES256, you’ll need to migrate to using a Computer account. I expect support for a service account will be added sometime in the future, so check the official documentation for updates.

If you try to use a service account and attempt to enforce the use of AES256, your connection will fail. If you do a packet capture you'll see the Azure Storage account throw a KRB Error of KRB5KRB_AP_ERR_MODIFIED, indicating the Azure Storage account was unable to validate the ticket that was passed because it doesn't support the encryption algorithm used to secure it.

I went through and created a group in Windows AD named engineering and added Bob Gray to it. I then removed the Storage File Data SMB Share Reader role assignment for Bob Gray and created a role assignment for the engineering group for the Storage File Data SMB Share Contributor role. I'm now able to create files and folders on the share as seen below.

Creating folder on share

Bringing up the permissions on the folders, you'll observe that there are a few default permissions which come out of the box. You can modify these default permissions if you'd like (for example by removing the overly permissive Authenticated Users Read/Modify entry). You do this by mounting the share as a super user using the standard storage keys. The process is outlined here.

Default permissions
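The super user mount is essentially a net use with AZURE\<storage-account-name> as the user and the standard storage account key as the password; a minimal sketch, assuming a share named fileshare:

# Mount as super user with the standard storage account key (not the kerb key)
# so the Windows ACLs can be modified
net use Z: \\stjogfileshare.file.core.windows.net\fileshare /user:Azure\stjogfileshare <storage-account-key>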

That is pretty much all there is to it to the technical configuration.

So when you use this service what are some of the best practices that I would recommend?

  1. Use a computer account if you require an encryption algorithm better than RC4-HMAC. At this time, computer accounts are the only type of security principal which supports AES256 encryption.
  2. Ensure you rotate the password at whatever interval aligns with your organizational security policy and any laws and regulations you may be subject to.
  3. When you create your Azure RBAC role assignments, use synchronized groups vs synchronized users. You do this for the same reason you would on-premises: granting access per user is not scalable. (There's a quick sketch of a group-based role assignment after this list.)
  4. Dedicate the storage account you use for the file share to only file shares. Storage accounts have fixed limits that are shared across blobs, queues, tables, and files. You don’t want to get into a situation where you have to share those limits.
  5. Do your research to determine if Premium FileStorage makes more sense than GPv2. It’s more costly but provides better performance and scale.
  6. Try to deploy one file share per storage account if possible to ensure you get the maximum IOPS available for that file share. You can certainly put multiple file shares in the same storage account, but they will share the total IOPS available for the storage account.
  7. Ensure you are replicating files to another storage account. Unlike blobs, you can't read from the secondary region if you're using an RA-GRS storage account. If you're using the Premium FileStorage SKU, the storage account will only support LRS and ZRS, which makes replicating to a storage account in another region that much more important. You could use AzCopy, PowerShell, or Azure Data Factory for the replication.
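To make number three concrete, here's a minimal sketch of a group-based role assignment in Azure PowerShell. The engineering group is the one from my lab; the subscription and resource group values are placeholders you'd swap for your own.

# Assign the SMB Contributor role to the synchronized engineering group,
# scoped to the storage account (it can also be scoped down to the file share)
$group = Get-AzADGroup -DisplayName "engineering"
New-AzRoleAssignment -ObjectId $group.Id `
    -RoleDefinitionName "Storage File Data SMB Share Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/stjogfileshare"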

That's it folks! Hope this post helped you understand the feature that much better.

Thanks and see you next post!

Azure Files and AD DS – Part 1

Welcome back folks.

I recently had a few customers reach out to me with questions around Azure Files integration with Windows Active Directory Domain Services (AD DS). Since I had never used it, I decided to build a small lab and test the functionality and better understand the service. In this series I’ll be walking through what the functionality provides, how I observed it working, and how to set it up.

In any enterprise you will have a number of Windows file shares hosting critical corporate data. If you've ever maintained or supported those file shares you're quite familiar with the absolute sh*t show that occurs across an organization when the coveted H drive is no longer accessible. Maintaining a large cluster of Windows Servers backing corporate file shares can be significantly complex to support. You have your upgrades, patches, needs to scale, and failures with DFS-R (Distributed File System Replication) or FRS (File Replication Service) for some of you more unfortunate souls. Wouldn't it be wonderful if all that infrastructure could be abstracted and managed by someone else besides you? That is the major value proposition of Azure Files.

Azure Files is a PaaS (platform-as-a-service) offering provided by Microsoft Azure that is built on top of Azure Storage. It provides fully managed file shares over a protocol you know and love, SMB (Server Message Block). You simply create an Azure File share within an Azure Storage account and connect to the file share using SMB from a Windows, Linux, or MacOS machine.
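To give a sense of how little there is to it, here's a sketch of creating a share through the management plane; the resource group, account, and share names are placeholders:

# Create a 100 GiB file share on an existing storage account
New-AzRmStorageShare -ResourceGroupName "rg-storage" `
    -StorageAccountName "stjogfileshare" `
    -Name "fileshare" `
    -QuotaGiB 100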

Magic right? Well what about authentication and authorization? How does Microsoft validate that you are who you say you are and that you’re authorized to connect to the file share? That my friends will be what we cover from this point on.

Azure File shares support three methods of authentication:

  1. Storage account access keys
  2. Azure AD Domain Services
  3. AD DS (Active Directory Domain Services)

Of the three methods, I'm going to cover authentication using AD DS (which I'll refer to as Windows AD).

Support for Windows AD with Azure Files graduated to general availability last month. As much as we'd like it to not be true, Windows AD and traditional SMB file shares will be with us for many years to come. This is especially true for enterprises taking the hybrid approach to cloud, which is a large majority of the customer base I work with. The Windows AD integration allows organizations to leverage their existing Windows AD identities for authentication and protect the files using the Windows ACLs (access control lists) they've grown to love. This provides a number of benefits:

  • Single sign-on experience for users (via Kerberos)
  • Existing Windows ACLs can be preserved if moving files to an Azure File share integrated with Windows AD
  • Easier to supplement existing on-premises file servers without affecting the user experience
  • Better support for lift and shift workloads which may have dependencies on SMB
  • No infrastructure to manage
  • Support for Azure File Sync, which allows you to store shares in Azure Files and create caches on Windows Servers

There are a few key dependencies and limitations I want to call out. Keep in mind you’ll want to visit the official documentation as these will change over time.

  • The Windows AD identities you want to use to access the file shares must be synchronized to Azure AD (we’ll cover the why later)
  • The storage account hosting the Azure File share must be in the same tenant you’re syncing the identities to
  • Linux VMs are not supported at this time
  • Services using the computer account will not be able to access an Azure File share so plan on using a traditional service account (aka User account) instead
  • Clients accessing the file share must be Windows 7/Server 2008 R2 or above

Lab Environment

So I’ve given you the marketing pitch, let’s take a look at the lab environment I’ll be using for this walkthrough.

For my lab I’ve provisioned three VMs (virtual machines) in a VNet (virtual network). I have a domain controller which provides the jogcloud.local Windows AD forest, an Azure AD Connect server which is synchronizing users to the jogcloud Azure AD tenant, and a member server which I’ll use to access the file share.

In addition to the virtual machines, I also have an Azure Storage account where I’ll create the shares. I’ve also configured a PrivateLink Endpoint which will allow me to access the file share without having to traverse the Internet. Lastly, I have an Azure Private DNS zone hosting the necessary DNS namespace needed to handle resolution to my Private Endpoint. I won’t be covering the inner workings of Azure Private DNS and Private Endpoints, but you can read my series on how those two features work together here.

In my next post I'll dive into how to set up the integration, walk through some Wireshark and Fiddler captures, and cover some of the challenges I ran into when running through this lab for this series.

See you next post!

AWS Managed Microsoft AD Deep Dive Part 7 – Trusts and Domain Controller Event Logs

Welcome back fellow geek.  Today I’m continuing my deep dive series into AWS Managed Microsoft AD.  This will represent the seventh post in the series and I’ve covered some great content over the series including:

  1. An overview of the service
  2. How to setup the service
  3. The directory structure, pre-configured security principals, group policies and the delegated security model
  4. How to configure LDAPS and the requirements that pop up due to Amazon’s delegation model
  5. Security of the service including supported secure transport protocols, ciphers, and authentication protocols
  6. How do schema extensions work and what are the limitations

Today I'm going to cover three additional capabilities of AWS Managed Microsoft AD: the creation of trusts, access to the Domain Controller event logs, and scalability.

I'll first cover the capabilities around Active Directory trusts.  Providing this capability opens up a number of scenarios that aren't possible in managed Windows Active Directory (Windows AD) services that don't support trusts, such as Microsoft's Azure Active Directory Domain Services.  Some of the scenarios that pop into my head are resource forests, trusts with trusted partners to maintain collaboration for legacy applications (applications dependent on legacy protocols such as Kerberos/NTLM/LDAP), trusts between development, QA, and production forests, and the usage of features such as selective authentication to mitigate the risk to on-premises infrastructure.

For many organizations, modernization of an entire application catalog isn't feasible, but those organizations still want to take advantage of the cost and security benefits of cloud services.  This is where AWS Managed Microsoft AD can really shine.  Its capability to support Active Directory forest trusts opens up the opportunity for those organizations to extend their identity boundary to the cloud while supporting legacy infrastructure.  Existing on-premises core infrastructure services such as PKI and SIEM can continue to be used and even extended to monitor the infrastructure using the managed Windows AD.

As you can see this is an extremely powerful capability and makes the service a good fit for almost every Windows AD scenario.  So that's all well and good, but if you wanted marketing material you'd be reading the official documentation, right?  You came here for the deep dive, so let's get into it.

The first thing that popped into my mind was the question of how Amazon would provide this capability in a managed service model.  Creating a forest trust typically requires membership in privileged groups such as Enterprise Admins and Domain Admins, which obviously isn't possible in a managed service.  I'm sure it's possible to delegate the creation of Active Directory trusts and DNS conditional forwarders with modifications of directory permissions and possibly user rights, but there's a better way.  What is this better way, you may be asking yourself?  Perhaps serving it up via the Directory Services console in the same way schema modifications are served up?

Let's walk through the process of setting up an Active Directory forest trust between a customer-managed traditional implementation of Windows Active Directory and an instance of AWS Managed Microsoft AD.  For this I'll be leveraging my home Hyper-V lab.  I'm actually in the process of rebuilding it so there isn't much there right now.  The home lab consists of two virtual machines: one named JOG-DC, which runs Windows Server 2016 and functions as a domain controller (AD DS) and certificate authority (AD CS) for the journeyofthegeek.com Active Directory forest.  The other virtual machine is named JOG-CLIENT, runs Windows 10, and is joined to the journeyofthegeek.com domain.  I've connected my VPC with my home lab using AWS's Managed VPN to set up a site-to-site IPSec VPN connection with my local pfSense box.


Prior to setting up the trusts there are a few preparatory steps that need to be completed.  The steps will be familiar to those of you who have established forests trusts across firewalled network segments.  At a high level, you’ll want to perform the following tasks:

  1. Ensure the appropriate ports are opened between the two forests.
  2. Ensure DNS resolution between the two forests is established.

For the first step I played it lazy since this is a temporary configuration (please don't do this in production).  I allowed all traffic from the VPC address range to my lab environment by modifying the firewall rules on my pfSense box.  On the AWS side I needed to adjust the traffic rules for the security group SERVER01 is in, as well as the security group for the managed domain controllers.


To establish DNS resolution between the two forests I'll be using conditional forwarders set up within each forest.  Setting the conditional forwarders up in the journeyofthegeek.com forest means I have to locate the IP addresses of the managed domain controllers in AWS.  There are a few ways you could do it, but I went to the AWS Directory Services Console and selected the geekintheweeds.com directory.
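As an aside, the conditional forwarder on the journeyofthegeek.com side can also be created with the DnsServer PowerShell module instead of the MMC; the IP addresses below are examples, use the ones reported for your directory:

# Create a conditional forwarder for the AWS managed domain
Add-DnsServerConditionalForwarderZone -Name "geekintheweeds.com" `
    -MasterServers 10.0.0.10, 10.0.0.11 `
    -ReplicationScope "Forest"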


In the Directory details section of the console, the DNS address field lists the IP addresses the domain controllers are using.


After creating the conditional forwarder in the DNS Management MMC in the journeyofthegeek.com forest, DNS resolution of a domain controller from geekintheweeds.com was successful.


I next created the trust in the journeyofthegeek.com domain using the Active Directory Domains and Trusts MMC, making sure to select the option to create the trust in this domain only and recording the trust password.  We can't create the trust in both domains since we don't have an account with the appropriate privileges in the AWS managed domain.

Next up I bounced back over to the Directory Services console and selected the geekintheweeds.com directory.  From there I selected the Network & security tab to open the menu needed to create the trust.


From here I clicked the Add trust relationship button which brings up the Add a trust relationship menu.  Here I filled in the name of the domain I wanted to establish the trust with, entered the trust password I set up in the journeyofthegeek.com domain, selected a two-way trust, and added an IP address that will be used in the configuration of the conditional forwarder set up by the managed service.


After clicking the Add button the status of the trust is updated to Creating.


The process takes a few minutes after which the status reports as verified.


Opening up the Active Directory Users and Computers (ADUC) MMC in the journeyofthegeek.com domain and selecting the geekintheweeds.com domain successfully displays the directory structure.  Trying the opposite in the geekintheweeds.com domain works correctly as well.  So our two-way trust has been created successfully.  We would now have the ability to setup any of the scenarios I talked about earlier in the post including a resource forest or leveraging the managed domain as a primary Windows AD service for on-premises infrastructure.

The second capability I want to briefly touch on is the ability to view the Security Event Log and DNS Server logs on the managed domain controllers.  Unlike Microsoft’s managed Windows AD service, Amazon provides ongoing access to the Security Event Log and DNS Server Log.  The logs can be viewed using the Event Log MMC from a domain-joined machine or programmatically with PowerShell.  The group policy assigned to the Domain Controllers OU enforces a maximum event log size of 256MB but Amazon also archives a year’s worth of logs which can be requested in the event of an incident.  The lack of this capability was a big sore spot for me when I looked at Azure Active Directory Domain Services.  It’s great to see Amazon has identified this critical use case.
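As a quick example of the programmatic route, something like the below works from a domain-joined machine (the domain controller name here is hypothetical):

# Pull the latest 10 Security Event Log entries from a managed domain controller
Get-WinEvent -ComputerName "GIW-DC01.geekintheweeds.com" -LogName Security -MaxEvents 10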

Last but definitely not least, let’s quickly cover the scalability of the service.  Follow Microsoft best practices and you can take full advantage of scaling horizontally with the click of a single button.  Be aware that the service only scales horizontally and not vertically.  If you have applications that don’t follow best practices and point to specific domain controllers or perform extremely inefficient LDAP queries (yes I’m talking to you developers who perform searches using front and rear-facing wildcards and use LDAP_MATCHING_RULE_IN_CHAIN filters) horizontal scaling isn’t going to help you.

Well folks that rounds out this entry into the series.  As we saw in the post Amazon has added key capabilities that Microsoft’s managed service is missing right now.  This makes AWS Managed Microsoft AD the more versatile of the two services and more than likely a better fit in almost any scenario where there is a reliance on Windows AD.

In my final post of the series I'll provide a comparison chart showing the differing capabilities of both AWS's and Microsoft's services.

See you next post!

AWS Managed Microsoft AD Deep Dive Part 6 – Schema Modifications

Yes folks, we're at the sixth post in the series on AWS Managed Microsoft AD (AWS Managed AD).  I've covered a lot of material over the series including an overview, how to setup the service, the directory structure, pre-configured security principals, group policies, and the delegated security model, how to configure LDAPS in the service and the implications of Amazon's design, and just a few days ago a look at the configuration of the security of the service in regards to protocols and cipher suites.  As per usual, I'd highly suggest you take a read through the prior posts in the series before starting on this one.

Today I'm going to look at the capabilities within AWS Managed AD for handling Active Directory schema modifications.  If you've read my series on Microsoft's Azure Active Directory Domain Services (AAD DS) you know that that service doesn't support schema modifications.  This makes Amazon's service the better offering in an environment where modifications to the standard Windows AD schema are a requirement.  However, like many capabilities in a managed Windows Active Directory (Windows AD) service, limitations are introduced when compared to a customer-run Windows Active Directory infrastructure.

If you've administered an Active Directory environment in a complex enterprise (managing users, groups, and group policies doesn't count) you're familiar with the butterflies that accompany the mention of a schema change.  Modifying the schema of Active Directory is similar to modifying the DNA of a living being.  Sure, you might have wonderful intentions but you may just end up causing the zombie apocalypse.  Modifications typically mean lots of application testing of the schema changes in a lower environment and a well-documented rollback and disaster recovery plan (you really don't want to try to recover from a failed schema change or have to back one out).

Given the above, you can see the logic of why a service provider offering a managed Windows AD service wouldn't want to allow schema changes.  However, there are very legitimate business justifications for expanding the schema (outside your standard AD/Exchange/Skype upgrades), such as applications that need to store additional data about a security principal or a business process that would be better facilitated with some additional metadata attached to an employee's AD user account.  This is the market share Amazon is looking to capture.

So how does Amazon provide for this capability in a managed Windows AD forest?  Amazon accomplishes it through a very intelligent method of performing such a critical activity: submitting an LDIF file through the AWS Directory Service console.  That's right folks, you (and probably more so Amazon) don't have to worry about you as the customer holding membership in a highly privileged group such as Schema Admins or absolutely butchering a schema change by modifying something you didn't intend to modify.

Amazon describes three steps to modifying the schema:

  1. Create the LDIF file
  2. Import the LDIF file
  3. Verify the schema extension was successful

Let’s review each of the steps.

In the first step we have to create an LDAP Data Interchange Format (LDIF) file.  Think of the LDIF file as a set of instructions to the directory, which in this case would be an add or modify of an object class or attribute.  I'll be using a sample LDIF file I grabbed from an Oracle knowledge base article.  This schema file will add the unixUserName and unixGroupName attributes and the unixNameInfo auxiliary class to the default Active Directory schema.

To complete step one I dumped the contents below into an LDIF file and saved it as schemamod.ldif.

dn: CN=unixUserName, CN=Schema, CN=Configuration, DC=example, DC=com
changetype: add
attributeID: 1.3.6.1.4.1.42.2.27.5.1.60
attributeSyntax: 2.5.5.3
isSingleValued: TRUE
searchFlags: 1
lDAPDisplayName: unixUserName
adminDescription: This attribute contains the object's UNIX username
objectClass: attributeSchema
oMSyntax: 27

dn: CN=unixGroupName, CN=Schema, CN=Configuration, DC=example, DC=com
changetype: add
attributeID: 1.3.6.1.4.1.42.2.27.5.1.61
attributeSyntax: 2.5.5.3
isSingleValued: TRUE
searchFlags: 1
lDAPDisplayName: unixGroupName
adminDescription: This attribute contains the object's UNIX groupname
objectClass: attributeSchema
oMSyntax: 27

dn:
changetype: modify
add: schemaUpdateNow
schemaUpdateNow: 1
-

dn: CN=unixNameInfo, CN=Schema, CN=Configuration, DC=example, DC=com
changetype: add
governsID: 1.3.6.1.4.1.42.2.27.5.2.15
lDAPDisplayName: unixNameInfo
adminDescription: Auxiliary class to store UNIX name info in AD
mayContain: unixUserName
mayContain: unixGroupName
objectClass: classSchema
objectClassCategory: 3
subClassOf: top

For step two I logged into the AWS Management Console and navigated to the Directory Service console.  Here we can see my AWS Managed AD instance with the domain name of geekintheweeds.com.


I then clicked the hyperlink on my Directory ID, which takes me into the console for the geekintheweeds.com instance.  Scrolling down shows a menu where a number of operations can be performed.  For the purposes of this blog post, we're going to focus on the Maintenance menu item.  Here we have the ability to leverage AWS Simple Notification Service (AWS SNS) to create notifications for directory changes, such as health changes when a managed Domain Controller goes down.  The second section is a pretty neat feature where we can snapshot the Windows AD environment to create a point-in-time copy of the directory we can restore.  We'll see this in action in a few minutes.  Lastly, we have the schema extensions section.


Here I clicked the Upload and update schema button, selected the LDIF file, and added a short description.  I then clicked the Update Schema button.


If you know me you know I love to try to break stuff.  If you look closely at the LDIF contents I pasted above you’ll notice I didn’t update the file with my domain name.  Here the error in the LDIF has been detected and the schema modification was cancelled.


I went through and made the necessary modifications to the file and tried again.  The LDIF processed through and the console updated to show the schema change had been initialized.


Hitting refresh on the browser window updates the status to show Creating Snapshot.  Yes folks, Amazon has baked into the schema update process a snapshot of the directory to provide a fallback mechanism in the event of your zombie apocalypse.  The snapshot creation process will take a while.


While the snapshot runs, let's discuss what Amazon is doing behind the scenes to process the LDIF file.  We first saw that it performs some light validation on the LDIF file; it then takes a snapshot of the directory, applies the changes to a single domain controller by selecting one as the schema master, removing it from directory replication, and applying the LDIF file using our favorite old school tool LDIFDE.EXE.  Lastly, the domain controller is added back into replication to replicate the changes to the other domain controller and complete the change.  If you've been administering Windows AD you'll recognize this follows the best practices recommended for schema updates over the years.
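For those who haven't touched LDIFDE.EXE in a while, an import along those lines would look something like the below.  The -c switch rewrites the sample DC=example,DC=com entries in the file to the real domain (this is a sketch of the tool's usage, not necessarily Amazon's exact invocation).

# Import the LDIF file, rewriting the sample domain in the file to the real one
ldifde -i -f schemamod.ldif -c "DC=example,DC=com" "DC=geekintheweeds,DC=com"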

Once the process is complete the console updates to show completion of the schema installation and the creation of the snapshot.
