AWS Managed Microsoft AD Deep Dive Part 6 – Schema Modifications

Yes folks, we’re at the sixth post in the series on AWS Managed Microsoft AD (AWS Managed AD).  I’ve covered a lot of material over the series including an overview, how to set up the service, the directory structure, pre-configured security principals, group policies, and the delegated security model, how to configure LDAPS in the service and the implications of Amazon’s design, and just a few days ago looked at the configuration of the security of the service in regards to protocols and cipher suites.  As per usual, I’d highly suggest you take a read through the prior posts in the series before starting on this one.

Today I’m going to look at the capabilities within AWS Managed AD to handle Active Directory schema modifications.  If you’ve read my series on Microsoft’s Azure Active Directory Domain Services (AAD DS) you know that that service doesn’t support schema modifications.  This makes Amazon’s service the better offering in an environment where modifications to the standard Windows AD schema are a requirement.  However, like many capabilities in a managed Windows Active Directory (Windows AD) service, limitations are introduced when compared to a customer-run Windows Active Directory infrastructure.

If you’ve administered an Active Directory environment in a complex enterprise (managing users, groups, and group policies doesn’t count) you’re familiar with the butterflies that accompany the mention of a schema change.  Modifying the schema of Active Directory is similar to modifying the DNA of a living being.  Sure, you might have wonderful intentions but you may just end up causing the zombie apocalypse.  Modifications typically mean lots of application testing of the schema changes in a lower environment and a well-documented rollback and disaster recovery plan (you really don’t want to try to recover from a failed schema change or have to back one out).

Given the above, you can see why a service provider offering a managed Windows AD service wouldn’t want to allow schema changes.  However, there are very legitimate business justifications for extending the schema (outside your standard AD/Exchange/Skype upgrades), such as applications that need to store additional data about a security principal or a business process that would be better facilitated with some additional metadata attached to an employee’s AD user account.  This is the market share Amazon is looking to capture.

So how does Amazon provide this capability in a managed Windows AD forest?  Amazon accomplishes it through a very intelligent method of performing such a critical activity: submitting an LDIF file through the AWS Directory Service console.  That’s right folks, you (and probably more so Amazon) don’t have to worry about holding membership in a highly privileged group such as Schema Admins or absolutely butchering a schema change by modifying something you didn’t intend to modify.

Amazon describes three steps to modifying the schema:

  1. Create the LDIF file
  2. Import the LDIF file
  3. Verify the schema extension was successful

Let’s review each of the steps.

In the first step we have to create an LDAP Data Interchange Format (LDIF) file.  Think of the LDIF file as a set of instructions to the directory, which in this case would be an add or modify of an object class or attribute.  I’ll be using a sample LDIF file I grabbed from an Oracle knowledge base article.  This schema file will add the attributes unixUserName and unixGroupName and the auxiliary class unixNameInfo to the default Active Directory schema.

To complete step one I dumped the contents below into an LDIF file and saved it as schemamod.ldif.

dn: CN=unixUserName, CN=Schema, CN=Configuration, DC=example, DC=com
changetype: add
attributeID: 1.3.6.1.4.1.42.2.27.5.1.60
attributeSyntax: 2.5.5.3
isSingleValued: TRUE
searchFlags: 1
lDAPDisplayName: unixUserName
adminDescription: This attribute contains the object's UNIX username
objectClass: attributeSchema
oMSyntax: 27

dn: CN=unixGroupName, CN=Schema, CN=Configuration, DC=example, DC=com
changetype: add
attributeID: 1.3.6.1.4.1.42.2.27.5.1.61
attributeSyntax: 2.5.5.3
isSingleValued: TRUE
searchFlags: 1
lDAPDisplayName: unixGroupName
adminDescription: This attribute contains the object's UNIX groupname
objectClass: attributeSchema
oMSyntax: 27

dn:
changetype: modify
add: schemaUpdateNow
schemaUpdateNow: 1
-

dn: CN=unixNameInfo, CN=Schema, CN=Configuration, DC=example, DC=com
changetype: add
governsID: 1.3.6.1.4.1.42.2.27.5.2.15
lDAPDisplayName: unixNameInfo
adminDescription: Auxiliary class to store UNIX name info in AD
mayContain: unixUserName
mayContain: unixGroupName
objectClass: classSchema
objectClassCategory: 3
subClassOf: top
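Amazon performs some validation on the LDIF before applying it, and the most common mistake is the one I deliberately left in the file above: the dn values still point at the wrong domain.  You can catch that yourself before uploading.  Here’s a minimal pre-flight sketch; the function and its normalization logic are my own, not part of any AWS tooling:

```python
# Pre-flight check: every dn in the LDIF (except the blank rootDSE dn used
# for schemaUpdateNow) should end with the schema naming context of YOUR domain.
def check_ldif_dns(ldif_text, domain_dn):
    """Return the list of dn values that don't target domain_dn's schema partition."""
    schema_nc = "CN=Schema,CN=Configuration," + domain_dn
    bad = []
    for line in ldif_text.splitlines():
        if not line.lower().startswith("dn:"):
            continue
        dn = line[3:].strip().replace(", ", ",")  # normalize spaces after commas
        if dn and not dn.lower().endswith(schema_nc.lower()):
            bad.append(dn)
    return bad

sample = "dn: CN=unixUserName, CN=Schema, CN=Configuration, DC=example, DC=com\nchangetype: add"
print(check_ldif_dns(sample, "DC=geekintheweeds,DC=com"))
# the example.com dn is flagged because it doesn't match the target domain
```

Running it against the sample file with my real domain name flags every dn, which is exactly the mistake the console catches for you.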

For step two I logged into the AWS Management Console and navigated to the Directory Service console.  Here we can see my AWS Managed AD instance with the domain name geekintheweeds.com.

6awsadds1.png

I then clicked the hyperlink on my Directory ID, which takes me into the console for the geekintheweeds.com instance.  Scrolling down shows a menu where a number of operations can be performed.  For the purposes of this blog post, we’re going to focus on the Maintenance menu item.  Here we have the ability to leverage AWS Simple Notification Service (AWS SNS) to create notifications for directory changes, such as health changes where a managed Domain Controller goes down.  The second section is a pretty neat feature where we can snapshot the Windows AD environment to create a point-in-time copy of the directory we can restore.  We’ll see this in action in a few minutes.  Lastly, we have the schema extensions section.

6awsadds2.png

Here I clicked the Upload and update schema button, selected the LDIF file, and added a short description.  I then clicked the Update Schema button.

6awsadds3.png

If you know me you know I love to try to break stuff.  If you look closely at the LDIF contents I pasted above you’ll notice I didn’t update the file with my domain name.  Here the error in the LDIF has been detected and the schema modification was cancelled.

6awsadds4.png

I went through and made the necessary modifications to the file and tried again.  The LDIF processed through and the console updated to show the schema change had been initialized.

6awsadds5.png

Hitting refresh on the browser window updates the status to show Creating Snapshot.  Yes folks, Amazon has baked into the schema update process a snapshot of the directory to provide a fallback mechanism in the event of your zombie apocalypse.  The snapshot creation process will take a while.

6awsadds6.png

While the snapshot is being created, let’s discuss what Amazon is doing behind the scenes to process the LDIF file.  We first saw that it performs some light validation on the LDIF file.  It then takes a snapshot of the directory and applies the changes to a single domain controller by selecting one as the schema master, removing it from directory replication, and applying the LDIF file using our favorite old school tool LDIFDE.EXE.  Lastly, the domain controller is added back into replication to replicate the changes to the other domain controller and complete the process.  If you’ve been administering Windows AD you’ll recognize this mirrors the recommended best practices for schema updates that have appeared over the years.
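Put together, the workflow can be sketched roughly as follows.  This is my own reconstruction from the observed behavior, not Amazon’s documented internals:

```
1. Validate the submitted LDIF (syntax, target naming context)
2. Snapshot the directory
3. Select one managed DC and make it the schema master
4. Disable replication to and from that DC
5. Apply the file against that DC with LDIFDE.EXE
6. Re-enable replication so the change flows to the other DC
7. Mark the schema extension complete in the console
```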

Once the process is complete the console updates to show completion of the schema installation and the creation of the snapshot.

6awsadds7.png

 

AWS Managed Microsoft AD Deep Dive Part 5 – Security

You didn’t think I was done with AWS Managed Microsoft AD yet, did you?  In this post I’m going to perform some tests to evaluate the protocols and cipher suites available for LDAPS as well as check the managed Domain Controllers’ support for NTLMv1 and the cipher suites supported for Kerberos.  I’ll be using the same testing mechanisms I used for my series on Microsoft Azure Active Directory Domain Services.

For those of you who are new to the series, I’ve been performing a deep dive review of AWS Managed Microsoft AD which is Amazon’s answer to a managed Windows Active Directory service.  In the first post I provided a high level overview of the service, in the second post I covered the setup of the service, the third post reviewed the directory structure, pre-configured security principals and group policies, and the delegated security model, and in the fourth entry I delved into how Amazon has managed to delegate configuration of LDAPS and the requirements that pop up due to their design choices.  I highly recommend you review those posts as well as my series on Microsoft Azure AD Domain Services if you’d like to compare the two services.

I’ve made a modification to my lab and have added another server named SERVER02 which will be running Linux.  The updated Visio looks like this.

labpart5

Server01 has been configured with the Windows Remote Server Administration Tools (RSAT) for Active Directory as well as holding the Active Directory Certificate Services (AD CS) role and being configured as an Enterprise root CA.  I’ve also done all the necessary configuration to distribute the certificates to the managed domain controllers and have successfully tested LDAPS.  Server02 will be used to test SSLv3 and NTLM.  I’ve modified the instance to use the domain controllers as DNS servers by overriding the DHCP settings as outlined in this article.

The first thing I’m going to do is test to see if SSLv3 has been disabled on the managed domain controllers.  Recall that the managed Domain Controllers are running Windows Server 2012 R2, which has SSLv3 enabled by default.  It can be disabled by modifying the registry as documented here.  Believe it or not, you can connect to the managed domain controllers’ registry via a remote registry connection.  Checking the registry location shows that the SSLv3 node hasn’t been created, which is indicative of SSLv3 still being enabled.

5awsadds1.png
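For reference, on a domain controller you do control, the change from the Microsoft guidance referenced above amounts to creating the missing SSL 3.0 node and disabling the protocol server-side (a reboot is required for it to take effect):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server]
"Enabled"=dword:00000000
```

Keep in mind this isn’t something you can change on the managed domain controllers, which is what makes the check worth running in the first place.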

To be sure, I checked it using the same method I used in my Azure AD Domain Services post, which is essentially compiling a version of OpenSSL that still supports SSLv3.  After the customized version was installed, I queried the domain controller over port 636 and, as you can see in the screenshot below, SSLv3 is still enabled.  Suffice to say this surprised me considering what I had seen so far in regards to the security of the service.  This will be a show stopper for some organizations in adopting the service, especially since, from what I observed, it isn’t configurable by the customer.

5awsadds2.png

So SSLv3 is enabled and presents a risk.  Have the cipher suites been hardened?  For this I’ll again use a tool developed by Thomas Pornin.  The options I’m using perform an exhaustive search across the typically offered cipher suites, space the connections out by 1 second, and start with a minimum of SSLv3.

5awsadds3.png

The results are what I expected and mimic the results I saw when testing Azure AD Domain Services, minus the support for SSLv3, which Microsoft has disabled in their managed offering.  The supported cipher suites look to be the out-of-the-box defaults for Server 2012 R2 and include RC4 and 3DES, which are ciphers with known vulnerabilities.  The inability to disable these ciphers might again be a show stopper for organizations with strict security requirements.
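If you’re running a similar scan against your own directory, a quick way to triage the output is to flag any suite that relies on a known-weak bulk cipher.  A minimal sketch (the suite names here are illustrative IANA-style names, not my actual scan output):

```python
# Flag TLS cipher suites that use known-weak bulk ciphers (RC4, 3DES, DES, NULL).
WEAK_MARKERS = ("RC4", "3DES", "DES_CBC", "NULL")

def weak_suites(suites):
    """Return the subset of suite names containing a weak bulk cipher."""
    return [s for s in suites if any(m in s for m in WEAK_MARKERS)]

scanned = [
    "TLS_RSA_WITH_RC4_128_SHA",
    "TLS_RSA_WITH_3DES_EDE_CBC_SHA",
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
]
print(weak_suites(scanned))
```

Anything the function returns is a suite your auditors will want a story for.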

The Kerberos protocol is a critical component of Windows Active Directory, providing the glue that holds the service together, including (but in no way limited to) backing a user’s authentication to a domain-joined machine, the single sign-on experience, and the ability to form trusts with other forests.  Given the importance of the protocol, it’s important to ensure it’s backed by strong ciphers.  The ciphers supported by a Windows Active Directory environment are configurable and can be checked by looking at the msDS-SupportedEncryptionTypes attribute of a domain controller object.

I next pulled up a domain controller object in ADUC and reviewed the attribute.  The attribute on the managed domain controllers has a value of 28, which is the default for Windows Server 2012 R2.  The value translates to support of the following cipher suites:

  • RC4_HMAC_MD5
  • AES128_CTS_HMAC_SHA1_96
  • AES256_CTS_HMAC_SHA1_96

These are the same cipher suites supported by Microsoft’s Azure AD Domain Services service.  In this case both vendors have left the configuration to the defaults.
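msDS-SupportedEncryptionTypes is a bit field, so the value of 28 can be decoded by testing the flag bits.  A quick sketch; the decoding function is mine, but the flag values are the ones Microsoft documents for the attribute:

```python
# msDS-SupportedEncryptionTypes is a bit field; these are the documented flag values.
ENC_TYPES = {
    0x01: "DES_CBC_CRC",
    0x02: "DES_CBC_MD5",
    0x04: "RC4_HMAC_MD5",
    0x08: "AES128_CTS_HMAC_SHA1_96",
    0x10: "AES256_CTS_HMAC_SHA1_96",
}

def decode_enc_types(value):
    """Return the names of the Kerberos encryption types enabled in the bit field."""
    return [name for bit, name in ENC_TYPES.items() if value & bit]

print(decode_enc_types(28))  # 28 = 0x04 + 0x08 + 0x10
```

A hardened environment that has dropped RC4 would show a value of 24 instead of 28.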

Lastly, to emulate my testing of Azure AD Domain Services, I tested support for NTLMv1.  By default Windows Server 2012 R2 supports NTLMv1 due to requirements for backwards compatibility.  Microsoft has long recommended disabling NTLMv1 due to the documented issues with the security of the protocol.  Sadly there are a large number of applications and devices in use in enterprises which still require NTLMv1.

To test the AWS managed domain controllers I’m going to use Samba’s smbclient package on SERVER02.  I’ll use the client to connect to the domain controller’s share from SERVER02 using NTLM.  I first installed the smbclient package by running:

yum install samba-client

smbclient enforces the use of NTLMv2 by default, so I needed to make a modification to the global section of the smb.conf file by adding client ntlmv2 auth = no.  This option disables NTLMv2 in smbclient and forces it to use NTLMv1.
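For reference, the change amounts to this minimal excerpt of smb.conf; the rest of the global section stays as your distribution shipped it:

```
[global]
        client ntlmv2 auth = no
```

After saving the file, smbclient will offer NTLMv1 when authenticating.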

5awsadds4.png

In order to see whether or not the client was using NTLMv1 when connecting to the domain controllers, I started a packet capture using tcpdump before initiating a connection with the smbclient.

5awsadds6.png

I then transferred the packet capture over to my Windows box with WinSCP, opened the capture with WireShark, and navigated to the packet containing the Session Setup Request.  In the parsed capture we don’t see an NTLMv2 Response which means NTLMv1 was used to authenticate to the domain controller indicating NTLMv1 is supported by the managed domain controllers.

 

5awsadds5

 

So what can we take from the findings of this analysis?

  1. Amazon has left the secure transport protocols to the defaults which means SSLv3 is supported.
  2. Amazon has left the cipher suites to the defaults which means both RC4 and 3DES cipher suites are supported for both LDAPS and Kerberos.

I’d really like to see Amazon address the support for SSLv3 as soon as possible.  There is no reason I can see why it shouldn’t be shut off by default.  Similar to my requests to Microsoft, I’d like to see Amazon make the supported cipher suites configurable via the AWS Management Console.  These two changes would allow organizations with strict security requirements, such as those in the public sector, to utilize the service without introducing significant risk (and audit headaches).

In my next post I’ll demonstrate how the service can be leveraged to provide Windows Active Directory service to on-premises machines or machines in another public cloud as well as exploring how to create a forest trust with the service.

See you next post!

 

AWS Managed Microsoft AD Deep Dive Part 4 – Configuring LDAPS

I’m back again with another entry in my deep dive into AWS Managed Microsoft Active Directory (AD).  So far I’ve provided an overview of the service, covered how to configure the service, and analyzed the Active Directory default configuration such as the directory structure, security principals, password policies, and group policy setup by Amazon for new instances.  In this post I’m going to look at the setup of LDAPS and how Amazon supports configuration of it in the delegated model they’ve setup for the service.

Those of you that have supported a Windows AD environment will be quite familiar with the wonders and sometimes pain of the Lightweight Directory Access Protocol (LDAP).  Prior to modern directories such as AWS Cloud Directory and Azure Active Directory, the LDAP protocol served critical roles by providing both authentication and a method of working with data stored in directory data stores such as Windows AD.  For better or worse the protocol is still relevant today when working with Windows AD for both of the above capabilities (less so for authentication these days if you stay away from backwards-thinking vendors).  LDAP servers listen on ports 389 and 636, with 389 carrying traffic in the clear (although there are exceptions where data is encrypted in transit, such as Microsoft’s usage of Kerberos encryption or the use of StartTLS (credit to my friend Chris Jasset for catching my omission of StartTLS)) and 636 (LDAPS) providing encryption in transit via an SSL tunnel (hopefully not anymore) or a TLS tunnel.

Windows AD maintains that pattern and serves up the content of its data store over LDAP on ports 389 and 636, and additionally on ports 3268 and 3269 for global catalog queries.  In the assume-breach days we’re living in, we as security professionals want to protect our data as it flows over the network, which means we’ll more often than not (exceptions are again the Kerberos encryption usage mentioned above) be using LDAPS over ports 636 or 3269.  To provide that secure tunnel the domain controllers need to be set up with a digital certificate issued by a trusted certificate authority (CA).  Domain Controllers have unique requirements for the certificates they use.  If you’re using Active Directory Certificate Services (AD CS), Microsoft takes care of providing the certificate template for you.

So how do you provision a certificate to a Domain Controller’s certificate store when you don’t have administrative privileges, as is the case for a managed service like AWS Managed Active Directory?  For Microsoft Azure Active Directory Domain Services (AAD DS) the public certificate and private key are uploaded via a web page in the Azure Portal, which is a solid way of doing it.  Amazon went in a different direction and instead takes advantage of certificate autoenrollment.  If you’re not familiar with autoenrollment, take a read through this blog.  In short, it’s an automated way to distribute certificates and eliminate some of the overhead of manually going through the typical certificate lifecycle, which may contain manual steps.

If we bounce back to the member server in my managed domain, open the Group Policy Management Console (GPMC), and navigate to the Settings tab of the AWS Managed Active Directory Policy, we see that autoenrollment has been enabled on the domain controllers.  This setting explains why Amazon requires that a member server joined to the managed domain be configured running AD CS.  Once the AD CS instance is set up, the CA has been configured as either a root or subordinate CA, and a proper template is enabled for autoenrollment, the domain controllers will request the appropriate certificate and begin using it for LDAPS.

4awsadds1.png

If you’ve ever worked with AD CS you may be asking yourself how you’ll be able to install AD CS in a domain where you aren’t a domain administrator, when the Microsoft documentation specifically states you need to be a member of the Enterprise Admins and root domain’s Domain Admins groups.  Well folks, that is where the AWS Delegated Enterprise Certificate Authority Administrators group comes into play.  Amazon has modified the forest to delegate the appropriate permissions to install AD CS in a domain environment.  If we navigate to CN=Public Key Services, CN=Services, CN=Configuration using ADSIEdit and view the security of the container, we see this group has been granted full permissions over the node, allowing the necessary objects to be populated underneath it.

4awsadds2.png

I found it interesting that the instructions provided by Amazon for enabling LDAPS state the Domain Controller certificate template needs to be modified to remove the Client Authentication EKU.  I’d be interested in knowing the reason for modifying the Domain Controller certificate.  If I had to guess, it’s to prevent the domain controller from using the certificate outside of LDAPS, such as for mutual authentication during smart card logon.  Notice from this article that domain controllers only require the Server Authentication EKU when a certificate is used solely to secure LDAPS.

I’ve gone ahead and installed AD CS on SERVER01 as an Enterprise root CA and, thanks to the delegation model, the CA is provisioned with all the necessary goodness in CN=Public Key Services.  I also created the new certificate template following the instructions from Amazon.  The last step is to configure the traffic flow such that the managed domain controllers can contact the CA to request a certificate.  The Amazon instructions actually have a typo in them.  In Step 4 it instructs you to modify the security group for your CA and to create a new inbound rule allowing all traffic from the source of your CA’s AWS security group.  The correct source is actually the security group automatically created by Amazon that is associated with the managed Active Directory instance.

At this point you’ll need to wait a few hours for the managed domain controllers to detect the new certificates available for autoenrollment.  Mine actually only took about an hour to roll the certificates out.

4awsadds3.png

To test the service I opened LDP.EXE and established a secure session over port 636 and all worked as expected.

4awsadds4.png

Since I’m a bit OCD I also pulled the certificate using openssl to validate it was issued by my CA.  As seen in the screenshot below, the certificate was issued by geekintheweeds-CA, which is the CA I set up earlier.

4awsadds5.png

Beyond the instructions Amazon provides, you’ll also want to give some thought as to how you’re going to handle revocation checks. Keep in mind that by default AD CS stores revocation information in AD. If you have applications configured to check for revocation remember to ensure those apps can communicate with the domain controllers over port 389 so design your security groups with this in mind.

Well folks that will wrap up this post. Now that LDAPS is configured, I’ll begin the tests looking at the protocols and ciphers supported when accessing LDAPS as well as examining the versions of NTLM supported and the encryption algorithms supported with Kerberos.

See you next post!

 

AWS Managed Microsoft AD Deep Dive Part 3 – Active Directory Configuration

Welcome back to my series on AWS Managed Microsoft Active Directory (AD).  In my first post I provided an overview of the service.  In the second post I covered the setup of an AWS Managed Microsoft AD directory instance and demoed the seamless domain-join of a Windows EC2 instance.  I’d recommend you reference back to those posts before you jump into this one.  I’ll also be referencing my series on Azure Active Directory Domain Services (AAD DS), so you may want to take a read through that as well, with the understanding it was written in February of 2018 and there may be new features in public preview.

For this post I’m going to cover the directory structure, security principals, group policy objects, and permissions which are created when a new instance of the managed service is spun up.  I’ll be using a combination of PowerShell and the Active Directory-related tools from the Remote Server Administrator Tools.  For those of you who like to dig into the weeds, this will be a post for you.  Shall we dive in?

Let’s start with the basics.  Opening the Active Directory Users and Computers (ADUC) Microsoft Management console (MMC) as the “Admin” account I created during setup displays the directory structure.  Notice that there are three organizational units (OU) that have been created by Amazon.

2awsadds1

The AWS Delegated Groups OU contains the groups Amazon has configured that have been granted delegated rights to perform typical administrative activities in the directory.  A number of these activities would have required Domain Admin or Enterprise Admin rights by default, which obviously isn’t an option within a managed service where you want to keep the customer from blowing up AD.  Notice that Amazon has properly scoped each of the groups to allow for potential management from another trusted domain.

2awsadds2.png

The group names speak for themselves but there are a few I want to call out.  The AWS Delegated Administrators group is the most privileged customer group within the service and has been nested into all of the groups except for the AWS Delegated Add Workstations To Domain Users, which makes sense since the AWS Delegated Administrators group has full control over the customer OU as we will see soon.

2awsadds3.png

The AWS Delegated Kerberos Delegation Administrators group allows members to configure account-based Kerberos delegation.  Yes, yes, I know Resource-Based Constrained Delegation is far superior, but there may be use cases where the Kerberos library being used doesn’t support it.  Take note that Azure Active Directory Domain Services (AAD DS) doesn’t support account-based Kerberos constrained delegation.  This is a win for Amazon in regards to flexibility.

Another group which popped out to me was AWS Delegated Sites and Services.  Members of this group are able to rename the Default-First-Site.  You would do this if you wanted it to match a site within your existing on-premises Windows AD to shave off a few seconds of user authentication by skipping the site discovery process.

The AWS Delegated System Management Administrators grants members full control over the domainFQDN\System\System Management container.  Creation of data in the container is a requirement for applications like Microsoft SCOM and Microsoft SCCM.

There is also the AWS Delegated User Principal Name Suffix Administrators group, which grants members the ability to create explicit user principal names.  This could pop up as a requirement for something like synchronizing to Office 365 where your domain DNS name isn’t publicly routable and you have to go the alternate UPN direction.  Additionally we have the AWS Delegated Enterprise Certificate Authority Administrators group, which allows for deployment of a Microsoft CA via Active Directory Certificate Services (AD CS) by granting full control over CN=Public Key Services in the Configuration partition.  We’ll see why AD CS is important for AWS Managed Microsoft AD later in the series.  I also like the AWS Delegated Deleted Object Lifetime Administrators group, which grants members the ability to set the lifetime for objects in the Active Directory Recycle Bin.

The next OU we have is the AWS Reserved OU.  As you can imagine, this is Amazon’s support staff’s OU.  Within it we have the built-in Administrator.  Unfortunately Amazon made the same mistake Microsoft did with this account by making it a god account with membership in Domain Admins, Enterprise Admins, and Schema Admins.  With the amount of orchestration going into the solution I’d have liked to see those roles either broken up across multiple accounts or no account holding standing membership in such privileged roles via a privileged access management system or red forest design.  The AWS Application and Service Delegated Group has a cryptic description (along with typos).  I poked around the permissions and see it has write access to the servicePrincipalName attribute of Computer objects within the OU.  Maybe this comes into play with WorkDocs or WorkMail integration?  Lastly we have the AWSAdministrators group, which has been granted membership in the AWS Delegated Administrators group, granting it all the privileges the customer administrator account has.  My best guess is this group is used by Amazon for supporting the customer’s environment.

2awsadds4

The last OU we’ll look at is the customer OU, which takes on the NetBIOS name of the domain.  This is where the model for this service is similar to the model for AAD DS in that the customer has full control over an OU.  There are two OUs created within it named Computers and Users.  Amazon has set up the Computers OU and the Users OU as the default locations for newly domain-joined computer objects and new user objects.  The only object pre-created in these OUs is the customer Admin account, which is stored in the Users OU.  Under this OU you are free to do whatever needs doing.  It’s a similar approach to the one Microsoft took with AAD DS, but contained one OU deep vs allowing for creation of new OUs at the base like in AAD DS.

2awsadds5.png

Quickly looking at Sites and Subnets displays a single default site (which can be renamed as I mentioned earlier).  Amazon has defined the entirety of the available private IP address space to account for any network whether it be on-prem or within a VPC.

2awsadds6.png
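Since the subnet definitions cover the private address space, any client with an RFC 1918 address will resolve to the default site.  A quick sketch using Python’s ipaddress module; the three ranges are the standard RFC 1918 blocks, while the exact subnet objects Amazon creates are whatever the screenshot shows:

```python
import ipaddress

# The three RFC 1918 private ranges, which together cover the entire
# private address space the default site's subnets account for.
PRIVATE_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def covered_by_default_site(ip):
    """True if a client at this IP would match a subnet tied to the default site."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in PRIVATE_RANGES)

print(covered_by_default_site("192.168.10.25"))  # on-prem client -> True
print(covered_by_default_site("8.8.8.8"))        # public address -> False
```

The practical upshot: whether the client lives on-prem or in a VPC, it will always find the site and authenticate against the managed domain controllers.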

As for the domain controllers, both are running Windows Server 2012 R2, with the forest and domain functional levels at Windows Server 2012 R2.

2awsadds7.png

Shifting over to group policy, Amazon has made some interesting choices and has taken extra steps to further secure the managed domain controllers.  As you can see from the screenshot below, there are four custom group policy objects (GPOs) created by default by Amazon.  Before we get into them, let’s first cover the Default Domain Policy (DDP) and Default Domain Controllers Policy (DDCP) GPOs.  If you look at the image below, you’ll notice that while the DDCP exists, it isn’t linked to the Domain Controllers OU.  This is an interesting choice, and not one that I would have made; I’d be very curious as to the reasoning behind the decision to remove the link.  Best practice would have been to leave it linked and create additional GPOs that override its settings with your more/less restrictive settings.  The additional GPOs would be set with a lower link order, which would give them precedence over the DDCP.  At least they’re not modifying the default GPOs, so that’s something. 🙂

Next up we have the DDP, which is straight out of the box minus one change: the setting Network Security: Force logoff when logon hours expire.  By default this setting is disabled; Amazon has enabled it to improve security.

The ServerAdmins GPO linked at the top of the domain has one setting enabled, which adds the AWS Delegated Server Administrators group to the BUILTIN\Administrators group on all domain-joined machines.  This setting is worth paying attention to because it explains the blue icon next to the AWS Reserved OU.  Inheritance has been blocked on that OU, probably to avoid the settings in the ServerAdmins GPO being applied to any Computer objects created within it.  The Default Domain Policy has then been directly linked to the OU.

Next up we have the other GPO linked to the AWS Reserved OU named AWS Reserved Policy:User.  The policy itself has a few User-related settings intended to harden the experience for AWS support staff including turning on screensavers and password protecting them and preventing sharing of files within profiles.  Nothing too crazy.

Moving on to the Domain Controllers OU, we see two linked policies: AWS Managed Active Directory Policy and TimePolicyPDC.  The TimePolicyPDC GPO simply sets the NTP settings on the domain controllers, such as configuring the DCs to use Amazon NTP servers.  The AWS Managed Active Directory Policy is an interesting one.  It contains all of the settings you’d expect from the Default Domain Controllers Policy GPO (which makes sense since that one isn’t linked) in addition to a bunch of settings hardening the system.  I compared many of the settings to the STIG for Windows Server 2012 / 2012 R2 Domain Controllers and it matches very closely.  If I had to guess, that is what Amazon is using as a baseline, which might make sense since Amazon touts the managed service as PCI and HIPAA compliant with a few minor changes on the customer end for password length/history and account lockout.  We’ll cover how those changes are possible in a few minutes.

Compare this to Microsoft's AAD DS, which is straight-up defaults with no ability to further harden the service.  I have no idea whether that's on Microsoft's roadmap or whether they're hardening the systems in another manner, but I imagine seeing GPOs that enforce required settings will make your auditors that much happier.  Another +1 for Amazon.

2awsadds8.png

So how would a customer harden a password policy or configure account lockout?  If you recall from my blog on AAD DS, the password policy was a nightmare.  There was a zero-character required password length, making complexity dictate your length (3 characters, woohoo!).  If you're like me, the thought of administrators being able to set three-character passwords on a service that can be exposed to the Internet via its LDAPS endpoint (did I mention that is a terrible design choice?) sounds like a recipe for disaster.  There was also no way to set up fine-grained password policies to correct this poor design decision.

Amazon has taken a much saner and more security-sensitive approach.  Recall from earlier that there was a group named AWS Delegated Fine Grained Password Policy Administrators.  Yes folks, in addition to keeping the Default Domain Policy at the out-of-the-box defaults (better than nothing), Amazon also gives you the ability to customize five pre-configured fine-grained password policies.  With these policies you can set the password and lockout settings required by your organization.  A big +1 for Amazon here.

2awsadds9

That wraps up this post.  As you can see from this entry, Amazon has done a stellar job building security into this managed service, allowing some flexibility for the customer to further harden the systems, all the while still successfully delegating commonly performed administrative activities back to the customer.

In my next post I’ll walk through the process to configure LDAPS for the service.  It’s a bit funky, so it’s worth an entry.  Once LDAPS is up and running, I’ll follow it up by examining the protocols and cipher suites supported by the managed service.

See you next post!

AWS Managed Microsoft AD Deep Dive Part 2 – Setup

Today I’ll continue my deep dive into AWS Managed Microsoft AD.  In the last blog post I provided an overview of the reasons an organization would want to explore a managed service for Windows Active Directory (Windows AD).  In this post I’ll provide an overview of my lab environment and demo how to set up an instance of AWS Managed Microsoft AD and seamlessly domain-join a Windows EC2 instance.

Let’s dive right into it.

Let’s first cover what I’ll be using as a lab.  Here I’ve set up a virtual private cloud (VPC) with default tenancy, which is a requirement to use AWS Managed Microsoft AD.  The VPC has four subnets configured within it named intranet1, intranet2, dmz1, and dmz2.  The subnets intranet1/dmz1 and intranet2/dmz2 provide the minimum of two availability zones, which is another requirement of the service.  I’ve created a route table that routes traffic destined for IP ranges outside the VPC to an Internet Gateway and applied that route table to both the intranet1 and intranet2 subnets.  This will allow me to RDP to the EC2 instances I create.  Later in the series I’ll configure VPN connectivity with my on-premises lab to demonstrate how the managed AD can be used on-premises.  Below is a simple Visio diagram of the lab.

1awsadds1.png
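Since the two-availability-zone requirement is the thing most likely to bite you when scripting lab builds, here is a minimal Python sanity check one could run before creating the directory.  The subnet IDs and AZ names below are made up to mirror the lab diagram, and the function is purely illustrative; it is not part of any AWS tooling.

```python
def spans_two_azs(subnets):
    """Return True if the subnets cover at least two availability zones,
    the minimum AWS Managed Microsoft AD requires."""
    return len({s["AvailabilityZone"] for s in subnets}) >= 2


# Subnet layout mirroring the lab diagram (IDs are placeholders).
lab_subnets = [
    {"SubnetId": "subnet-0aa1", "Name": "intranet1", "AvailabilityZone": "us-east-1a"},
    {"SubnetId": "subnet-0aa2", "Name": "intranet2", "AvailabilityZone": "us-east-1b"},
]

print(spans_two_azs(lab_subnets))
```

With intranet1 and intranet2 sitting in different AZs, the check passes and the directory can be created.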

To create a new instance of AWS Managed Microsoft AD, I’ll be using the AWS Management Console.  After successfully logging in, I navigate to the Services menu and select the Directory Service link under the Security, Identity & Compliance section as seen below.

1awsadds2.png

The Directory Service page then loads which is a launching pad for configuration of the gamut of AWS Directory Services including AWS Cloud Directory, Simple AD, AD Connector, Amazon Cognito, and of course AWS Managed Microsoft AD.  Any directory instance that you’ve created would appear in the listing to the right.  To create a new instance I select the Set up Directory button.

1awsadds3.png

The Set up a directory page loads and I’m presented with the options to create an instance of AWS Managed Microsoft AD, Simple AD, AD Connector, or an Amazon Cognito user pool.  Before I continue, I’ll provide the quick and dirty on the latter three options.  Simple AD is actually Samba made to emulate some of the capabilities of Windows Active Directory.  The AD Connector acts as a sort of proxy to interact with an existing Windows Active Directory; I plan a future blog series on that one.  Amazon Cognito is Amazon’s modern authentication solution (looks great for B2C) providing OpenID Connect, OAuth 2.0, and SAML services to applications.  That one will warrant a future blog series as well.  For this series we’ll be selecting the AWS Managed Microsoft AD option and clicking the Next button.

1awsadds4.png

A new page loads where we configure the directory information.  Here I’m given the option to choose between a standard or enterprise offering of the service.  Beyond storage, I’ve been unable to find any specifications for the EC2 instances Amazon is managing in the background for the domain controllers.  I have to imagine Enterprise means more than just 16GB of storage and would include additional memory and CPU.  For the purposes of this series, I’ll be selecting Standard Edition.

Next I’ll provide the key configuration details for the forest, which include the fully qualified domain name (FQDN) for the forest I want created as well as optionally specifying the NetBIOS name.  The Admin password set here is used for the delegated administrator account Amazon creates for the customer.  Make sure this password is securely stored, because if it’s lost Amazon has no way of recovering it.

1awsadds5.png

After clicking the Next button I’m prompted to select the virtual private cloud (VPC) I want the service deployed to.  The VPC used must include at least two subnets that are in different availability zones.  I’ll be using the intranet1 and intranet2 subnets shown in my lab diagram earlier in the post.

1awsadds6.png

The next page that loads provides the details of the instance that will be provisioned.  Once I’m satisfied the configuration is correct I select the Create Directory button to spin up the service.

1awsadds7.png

Amazon states it takes around 20 minutes or so to spin up the instance, but my experience was more like 30-45 minutes.  The main Directory Services page displays the status of the directory as Creating.  As part of this creation a new Security Group is created which acts as a firewall for the managed domain controllers.  Unlike some organizations that try to put firewalls between domain-joined clients and domain controllers, Amazon has included all the necessary flows, saving you a ton of troubleshooting with packet captures.

1awsadds8
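For those who would rather skip the console clicks, the same directory can be created through the Directory Service API.  Below is a minimal Python sketch that assumes the boto3 SDK (the client would come from boto3.client("ds")) and uses placeholder VPC and subnet IDs; treat it as an outline rather than production code.

```python
def create_managed_ad(ds_client, name, password, vpc_id, subnet_ids,
                      edition="Standard"):
    """Create an AWS Managed Microsoft AD directory and return its ID.

    ds_client is expected to be a Directory Service client, e.g.
    boto3.client("ds").  The subnet IDs must sit in two different AZs.
    """
    response = ds_client.create_microsoft_ad(
        Name=name,                  # forest FQDN, e.g. geekintheweeds.com
        Password=password,          # password for the delegated Admin account
        Edition=edition,            # "Standard" or "Enterprise"
        VpcSettings={"VpcId": vpc_id, "SubnetIds": subnet_ids},
    )
    return response["DirectoryId"]


# Usage (placeholder IDs):
#   import boto3
#   directory_id = create_managed_ad(
#       boto3.client("ds"), "geekintheweeds.com", "SuperSecret1!",
#       "vpc-0123456789abcdef0", ["subnet-0aa1", "subnet-0aa2"])
```

The call returns immediately with the directory ID while the directory itself sits in Creating for the 30-45 minutes mentioned above.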

One of the neat features offered with this service is the ability to seamlessly domain-join Windows EC2 instances during creation.  Before that feature can be leveraged, an AWS Identity and Access Management (IAM) role needs to be set up with the AmazonEC2RoleforSSM policy attached to it.  AWS IAM is by far my favorite feature of AWS.  At a very high level, you can think of AWS IAM as the identity service for the management of AWS resources.  It’s insanely innovative and flexible in its ability to integrate with modern authentication solutions and in how granular you can be in defining rights and permissions to AWS resources.  I could do multiple series just covering the basics (which I plan to do in the future), but to progress this entry let me briefly explain AWS IAM roles.  Think of an AWS IAM role as a unique security principal similar to a user but without any credentials.  The role is assigned a set of rights and permissions which AWS refers to as a policy.  The role is then assumed by a human (such as a federated user) or non-human (such as an EC2 instance), granting the entity the rights and permissions defined in the policy attached to the role.  In this scenario the EC2 instance I create will assume a role with the AmazonEC2RoleforSSM policy attached.  This policy grants a number of rights and permissions within AWS Simple Systems Manager (SSM), which for the Microsoft-heavy folks is a scaled-down SCCM.  The service requires this role to orchestrate the domain-join upon instance creation.
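To make the role mechanics concrete, here is the trust (assume-role) policy document the console generates behind the scenes when you pick EC2 as the trusted service.  It is what lets an EC2 instance assume the role; the AmazonEC2RoleforSSM permissions policy is then attached to the role separately.

```python
import json

# The trust policy for an EC2 service role: it allows the EC2 service to
# assume the role on behalf of an instance, after which the attached
# AmazonEC2RoleforSSM policy supplies the SSM permissions that orchestrate
# the seamless domain-join.
ec2_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(ec2_trust_policy, indent=2))
```

The console walkthrough below produces exactly this trust relationship; scripting it only matters once you start templating environments.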

To create the role I’ll open back up the Services menu and select IAM from the Security, Identity & Compliance menu.

1awsadds9.png

The IAM dashboard will load which provides details as to the number of users, groups, policies, roles, and identity providers I’ve created.  From the left-hand menu I’ll select the Roles link.

1awsadds10.png

The Role page then loads and displays the Roles configured for my AWS account. Here I’ll select the Create Role button to start the role creation process.

1awsadds11.png

The Create role page loads and prompts me to select a trusted entity type.  I’ll be using this role for EC2 instances, so I’ll select the AWS service option and choose EC2 as the service that will use the role.  Once both options are selected I select the Next: Permissions button.

1awsadds12.png

Next up we need to assign a policy to the role.  We can either create a new policy or select an existing one.  For seamless domain-join with AWS Managed Microsoft AD, EC2 instances must use the AmazonEC2RoleforSSM policy.  After selecting the policy I select the Next: Review button.

1awsadds13.png

On the last page I’ll name the role, set a description, and select the Create role button. The role is then provisioned and available for use.

1awsadds14.png

Navigating back to the Directory Services page, I can see that the geekintheweeds.com instance is up and running. This means we can now create some EC2 instances and seamlessly join them to the domain.

1awsadds15.png

EC2 instance creation is documented endlessly on the web, so I won’t waste time walking through it beyond showing the screenshot below, which displays the options for seamless domain-join.  The EC2 instance created will be named SERVER01.

1awsadds16.png

After a few minutes the instance is ready to go.  I start Remote Desktop on my client machine and attempt a connection to the EC2 instance using the Admin user and the credentials I set for the AD domain.

1awsadds17.png

Lo and behold, I’m logged into the EC2 instance using my domain credentials!

1awsadds18.png

As you can see, setup of the service and EC2 instances is extremely simple and could be made even simpler if we tossed out the GUI and leveraged CloudFormation templates to seamlessly spin up entire environments at the push of a button.

We covered a lot of content in this entry so I’ll close out here.  In the next entry I’ll examine the directory structure Amazon creates including the security principals and key permissions.

See you next post!

 

AWS Managed Microsoft AD Deep Dive Part 1 – Overview

Welcome back my fellow geeks!

Earlier this year I did a deep dive into Microsoft’s managed Active Directory service, Microsoft Azure Active Directory Domain Services (AAD DS).  What I found was a service in its infancy showing some promise, but very far from being an enterprise-ready service.  I thought it would be fun to look at the take on a managed Microsoft Active Directory (or, as Microsoft refers to it these days, Windows Active Directory) from Amazon (which I’ll refer to as Amazon Web Services (AWS) for the rest of the entries in this series).

Unless your organization popped up in the last year or two and went the whole serverless route, you are still managing operating systems that require centralized authentication, authorization, and configuration management.  You also more than likely have a ton of legacy/classic on-premises applications that require legacy protocols such as Kerberos and LDAP.  Your organization is likely using Windows Active Directory (Windows AD) to provide these capabilities along with Windows AD’s basic domain name system (DNS) service and centralized identity data store.

It’s unrealistic to assume you’re going to shed all those legacy applications prior to beginning your journey into the public cloud.  I mean heck, shedding the ownership of data centers alone can be a huge cost driver.  Organizations are then faced with the challenge of how to do Windows AD in the public cloud.  Is it best to extend an existing on-premises forest into the public cloud?  What about creating a resource forest with a trust?  Or maybe even a completely new forest with no trust?  Each of these options have positives and negatives that need to be evaluated against organizational requirements across the business, technical, and legal arenas.

Whatever choice you make, it means additional infrastructure in the form of more domain controllers.  Anyone who has managed Windows AD in an enterprise knows how much overhead managing domain controllers can introduce.  Let me clarify that managing Windows AD does not mean opening Active Directory Users and Computers (ADUC) and creating user accounts and groups.  I’m talking about examining Performance Monitor AD counters and LDAP debug logs to properly size domain controllers, configuring security controls to comply with PCI and HIPAA requirements or aligning with DISA STIGs, managing updates and patches, and troubleshooting the challenges those bring, which requires extensive knowledge of how Active Directory works.  In this day and age, IT staff need to be less focused on overhead such as this and more focused on working closely with business units to drive and execute upon business strategy.  That, folks, is where managed services shine.

AWS offers an extensive catalog of managed services and Windows AD is no exception.  Included within the AWS Directory Services offerings there is a powerful offering named Amazon Web Services Directory Service for Microsoft Active Directory, or more succinctly AWS Managed Microsoft AD.  It provides all the wonderful capabilities of Windows AD without all of the operational overhead.  An interesting fact is that the service has been around since December 2015, in comparison to Microsoft’s AAD DS, which only went into public preview in Q3 2017.  This head start has done AWS a lot of favors and, in this engineer’s opinion, has established AWS Managed Microsoft AD as the superior managed Windows AD service over Microsoft’s AAD DS.  We’ll see why as the series progresses.

Over the course of this series I’ll be performing a similar analysis as I did in my series on Microsoft AAD DS.  I’ll also be examining the many additional capabilities AWS Managed Microsoft AD provides and demoing some of them in action.  My goal is that by the end of this series you understand the technical limitations that come with the significant business benefits of leveraging a managed service.

See you next post!

Azure AD Password Protection – Hybrid Deep Dive

Welcome back fellow geeks.  Today I’m going to be looking at a brand new capability Microsoft announced entered public preview this week.  With the introduction of Hybrid Azure Active Directory Password Protection, Microsoft continues to extend the protection it has baked into its Identity-as-a-Service (IDaaS) offering, Azure Active Directory (AAD).

If you’ve administered Windows Active Directory (AD) in an environment with a high security posture, you’re very familiar with the challenges of ensuring the use of “good” passwords.  In the on-premises world we’ve typically used the classic Password Policies that come out of the box with AD, which provide the bare minimum.  Some of you may even have leveraged third-party password filters to restrict the usage of commonly used passwords such as the classic “P@$$w0rd”.  While the third-party add-ins filled a gap, they also introduced additional operational complexity (ever tried to troubleshoot a misbehaving password filter?  Not fun) and compatibility issues.  Additionally, the filters that block “bad” passwords tend to use a static data set, or a data set that has to be manually updated and distributed.

In comes Microsoft’s Hybrid Azure Active Directory Password Protection to save the day.  Here we have a solution that comes directly from the vendor (no more third-party nightmares) that uses the power of telemetry and security data collected from Microsoft’s cloud to block the use of some of the most commonly used passwords (extending that even further with the use of fuzzy logic) as well as custom passwords you can provide to the service yourself.  In a refreshing turn of events, Microsoft has finally stepped back from the insanity (yes I’m sorry it’s insanity for most organizations) of requiring Internet access on your domain controllers.

After I finished reading the announcement this week I was immediately interested in taking a peek behind the curtains on how the solution worked.  Custom password filters have been around for a long time, so I’m not going to focus on that piece of the solution.  Instead I’m going to look more closely at two areas, deployment and operation of the solution.  Since I hate re-creating existing documentation (and let’s face it, I’m not going to do it nearly as well as those who do it for a living) I’ll be referencing back to Microsoft documentation heavily during this post so get your dual monitors powered up.

I’ll be using my Geek In The Weeds tenant for this demonstration.  The tenant is synchronized to AAD with Azure Active Directory Connect (AADC) and federated with Active Directory Federation Services (AD FS).  The tenant is equipped with some Office 365 E5 and Enterprise Mobility + Security E5 licenses.  Since I’ll need some Windows Servers for this, I’ll be using the lab seen in the diagram below.

1aadpp1.png

The first thing I needed to do was verify that my AAD tenant was configured for Azure Active Directory Password Protection.  For that I logged into the portal as a global administrator and selected the Azure Active Directory blade.  Scrolling down to the Security section of the menu shows an option named Authentication Methods.

1aadpp2.png

After selecting the option a new blade opens with only one menu item, Password Protection.  Since it’s the only option there, it opens right up.  Here we can see the configuration options available for Azure Active Directory Password Protection and Smart Lockout.  Smart Lockout is at this time a feature specific to AAD, so I’m not going to cover it; you can read more about that feature in the Microsoft announcement.  The three options that we’re interested in for this post are within the Custom Banned Passwords and Password protection for Windows Server Active Directory sections.

1aadpp3.png

The custom banned passwords section allows organizations to add additional blocked passwords beyond the ones Microsoft provides.  This is helpful if organizations have a good grasp on their users’ behavior and have some common words they want to block to keep users from creating passwords using those words.  Right now it’s limited to 1,000 words with one word per line.  You can copy and paste from another document as long as what you paste is a single word per line.

I’d like to see Microsoft lift the cap of 1000 words as well as allowing for programmatic updating of this list.  I can see some real cool opportunities if this is combined with telemetry obtained from on-premises.  Let’s say the organization has some publicly facing endpoints that use a username and password for authentication.  That organization could capture the passwords used during password spray and brute force attacks, record the number of instances of their use, and add them to this list as the number of instances of those passwords reach certain thresholds.  Yes, I’m aware Microsoft is doing practically the same thing (but better) in Azure AD, but not everything within an organization uses Azure AD for authentication.  I’d like to see Microsoft allow for programmatic updates to this list to allow for such a use case.
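To sketch out that idea, here is a hypothetical Python outline of the threshold logic: count the passwords observed during spray and brute-force attempts and surface the ones worth adding to the banned list.  The function name and threshold value are my own invention, purely illustrative.

```python
from collections import Counter

def banned_candidates(observed, threshold=10, cap=1000):
    """From passwords observed in spray/brute-force attempts, return those
    seen at least `threshold` times, most frequent first, capped at the
    service's current 1,000-word list limit."""
    counts = Counter(observed)
    hot = [pw for pw, n in counts.most_common() if n >= threshold]
    return hot[:cap]
```

Feeding the output into the custom banned list would still be a manual paste today; a programmatic update API is exactly the gap described above.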

Let’s enter two terms in the custom banned password list for this demonstration.  Let’s use geekintheweeds and journeyofthegeek.  We’ll do some testing later to see if the fuzzy matching capabilities extend to the custom banned list.

Next up in the configuration options we have Password protection for Windows Server Active Directory, which will be my focus.  Notice that the Enable password protection on Windows Server Active Directory option is set to Yes by default.  My assumption was that this option would control whether or not I could register the Azure AD Password Protection proxy service with Azure AD; we’ll test that later in the post.  For now let’s set that to No, because it’s always interesting to see how things fail.

I’m going to leave the Mode option at Audit for now.  This is Microsoft’s recommendation out of the gates.  It gives you time to get a handle on user behavior to determine how disruptive enforcement will be to the user experience, an idea as to how big of a security issue this is for your organization, and an idea as to the scope of communication and education you’ll need to do within your organization.

1aadpp4

There are two components we’ll need to install within the on-premises infrastructure.  On the domain controllers we’ll be installing the Azure AD Password Protection DC Agent Service and the DC Agent Password Filter dynamic-link library (DLL).  On the member server we’ll be installing the Azure AD Password Protection Proxy Service.  The Microsoft documentation explains what these services do at a high level.  In short, the DC Agent Password Filter functions like any other password filter and captures the clear-text password as it is being changed.  It sends the password to the DC Agent Service, which validates the password according to the locally cached copy of the password policy it has gotten from Azure AD.  The DC Agent Service also requests new copies of the password policy by sending the request to the Proxy Service running on the member server, which reaches out to Azure AD on the DC Agent Service’s behalf.  The new policies are stored in the SYSVOL folder so all domain controllers have access to them.  I sourced this diagram directly from Microsoft, so full credit goes to the product team for producing a wonderful visual representation of the process.

1aadpp5

The necessary installation files are sourced from the Microsoft Download Center.  After downloading the two files I distributed the DC Agent to my sole domain controller and the Proxy Service file to the member server.

Per Microsoft’s instructions we’ll be installing the Proxy Service first.  I’d recommend installing multiple instances of the Proxy Service in a production environment to provide for failover.  During the public preview you can deploy a maximum of two proxy agents.

The agent installation could be pushed by your favorite management tool if you so choose.  For the purposes of the blog I’ll be installing it manually.  Double-clicking the MSI file initiates the installation as seen below.

1aadpp6.png

The installation takes under a minute and then we receive confirmation the installation was successful.

1aadpp7.png

Opening up the Services Microsoft Management Console (MMC) shows the new service has been registered and is running.  The service runs as Local System.

1aadpp8.png

Before I proceed further with the installation, I’m going to start up Fiddler under the Local System security context using PsExec.  For that we open an elevated command prompt and run the command below.  The -s parameter opens the application under the LOCAL SYSTEM user context and the -i parameter makes the window interactive.

1aadpp9.png

Additionally, we’ll set up another instance of Fiddler that will run under the security context of the user that will be performing the PowerShell cmdlets below.  When running multiple instances of Fiddler, different ports need to be used, so agree to the default port suggested by Fiddler and proceed.

Now we need to configure the agent.  To do that we’ll use the PowerShell module that is installed with the proxy agent.  We’ll use a cmdlet from the module to register the proxy with Azure Active Directory.  We’ll need a non-MFA-enforced global admin account for this (the public preview doesn’t support MFA-enforced global admins for registration).  The account running the command also needs to be a domain administrator in the Active Directory domain (we’ll see why in a few minutes).

The cmdlet runs successfully.  This tells us the Enable password protection on Windows Server Active Directory option doesn’t prevent registration of the proxy service.  If we bounce back to the Fiddler capture we can see a few different web transactions.

1aadpp10

First we see a non-authenticated HTTP GET sent to https://enterpriseregistration.windows.net/geekintheweeds.com/discover?api-version=1.6.  For those of you who have worked with device registration, this endpoint will be familiar.  The endpoint returns a JSON response with a variety of endpoint information.  The data we care about is seen in the screenshot below.  I’m not going to bother hiding any of it since it’s a publicly accessible endpoint.

1aadpp11.png

Breaking this down we can see a security principal identifier, a resource identifier indicating the device registration service, and a service endpoint which indicates the Azure Active Directory Password Protection service.  What this tells us is Microsoft is piggybacking off the existing Azure Active Directory Device Registration Service for onboarding of the proxy agents.
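Since the discovery endpoint is public and unauthenticated, you can poke at it yourself.  Here is a small Python helper that builds the same URL seen in the Fiddler trace; actually fetching and parsing the JSON is left as a comment, since the response schema beyond the fields noted above isn’t documented.

```python
def discovery_url(domain, api_version="1.6"):
    """Build the unauthenticated discovery endpoint the proxy hits first.

    This is the same Azure AD Device Registration Service endpoint used for
    device registration; the URL format comes straight from the Fiddler
    trace above."""
    return ("https://enterpriseregistration.windows.net/"
            f"{domain}/discover?api-version={api_version}")


print(discovery_url("geekintheweeds.com"))

# To actually pull the JSON (it's a public endpoint):
#   import urllib.request, json
#   doc = json.load(urllib.request.urlopen(discovery_url("geekintheweeds.com")))
```

Swap in your own verified domain name to see the endpoints your tenant advertises.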

Next up, an authenticated HTTP POST is made to https://enterpriseregistration.windows.net/aadpasswordpolicy/<tenantID>/proxy?api-version=1.0.  The bearer token for the global admin is used to authenticate to the endpoint.  Here we have the Proxy Service posting a certificate signing request (CSR) and providing its fully qualified domain name (FQDN).  The CSR tells us the machine must have provisioned a new private/public key pair, and once this transaction is complete we should have a signed certificate identifying the proxy.

1aadpp12

The endpoint responds with a JSON response.

1aadpp13.png

If we base64 decode the value in the SignedProxyCertificateChain attribute, we see another signed JSON response.  Decoding the response and dropping it into Visual Studio shows us three attributes of note: TenantID, CreationTime, and CertificateChain.

1aadpp14.png

Dropping the value of the CertificateChain attribute into Notepad and saving it as a certificate yields the result below. Note the alphanumeric string after the AzureADBPLRootPolicyCert in the issued to section below.

1aadpp15.png
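The Notepad step can be scripted.  Below is a short Python sketch that wraps the CertificateChain value in PEM headers so Windows will open it in the certificate viewer; it assumes the attribute holds the bare base64-encoded DER blob, which is an assumption on my part.

```python
import base64
import textwrap

def b64_der_to_pem(der_b64: str) -> str:
    """Wrap a base64-encoded DER certificate in PEM headers so it can be
    saved with a .cer extension and opened with the Windows certificate
    viewer (the Notepad step above, scripted).  Assumes the input is the
    bare base64 DER blob with no existing headers."""
    body = "\n".join(textwrap.wrap(der_b64.replace("\n", ""), 64))
    return f"-----BEGIN CERTIFICATE-----\n{body}\n-----END CERTIFICATE-----\n"


# Usage, given the CertificateChain value in cert_chain_value:
#   with open("proxycert.cer", "w") as f:
#       f.write(b64_der_to_pem(cert_chain_value))
```

PEM requires 64-character body lines, which is why the helper re-wraps whatever line breaks the source value carried.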

My first inclination after receiving the certificate was to look into the machine certificate stores.  I did that and they were empty.  After a few minutes of confusion I remembered the documentation stating that registration of the proxy is a one-time activity, that it requires domain admin in the forest root domain, and a quick blurb about a service connection point (SCP) that needs to be created once per forest.  That was indication enough for me to pop open ADSIEDIT and check out the Configuration directory partition.  Sure enough, we see that a new container has been added to the CN=Services container named Azure AD Password Protection.

1aadpp16.png

Within the container there is a container named Forest Certs and a service connection point named Proxy Presence.  At this point the Forest Certs container is empty and the object itself doesn’t have any interesting attributes set.  The Proxy Presence service connection point equally doesn’t have any interesting attributes set beyond the keywords attribute, which is set to an alphanumeric string of 636652933655882150_5EFEAA87-0B7C-44E9-B25C-4F665F2E0807.  Notice the bolded part of the string has the same pattern as what was in the certificate included in the CertificateChain attribute.  I tried deleting the Azure AD Password Protection container and re-registering to see if these two strings would match, but they didn’t.  So I’m not sure what the purpose of that string is yet, just that it probably has some relationship to the certificate referenced above.

The next step in the Proxy Service configuration process is to run the Register-AzureADPasswordProtectionForest cmdlet.  This cmdlet again requires that the Azure identity being used is a member of the global admins role and that the security principal running the cmdlet has membership in the domain administrators group.  The cmdlet takes a few seconds to run and completes successfully.

Opening up Fiddler shows additional conversation with Azure AD.

1aadpp17.png

Session 12 is the same unauthenticated HTTP GET to the discovery endpoint that we saw above.  Session 13 is another authenticated HTTP POST using the global admin’s bearer token to the same endpoint we saw after running the last cmdlet.  What differs is the information posted to the endpoint.  Here we see another CSR being posted along with the DNS name; however, the attributes are now named ForestCertificateCSR and ForestFQDN.

1aadpp18.png

The endpoint again returns a certificate chain but instead using the attribute SignedForestCertificateChain.

1aadpp19.png

The contents of the attribute look very similar to what we saw during the last cmdlet.

1aadpp20.png

Grabbing the certificate out of the CertificateChain attribute, pasting it into Notepad, and saving as a certificate yields a similar certificate.

1aadpp21.png

Bouncing back to ADSIEDIT and refreshing the view, I saw that the Proxy Presence SCP didn’t change.  We do see a new SCP was created under the Forest Certs container.  Opening up the SCP, we have a keywords attribute of {DC7F004B-6D59-46BD-81D3-BFAC1AB75DDB}.  I’m not sure what the purpose of that is yet.  The other attribute we now have set is the msDS-Settings attribute.

1aadpp22.png

Editing the msDS-Settings attribute within the GUI shows that it has no values which obviously isn’t true.  A quick Google search on the attribute shows it’s up to the object to store what it wants in there.

1aadpp23.png

Because I’m nosy, I wanted to see the entirety of the attribute, so in comes PowerShell.  Using a simple Get-ADObject query, I dumped the contents of the attribute to a text file.

1aadpp26.png

The result is a 21,000+ character string.  We’ll come back to that later.

At this point I was convinced there was something I was missing.  I started up Wireshark, put a filter on to capture LDAP and LDAPS traffic, and restarted the proxy service.  LDAP traffic was there all right over port 389, but it was encrypted via Kerberos (Microsoft’s typical habit).  This meant a packet capture wouldn’t give me what I wanted, so I needed to be a bit more creative.  To get around the encryption I needed to capture the LDAP queries on the domain controller as they were processed.  To do that I used a script.  The script is quite simple in that it enables LDAP debug logging for a specific period of time with settings that capture every query made to the device.  It then parses the event log entries created in the Directory Services event log and creates a pipe-delimited file.
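For the curious, the parsing half of such a script looks roughly like this in Python.  The field labels (Client, Starting node, Filter, Attribute selection) are assumptions about the layout of the LDAP debug events, which varies by OS version, so treat the patterns as a starting point rather than a drop-in parser.

```python
import csv
import io
import re

# Fields assumed to appear as "Label: value" lines in each exported event.
FIELDS = ["Client", "Starting node", "Filter", "Attribute selection"]

def parse_event(text):
    """Pull the interesting fields out of one exported event's message text."""
    row = {}
    for field in FIELDS:
        m = re.search(rf"{re.escape(field)}:\s*(.+)", text)
        row[field] = m.group(1).strip() if m else ""
    return row

def to_pipe_delimited(events):
    """Emit one pipe-delimited row per event, with a header line."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS, delimiter="|")
    writer.writeheader()
    writer.writerows(parse_event(e) for e in events)
    return buf.getvalue()
```

The pipe delimiter matters here because LDAP filters are full of commas and parentheses that would wreck an ordinary CSV.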

1aadpp25

The query highlighted in red is what caught my eye.  Here we can see the service performing an LDAP query against Active Directory for any objects one level under the GIWSERVER5 computer object and requesting the objectClass, msDS-Settings, and keywords attributes.  Let’s replicate that query in PowerShell and see what the results look like.

1aadpp26.png

The results, which are too lengthy to paste here, show that the computer object has two service connection point objects.  Here is a screenshot from the Active Directory Users and Computers MMC that makes it a bit easier to see.

1aadpp27.png

In the keywords attribute we have a value of {EBEFB703-6113-413D-9167-9F8DD4D24468};Domain=geekintheweeds.com.  Again, I’m not sure what the purpose of the keywords attribute value is.  The msDS-Settings value is again far too large to paste.  However, when I dump the value into the TextWizard in Fiddler, base64 decode it, and paste it into Visual Studio, I get a nicely formatted signed JSON Web Token.

1aadpp28.png
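Decoding the token by hand follows the standard JWT structure: three base64url-encoded segments (header, payload, signature) joined by dots.  A minimal Python sketch, using a small hypothetical stand-in token since the real msDS-Settings blob is far too large to reproduce here:

```python
import base64
import json

def b64url_decode(segment):
    """Base64url-decode a JWT segment, restoring the stripped padding."""
    segment += "=" * (-len(segment) % 4)
    return base64.urlsafe_b64decode(segment)

# Build a hypothetical stand-in token just to exercise the decoding;
# the real token's header carries the full certificate chain in x5c.
header = base64.urlsafe_b64encode(
    json.dumps({"alg": "RS256", "x5c": ["MIIC..."]}).encode()
).rstrip(b"=").decode()
token = header + ".eyJmb28iOiJiYXIifQ.c2ln"

decoded_header = json.loads(b64url_decode(token.split(".")[0]))
print(decoded_header["alg"])
```

The padding restoration matters: JWT segments drop the trailing `=` characters that Python’s base64 decoder normally requires, which is exactly why a naive decode of the raw value fails.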

If we grab the value in the x509 certificate (x5c) header and save it to a certificate file, we see the token is signed using the same certificate we received when we registered the proxy using the PowerShell cmdlets mentioned earlier.

1aadpp29.png
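Per RFC 7515, each x5c entry is a base64-encoded DER certificate, so turning it into something most tools will open is just a matter of wrapping it in PEM armor.  A short sketch, with a hypothetical truncated value standing in for the real entry:

```python
import base64
import textwrap

def x5c_to_pem(x5c_entry):
    """Wrap a JWT x5c header entry (base64-encoded DER) in PEM armor."""
    body = "\n".join(textwrap.wrap(x5c_entry, 64))
    return ("-----BEGIN CERTIFICATE-----\n"
            + body +
            "\n-----END CERTIFICATE-----\n")

# Hypothetical truncated value; a real x5c entry is a full certificate.
pem = x5c_to_pem("MIICmzCCAYMCAQAwDQYJKoZIhvcNAQELBQAw")
print(pem.splitlines()[0])
```

The 64-character line wrapping matches the conventional PEM layout, so the resulting file opens directly in the Windows certificate viewer or with openssl x509.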

Based upon what I’ve found in the directory, at this point I’m fairly confident the private key for the public/private key pair isn’t saved within the directory.  So my next step was to poke around the proxy agent installation directory at C:\Program Files\Azure AD Password Protection Proxy\.  I went directly to the logs directory and saw the following logs.

1aadpp30.png

Opening up the most recent RegisterProxy log shows a line towards the bottom that was of interest.  We can see that the encrypted proxy cert is saved to a folder on the computer running the proxy agent.

1aadpp31.png

Opening the \Data directory shows the following three ppfxe files.  I’ve never come across the ppfxe file extension before, so I didn’t have a way of even attempting to open it.  A Google search on the file extension comes up with nothing.  I can only assume it is some type of modified PFX file.

1aadpp32.png

Did you notice the RegisterForest log file in the screenshot above?  I was curious about that one, so I popped it open.  Here are the lines that caught my eye.

1aadpp33.png

Here we can see the certificate requested during the Register-AzureADPasswordProtectionForest cmdlet had the private key merged back into the certificate; it was then serialized to JSON, encoded in UTF-8, encrypted, base64 encoded, and written to the msDS-Settings attribute in the directory.  That jibes with what we observed earlier: dumping that attribute and base64 decoding it gave us nothing decipherable.
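The pipeline described in the log can be mirrored in a few lines of Python.  The cipher the proxy service actually uses isn’t documented, so the encrypt step below is an identity stand-in just to show the shape of the transformation:

```python
import base64
import json

def pack_settings(cert_blob, encrypt):
    """Mirror the observed pipeline: JSON -> UTF-8 -> encrypt -> base64.

    `encrypt` is a stand-in: the real cipher used by the proxy service
    is undocumented, so any callable taking and returning bytes works.
    """
    payload = json.dumps({"certificate": cert_blob}).encode("utf-8")
    return base64.b64encode(encrypt(payload)).decode("ascii")

# Identity "encryption" purely to exercise the pipeline shape.
packed = pack_settings("MIIC...", lambda b: b)
print(base64.b64decode(packed).decode("utf-8"))
```

Note the ordering: because the encryption happens before the base64 encoding, decoding the attribute value only peels off the outer base64 layer, which is exactly why the dump we took earlier was indecipherable.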

Let’s summarize what we’ve done and what we’ve learned at this point.

  • The Azure Active Directory Password Protection Proxy Service has been installed on GIWSERVER5.
  • The cmdlet Register-AzureADPasswordProtectionProxy was run successfully.
  • When Register-AzureADPasswordProtectionProxy was run, the following actions took place:
    • GIWSERVER5 creates a new public/private keypair
    • The proxy service performs discovery against Azure AD to locate the Password Protection endpoints for the tenant
    • The proxy service opens a connection to those endpoints, leveraging the capabilities of the Azure AD Device Registration Service, and submits a CSR that includes the public key it generated
    • The endpoint generates a certificate using the public key the proxy service provided and returns it to the proxy service computer
    • The proxy service combines the private key with the public key certificate and saves it to the C:\Program Files\Azure AD Password Protection Proxy\Data directory as a PPFXE file
    • The proxy service connects to a Windows Active Directory domain controller over LDAP port 389, using Kerberos for encryption, and creates the following containers and service connection points:
      • CN=Azure AD Password Protection,CN=Configuration,DC=XXX,DC=XXX
      • CN=Forest Certs,CN=Azure AD Password Protection,CN=Configuration,DC=XXX,DC=XXX
        • Writes keyword attribute
      • CN=Proxy Presence,CN=Azure AD Password Protection,CN=Configuration,DC=XXX,DC=XXX
      • CN=AzureADPasswordProtectionProxy,CN=GIWSERVER5,CN=Computers,DC=XXX,DC=XXX
        • Writes a signed JSON Web Token to the msDS-Settings attribute
        • Writes the keywords attribute (can’t figure out what this does yet)
  • The cmdlet Register-AzureADPasswordProtectionForest was run successfully
  • When Register-AzureADPasswordProtectionForest was run, the following actions took place:
    • GIWSERVER5 creates a new public/private keypair
    • The proxy service performs discovery against Azure AD to locate the Password Protection endpoints for the tenant
    • The proxy service opens a connection to those endpoints, leveraging the capabilities of the Azure AD Device Registration Service, and submits a CSR that includes the public key it generated
    • The endpoint generates a certificate using the public key the proxy service provided and returns it to the proxy service computer
    • The proxy service combines the private key with the public key certificate and saves it to the C:\Program Files\Azure AD Password Protection Proxy\Data directory as a PPFXE file
    • The proxy service connects to a Windows Active Directory domain controller over LDAP port 389, using Kerberos for encryption, and creates the following containers:
      • CN=<UNIQUE IDENTIFIER>,CN=Forest Certs,CN=Azure AD Password Protection,CN=Configuration,DC=XXX,DC=XXX
        • Writes to msDS-Settings the encoded and encrypted certificate it received back from Azure AD, including the private key
        • Writes the keywords attribute (not sure on this one either)

Based upon observation and review of the logs the proxy service creates when registering, I’m fairly certain the private key and certificate provisioned during the Register-AzureADPasswordProtectionProxy cmdlet are used by the proxy to make queries to Azure AD for updates to the banned passwords list.  Instead of storing the private key and certificate in the machine’s certificate store like most applications do, it stores them in a PPFXE file.  I’m going to assume there is some symmetric key stored somewhere on the machine that is used to unlock that information, but I couldn’t determine it with Rohitab API Monitor or Sysinternals Procmon.

I’m going to theorize that the private key and certificate provisioned during the Register-AzureADPasswordProtectionForest cmdlet will be used by the DC agents to communicate with the proxy service.  This would make sense because the private key and certificate are stored in the directory, which makes for easy access by the domain controllers.  In my next post I’ll do a deep dive into the DC agent, so I’ll have a chance to gather more evidence to determine whether the theory holds.

On a side note, I attempted to capture the web traffic between the proxy service and Azure AD once the service was installed and registered.  Unfortunately, the proxy service doesn’t honor the system proxy even when it’s configured in the global machine.config.  I confirmed that the public preview of the proxy service doesn’t support the use of a web proxy.  Hopefully we’ll see that when it reaches general availability.

Have a great week.