Active Directory Federation Services – SQL Attribute Store

Hi everyone,

I recently had a use case come across my desk where I needed to do a SAML integration with a SaaS provider.  The provider required a number of pieces of information about the user beyond the standard unique identifier.  The additional information would be used to direct the user to the appropriate instance of the SaaS application.

In the past fifty or so SAML integrations I’ve done, I’ve been able to source my data directly from the Active Directory store.  This was because Active Directory was authoritative for the data or there was a reliable data synchronization process in place such that the data was being sourced from an authoritative source.  In this scenario, neither option was available.  Thankfully the data source I needed to hit to get the missing data exposed a subset of its data through a Microsoft SQL view.

I have done a lot in AD FS over the past few years from design to operational support of the service, but I had never sourced information from a data source hosted via MS SQL Server.  I reviewed the Microsoft documentation available via TechNet and found it to be lacking.  Further searches across MS blogs and third-party blogs provided a number of “bits” of information but no real end-to-end guide.  Given the lack of solid content, I decided it would be fun to put one together, so off to Azure I went.

For the lab environment, I built the following:

  • Active Directory forest name – geekintheweeds.com
  • Server 1 – SERVERDC (Windows Server 2016)
    • Active Directory Domain Services
    • Active Directory Domain Naming Services
    • Active Directory Certificate Services
  • Server 2 – SERVER-ADFS (Windows Server 2016)
    • Active Directory Federation Services
    • Microsoft SQL Server Express 2016
  • Server 3 – SERVER-WEB (Windows Server 2016)
    • Microsoft IIS

On SERVER-WEB I installed the sample claims application referenced here.  Make sure to follow the instructions in the blog to save yourself some headaches.  There are plenty of blogs out there that discuss building a lab consisting of the services outlined above, so I won’t cover those details.

On SERVER-ADFS I created a database named hrdb within the same instance as the AD FS databases.  Within the database I created a table named dbo.EmployeeInfo with 5 columns named givenName, surName, email, userName, and role, all of data type nvarchar(MAX).  The userName column contained the unique values I used to relate a user object in Active Directory back to a record in the SQL database.
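
For anyone who wants to reproduce the table without clicking through SQL Server Management Studio, here is a minimal sketch of the table creation and a sample row using Python and pyodbc.  The server and instance names and the sample values are assumptions from my lab, so adjust them to match your own environment.

    # Minimal sketch: create the hrdb table used by the attribute store and seed a sample row.
    # Assumes pyodbc and the Microsoft ODBC driver are installed and that the hrdb database
    # already exists on the local SQL Express instance (server/instance names are lab assumptions).
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=SERVER-ADFS\\SQLEXPRESS;DATABASE=hrdb;Trusted_Connection=yes;"
    )
    cursor = conn.cursor()

    # Table layout described above: five nvarchar(MAX) columns keyed informally by userName.
    cursor.execute("""
        CREATE TABLE dbo.EmployeeInfo (
            givenName nvarchar(MAX),
            surName   nvarchar(MAX),
            email     nvarchar(MAX),
            userName  nvarchar(MAX),
            [role]    nvarchar(MAX)
        )
    """)

    # Seed one sample record; userName must match the user's sAMAccountName in Active Directory.
    cursor.execute(
        "INSERT INTO dbo.EmployeeInfo (givenName, surName, email, userName, [role]) "
        "VALUES (?, ?, ?, ?, ?)",
        ("Homer", "Simpson", "hsimpson@geekintheweeds.com", "hsimpson", "employee"),
    )
    conn.commit()
    conn.close()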


Once the database was created and populated with some sample data and the appropriate Active Directory user objects were created, it was time to begin to configure the connectivity between AD FS and MS SQL.  Before we go creating the new attribute store, the AD FS service account needs appropriate permissions to access the SQL database.  I went the easy route and gave the service account the db_datareader role on the database, although the CONNECT and SELECT permissions would have probably been sufficient.
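
For reference, the permission grant I did in the GUI can also be expressed in T-SQL.  The sketch below pushes it through the same pyodbc connection pattern; the service account name (GEEKINTHEWEEDS\svc_adfs) is an assumption from my lab, so substitute whatever your AD FS service account actually is.

    # Sketch of granting the AD FS service account read access to hrdb.
    # The login/account name below is a lab assumption; db_datareader is the role I used,
    # though CONNECT and SELECT grants on the table would likely be enough.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=SERVER-ADFS\\SQLEXPRESS;DATABASE=hrdb;Trusted_Connection=yes;",
        autocommit=True,
    )
    cursor = conn.cursor()
    for statement in (
        "CREATE LOGIN [GEEKINTHEWEEDS\\svc_adfs] FROM WINDOWS",
        "CREATE USER [svc_adfs] FOR LOGIN [GEEKINTHEWEEDS\\svc_adfs]",
        "ALTER ROLE db_datareader ADD MEMBER [svc_adfs]",
    ):
        cursor.execute(statement)
    conn.close()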


After the service account was given appropriate permissions, the next step was to configure the SQL database as an attribute store in AD FS.  To do that I opened the AD FS management console, expanded the Service node, right-clicked on Attribute Stores, and selected the Add Attribute Store option.  I used mysql as the store name and selected the SQL option from the drop-down box.  My SQL was a bit rusty, so the connection string took a few tries to get right.


I then created a new claim description to hold the role information I was pulling from the SQL database.


The last step in the process was to create some claim rules to pull data from the SQL database.  Pulling data from a MS SQL datastore requires the use of custom claim rules.  If you’re unfamiliar with the custom claim language, the following two links are two of the best I’ve found on the net:

The first claim rule I created was a rule to query Active Directory via LDAP for the SAM-Account-Name attribute.  This is the attribute I would be using to query the SQL database for the user’s unique record.
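
If you want to check what value the Active Directory attribute store will hand to the SQL query, you can pull the same attribute yourself.  Here is a minimal sketch using the third-party ldap3 package; the domain controller name, bind account, and sample user are assumptions from my lab.

    # Sketch: pull the sAMAccountName attribute for a test user, the same value the first
    # claim rule passes along to the SQL attribute store. Requires ldap3 (pip install ldap3);
    # hostnames and credentials below are lab placeholders.
    from ldap3 import Server, Connection, NTLM

    server = Server("serverdc.geekintheweeds.com")
    conn = Connection(server, user="GEEKINTHEWEEDS\\administrator",
                      password="ReplaceMe", authentication=NTLM, auto_bind=True)

    conn.search(
        search_base="DC=geekintheweeds,DC=com",
        search_filter="(userPrincipalName=hsimpson@geekintheweeds.com)",
        attributes=["sAMAccountName"],
    )
    print(conn.entries[0]["sAMAccountName"])
    conn.unbind()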


Next up was my first custom claim rule, where I queried the SQL database for the record whose userName column matched the value of the SAM-Account-Name pulled in the earlier step, and requested back the value in the email column of the record that was returned. Since I wanted to do some transforming of the information in a later step, I added the claim to the incoming claim set.
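
If you want to sanity check the data and your connectivity outside of AD FS before writing the claim rule, the same lookup the rule performs can be run directly.  This is a minimal sketch under the assumptions of my lab (server, table, and column names from above); note the attribute store itself takes an ADO.NET-style connection string rather than the ODBC-style one used here.

    # Sketch: reproduce the lookup the custom claim rule performs against the SQL attribute store,
    # i.e. find the email (and role) for a given sAMAccountName value.
    import pyodbc

    def lookup_employee(sam_account_name: str):
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=SERVER-ADFS\\SQLEXPRESS;DATABASE=hrdb;Trusted_Connection=yes;"
        )
        cursor = conn.cursor()
        # userName is the column that relates the SQL record back to the Active Directory user.
        cursor.execute(
            "SELECT email, [role] FROM dbo.EmployeeInfo WHERE userName = ?",
            (sam_account_name,),
        )
        row = cursor.fetchone()
        conn.close()
        return (row.email, row.role) if row else None

    print(lookup_employee("hsimpson"))  # sample account from the seeded data above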


I then issued another query for the value in the role column.


Finally, I performed some transforms to verify I was getting the data I wanted.  I converted the email address claim type to the Common Name type and the custom role claim description I referenced above to the out-of-the-box role claim description.  I then hit the endpoint for the sample claim app and… VICTORY!


Simple, right?  Well, it would be if this information had been documented within a single link.  Either way, I had some good lessons learned that I will share with you now:

  • Do NOT copy and paste claim rules.  I chased a number of red herrings trying to figure out why my claim rule was being rejected.  More than likely the copy/paste added an invalid character I was unable to see.
  • Brush up on your MS SQL before you attempt this.  My SQL was super rusty and it caused me to go down a number of paths which wasted time.  Thankfully, my coworker Jeff Lee was there to add some brain power and help work through the issues.

Before I sign off, I want to thank my coworker Jeff Lee for helping out on this one.  It was a great learning experience for both of us.

Thanks and have a wonderful Memorial Day!

Azure AD Pass-through Authentication – How does it work? Part 2

Welcome back. Today I will be wrapping up my deep dive into Azure AD Pass-through authentication. If you haven’t already, take a read through part 1 for a background into the feature. Now let’s get to the good stuff.

I used a variety of tools to dig into the feature. Many of you will be familiar with the Sysinternals tools, WireShark, and Fiddler; you may be less familiar with the Rohitab API Monitor. This tool is extremely fun if you enjoy digging into the weeds of the libraries a process uses, the methods it calls, and the inputs and outputs. It’s a bit buggy, but check it out and give it a go.

As per usual, I built up a small lab in Azure with two Windows Server 2016 servers, one running AD DS and one running Azure AD Connect. When I installed Azure AD Connect I configured it to use pass-through authentication and to not synchronize the password. Selecting this option installs the MS Azure Active Directory Application Proxy connector on the server. A client certificate is also issued to the server and is stored in the Computer certificate store.

In order to capture the conversations and the API calls from the MS Azure Active Directory Application Proxy (ApplicationProxyConnectorService.exe) I set the service to run as SYSTEM. I then used PSEXEC to start both Fiddler and the API Monitor as SYSTEM as well. Keep in mind there is mutual authentication occurring during some of these steps between the ApplicationProxyConnectorService.exe and Azure, so the public-key client certificate will need to be copied to the following directories:

  • C:\Windows\SysWOW64\config\systemprofile\Documents\Fiddler2
  • C:\Windows\System32\config\systemprofile\Documents\Fiddler2

So with the basics of the configuration outlined, let’s cover what happens when the ApplicationProxyConnectorService.exe process is started.

  1. Using WireShark I observed the following DNS queries looking for an IP in order to connect to an endpoint for the bootstrap process of the MS AAD Application Proxy (a short resolver sketch reproducing these lookups follows this list):
    DNS Query for TENANT ID.bootstrap.msappproxy.net
    DNS Response with CNAME of cwap-nam1-runtime.msappproxy.net
    DNS Response with CNAME of cwap-nam1-runtime-main-new.trafficmanager.net
    DNS Response with CNAME of cwap-cu-2.cloudapp.net
    DNS Response with A record of an IP
  2. Within Fiddler I observed the MS AAD Application Proxy establishing a connection to TENANT_ID.bootstrap.msappproxy.net over port 8080. It sets up a TLS 1.0 (yes TLS 1.0, tsk tsk Microsoft) session with mutual authentication. The client certificate that is used for authentication of the MS AAD Application Proxy is the certificate I mentioned above.
  3. Fiddler next displayed the MS AAD Application Proxy doing an HTTP POST of the XML content below to the ConnectorBootstrap URI. The key pieces of information provided here are the ConnectorID, MachineName, and SubscriptionID information. My best guess is that MS consumes this information to determine which URI to redirect the connector to and consumes some of the response information for telemetry purposes.
  4. Fiddler continues to provide details into the bootstrapping process. The MS AAD Application Proxy receives back the XML content provided below and an HTTP 307 Redirect to bootstrap.his.msappproxy.net:8080. My guess here is the process consumes this information to configure itself in preparation for interaction with the Azure Service Bus.
  5. WireShark captured the DNS queries below resolving the IP for the host the process was redirected to in the previous step.
    DNS Query for bootstrap.his.msappproxy.net
    DNS Response with CNAME of his-nam1-runtime-main.trafficmanager.net
    DNS Response with CNAME of his-eus-1.cloudapp.net
    DNS Response with A record of 104.211.32.215
  6. Back to Fiddler I observed the connection to bootstrap.his.msappproxy.net over port 8080 and setup of a TLS 1.0 session with mutual authentication using the client certificate again. The process does an HTTP POST of the XML content  below to the URI of ConnectorBootstrap?his_su=NAM1. More than likely this his_su variable was determined from the initial bootstrap to the tenant ID endpoint. The key pieces of information here are the ConnectorID, SubscriptionID, and telemetry information.
  7. The next session capture shows the process received back the XML response below. The key pieces of content relevant here are within the SignalingListenerEndpointSettings section. Interesting pieces of information here are:
    • Name – his-nam1-eus1/TENANTID_CONNECTORID
    • Namespace – his-nam1-eus1
    • ServicePath – TENANTID_UNIQUEIDENTIFIER
    • SharedAccessKey

    This information is used by the MS AAD Application Proxy to establish listeners to two separate service endpoints at the Azure Service Bus. The proxy uses the SharedAccessKeys to authenticate to the endpoints. It will eventually use the relay service offered by the Azure Service Bus.


  8. WireShark captured the DNS queries below resolving the IP for the service bus endpoint provided above. This query is performed twice in order to set up the two separate tunnels to receive relays.
    DNS Query for his-nam1-eus1.servicebus.windows.net
    DNS Response with CNAME of ns-sb2-prod-bl3-009.cloudapp.net
    DNS Response with IP

    DNS Query for his-nam1-eus1.servicebus.windows.net
    DNS Response with CNAME of ns-sb2-prod-dm2-009.cloudapp.net
    DNS Response with different IP

  9. The MS AAD Application Proxy establishes connections with the two IPs received from above. These connections are established to port 5671. These two connections establish the MS AAD Application Proxy as a listener service with the Azure Service Bus to consume the relay services.
  10. At this point the MS AAD Application Proxy has connected to the Azure Service Bus to the his-nam1-eus1 namespace as a listener and is in the listen state. It’s prepared to receive requests from Azure AD (the sender) for verification of authentication. We’ll cover this conversation a bit in the next few steps.

    When a synchronized user in the journeyofthegeek.com tenant accesses the Azure login screen and plugs in a set of credentials, Azure AD (the sender) connects to the relay and submits the authentication request. Like the initial MS AAD Application Proxy connection to the Azure Relay service, I was unable to capture the transactions in Fiddler. However, I was able to capture some of the conversation with API Monitor.

    I pieced this conversation together by reviewing API calls to the ncryptsslp.dll and looking at the output for the BCryptDecrypt method and input for the BCryptEncrypt method. While the data is ugly and the listeners have already been set up, we’re able to observe some of the conversation that occurs when the sender (Azure AD) sends messages to the listener (MS AAD Application Proxy) via the service (Azure Relay). Based upon what I was able to decrypt, it seems like one-way asynchronous communication where the MS AAD Application Proxy listens for and receives messages from Azure AD.


  11. The LogonUserW method is called from CLR.DLL and the user’s account name, domain, and password are plugged in. Upon a successful return indicating the authentication is valid, the MS AAD Application Proxy initiates an HTTP POST to his-eus-1.connector.his.msappproxy.net:10101/subscriber/connection?requestId=UNIQUEREQUESTID. The post contains a base64 encoded JWT with the information below. Unfortunately I was unable to decode the bytestream, so I can only guess what’s contained in there.

    {"JsonBytes":[bytestream],"PrimarySignature":[bytestream],"SecondarySignature":null}
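
To reproduce the DNS observations from steps 1 and 5 without firing up WireShark, you can walk the CNAME chain yourself. Below is a small sketch using the third-party dnspython package; the tenant ID placeholder is obviously an assumption, and the CNAME targets you see will vary by region and over time.

    # Sketch: follow the CNAME chain for the bootstrap endpoints observed in steps 1 and 5.
    # Requires the dnspython package (pip install dnspython). TENANT_ID is a placeholder.
    import dns.resolver

    def walk_chain(name: str):
        print(f"Resolving {name}")
        # Resolve the A record; the answer section of the response contains the CNAME chain.
        answer = dns.resolver.resolve(name, "A")
        for rrset in answer.response.answer:
            for record in rrset:
                print(f"  {rrset.name} -> {record}")

    walk_chain("TENANT_ID.bootstrap.msappproxy.net")
    walk_chain("bootstrap.his.msappproxy.net")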

So what did we learn? Well, we know that Azure AD Pass-through Authentication uses multiple Microsoft components including the MS AAD Application Proxy, Azure Service Bus (Relay), Azure AD, and Active Directory Domain Services. The authentication request is exchanged between Azure AD and the ApplicationProxyConnectorService.exe process running on the on-premises server via the relay function of the Azure Service Bus.

The ApplicationProxyConnectorService.exe process authenticates to the URI where the bootstrap process occurs using a client certificate. After bootstrap the ApplicationProxyConnectorService.exe process obtains the shared access keys it will use to establish itself as a listener to the appropriate namespace in the Azure Relay. The process then establishes connection with the relay as a listener and waits for messages from Azure AD. When these messages are received, at the least the user’s password is encrypted with the public key of the client certificate (other data may be as well but I didn’t observe that).

When the messages are decrypted, the username, domain, and password are extracted and used to authenticate against AD DS. If the authentication is successful, a message is delivered back to Azure AD via the MS AAD Application Proxy service running in Azure.
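
The general pattern described above, credentials protected with the public key of the connector’s certificate so only the on-premises connector can recover them, can be illustrated with a short toy example. To be clear, this is not Microsoft’s actual implementation (I couldn’t observe the exact scheme); it’s just a sketch of the asymmetric encrypt-then-decrypt pattern using the Python cryptography package.

    # Toy illustration of the pattern described above: the "cloud" side encrypts the credential
    # with the connector certificate's public key, and only the on-premises side, which holds the
    # private key, can decrypt it before validating it against AD DS. NOT Microsoft's actual scheme.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Stand-in for the key pair behind the connector's client certificate.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # "Azure AD" side: encrypt the password before handing it to the relay.
    ciphertext = public_key.encrypt(b"P@ssw0rd!", oaep)

    # "Connector" side: decrypt and pass the result to LogonUserW / AD DS for validation.
    password = private_key.decrypt(ciphertext, oaep)
    assert password == b"P@ssw0rd!"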

Neato right? There are lots of moving parts behind this solution, but the finesse with which they’re integrated and executed makes them practically invisible to the consumer. This is a solid out-of-the-box solution and I can see why Microsoft markets it the way it does. I do have concerns that the solution is a bit of a black box and that companies leveraging it may not understand how to troubleshoot issues that occur with it, but I guess that’s what Premier Services and Consulting Services are for, right Microsoft? 🙂

Azure AD Pass-through Authentication – How does it work? Part 1

Hi everyone. I decided to take a break from the legacy and jump back to modern. Today I’m going to do some digging into Microsoft’s Azure AD Pass-through Authentication solution. The feature was introduced into public preview in December of 2016 and was touted as the simple and easy alternative to AD FS. Before I jump into the weeds of pass-through authentication, let’s do a high level overview of each option.

I will first cover the AD FS (Active Directory Federation Services) solution. When AD FS is used as a solution for authentication to Azure Active Directory, it’s important to remember that AD FS is simply a product that enables the use of a technology to solve a business problem. In this instance the technology we are using is modern authentication (sometimes referred to as claims-based authentication) to solve the business problem of obtaining some level of assurance that a user is who they say they are.

When Azure AD and AD FS are integrated to enable the use of modern authentication, the Web Services Federation (WS-Fed) standard is used. You are welcome to read the standard for details, but the gist of WS-Fed is that a security token service generates logical security tokens (referred to as assertions) which contain claims. The claims are typically pulled from a data store (such as Active Directory) and contain information about the user’s identity such as logon ID or email address. The data included in claims has evolved significantly over the past few years to include other data about the context of the user’s device (such as a trusted or untrusted device) and the user’s location (coming from a trusted or untrusted IP range). The assertions are signed by the security token service (STS) and delivered to an application (referred to as the relying party) which validates the signature on the assertion, consumes the claims from the assertion, and authorizes the user access to the application.
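
Since the terminology can get dense, here is a tiny toy illustration of the assertion/claims idea: the “STS” packages up claims about the user and signs them, and the “relying party” verifies the signature before trusting the claims. Real WS-Fed and SAML assertions are XML documents signed with the STS’s private key rather than an HMAC over JSON, so treat this purely as a conceptual sketch.

    # Toy sketch of the claims/assertion concept from the paragraph above. Real assertions are
    # signed XML and use asymmetric signatures; this only shows the moving parts.
    import hmac, hashlib, json

    STS_SIGNING_KEY = b"demo-secret"  # stand-in for the STS signing key

    def issue_assertion(claims: dict) -> dict:
        """'STS' side: bundle the claims and sign them."""
        payload = json.dumps(claims, sort_keys=True).encode()
        signature = hmac.new(STS_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return {"claims": claims, "signature": signature}

    def consume_assertion(assertion: dict) -> dict:
        """'Relying party' side: verify the signature, then trust the claims."""
        payload = json.dumps(assertion["claims"], sort_keys=True).encode()
        expected = hmac.new(STS_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, assertion["signature"]):
            raise ValueError("signature validation failed")
        return assertion["claims"]

    token = issue_assertion({"upn": "hsimpson@geekintheweeds.com", "trustedDevice": True})
    print(consume_assertion(token))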

You may have noticed above that we never talked about a user’s credentials. The reason for that is the user’s credentials aren’t included in the assertion. Prior to the STS generating the assertion, the user needs to authenticate to the STS. When AD FS is used, it’s common for the user to authenticate to the STS using Kerberos. Those of you that are familiar with Active Directory authentication know that a user obtains a Kerberos ticket-granting-ticket during workstation authentication to a domain-joined machine. When the user accesses AD FS (in this scenario the STS) the user provides a Kerberos service ticket. The process of obtaining that service ticket, passing it to AD FS, getting an assertion, and passing that assertion back to Azure AD (the relying party in this scenario) is all seamless to the user and results in a true single sign-on experience. Additionally, there is no need to synchronize a user’s Active Directory Domain Services password to Azure AD, which your security folk will surely love.

The challenge presented with using AD FS as a solution is you have yet another service which requires on-premises infrastructure, must be highly available, and requires an understanding of the concepts I have explained above. In addition, if the service needs to be exposed to the internet and be accessible by non-domain-joined machines, a reverse proxy (often the Microsoft Web Application Proxy in the Microsoft world) is required, which means yet more highly available infrastructure and an understanding of concepts such as split-brain DNS.

Now imagine you’re Microsoft and companies want to limit their on-premises infrastructure, and the wider technology market is slim on professionals that grasp all the concepts I have outlined above. What do you do? Well, you introduce a simple lightweight solution that requires little to no configuration or much understanding of what is actually happening. In comes Azure AD Pass-through Authentication.

Azure AD Pass-through authentication doesn’t require an STS or a reverse proxy. Nor does it require synchronization of a user’s Active Directory Domain Service password to Azure AD. It also doesn’t require making changes to any incoming flows in your network firewall. Sounds glorious right? Microsoft thinks this as well, hence why they’ve been pushing it so hard.

The user experience is very straightforward: the user plugs in their Active Directory Domain Services username and password at the Azure AD login screen. After the user hits the login button, they are logged in and go about their way. Pretty fancy right? So how does Microsoft work this magic? It’s actually quite complicated but ingeniously implemented to seem incredibly simplistic.

The suspense is building right? Well, you’ll need to wait until my next entry to dig into the delicious details. We’ll be using a variety of tools including a simple packet capturing tool, a web proxy debugging tool, and an incredibly awesome API monitoring tool.

See you next post!

Digging deep into the AD DS workstation logon process – Part 3

Today I will continue my series of posts on the AD DS workstation logon process. In part 1 I covered the DC locator process and in part 2 a breakdown of the SMB conversation. In today’s post we’ll focus on the NetLogon process.

Let’s first take a few minutes to talk about the NetLogon Remote Protocol (MS-NRPC). The protocol serves a number of purposes in an Active Directory environment. Those purposes vary depending on whether it is a member or a domain controller making a remote call, but can be summed up as follows:

  1. Pass-through authentication – Deliver the logon request from the domain-joined machine to the domain controller for verification of the credentials and return of user authorization attributes (such as groups) back to the domain-joined machine.
  2. Pass-through authentication and domain trusts – Establish a secure channel to pass a logon request from a domain controller in the trusting domain to a domain controller in the trusted domain.
  3. Secure channel maintenance – Allows the workstation to cycle its credentials over a secure channel with the domain controller.
  4. Domain trust services – Allows applications to obtain a listing of domain trusts.
  5. Message protection services – Used to authenticate the messages sent and received from a domain controller such as with the Windows Time service.
  6. Administrative services – Used for the following operations
    • NetLogon on Domain Controllers
      • Receive logon request
      • Determine account domain
      • Determine trust link
      • Find DC in trusted domain and setup secure channel
      • Deliver logon request
      • Return user validation data
    • NetLogon on Domain Members
      • Find a DC in its domain and setup secure channel
      • Logon requests flow over that channel going forward, including machine account password changes
  7. Account database replication – In Windows NT 4.0 and earlier there existed the concept of a primary domain controller (PDC) and backup domain controllers (BDCs). At that time NetLogon was used to replicate the account database between the PDC and the BDCs.

Since this series is focused on the workstation logon process, the sixth function for administrative services is functionality I’ll be breaking down in the remainder of the post. Here also is a link to the MS-NRPC specification which I used heavily to put together this post.

Shall we then?

  1. Source: Domain-joined machine
    Destination: Same Site or Closest Site Domain Controller
    Connection: TCP
    Port: 135
    Protocol: RPC
    Purpose: The domain-joined workstation initiates a request for a bind handle from the domain controller’s RPC Endpoint Mapper service. The domain controller responds with a bind acknowledgement providing the information necessary for the workstation to now connect to the RPC Endpoint Mapper.

  2. Source: Domain-joined machine
    Destination: Same Site or Closest Site Domain Controller
    Connection: TCP
    Port: 135
    Protocol: RPC
    Purpose: The domain-joined workstation queries the endpoint mapper on the domain controller for the endpoint that it can connect to in order to call the NRPC application. The domain controller responds with the IP and port information.

  3. Source: Domain-joined machine
    Destination: Same Site or Closest Site Domain Controller
    Connection: TCP
    Port: (Random port from DC’s dynamic port range)
    Protocol: RPC
    Purpose: The domain-joined workstation initiates a request for a bind handle for the NRPC application from the endpoint it received in the previous step. The domain controller responds with a bind acknowledgement providing the information necessary for the workstation to now connect to the NRPC application.

  4. Source: Domain-joined machine
    Destination: Same Site or Closest Site Domain Controller
    Connection: TCP
    Port: (Random port from DC’s dynamic port range)
    Protocol: RPC
    Purpose: The domain-joined workstation calls the NetrServerReqChallenge method and provides the necessary variables. Three of those key variables are:

    1. PrimaryName – The name of the server receiving the call
    2. ComputerName – The name of the client making the call
    3. ClientChallenge – A 64-bit nonce generated by the client. I’ll explain how this is used later in the process.

    The domain controller returns the ServerChallenge, which is a 64-bit nonce generated by the server.

  5. Source: Domain-joined machine
    Destination: Same Site or Closest Site Domain Controller
    Connection: TCP
    Port: (Random port from DC’s dynamic port range)
    Protocol: RPC
    Purpose: The domain-joined workstation calls the NetrServerAuthenticate3 method and provides the necessary variables. The key variables here are:

    1. PrimaryName – The name of the server receiving the call
    2. AccountName – The name of the account that contains the password of the computer account.
    3. SecureChannelType – Describes the type of channel being requested. In this case we are requesting a WorkstationSecureChannel.
    4. ComputerName – The name of the client making the call
    5. ClientCredential – The client credential is the encrypted 64-bit client challenge exchanged with the server at the previous step. The challenge is encrypted using the workstation’s secret key (computer account’s Active Directory credential). The encryption used will depend on what the client supports. In this scenario, with the default configuration on a Windows Server 2016, it is AES128.
    6. NegotiateFlags – Provides information about the workstation’s capabilities including encryption methods the client supports.

    The domain controller validates the client credential by accessing the workstation’s secret key (computer account’s Active Directory credential) and inputting it into the agreed upon encryption algorithm along with the client challenge received earlier. After the workstation’s identity is validated, the domain controller returns the following information:

    • Server Credential – The encrypted 64-bit server challenge exchanged with the workstation earlier in the process. The challenge is encrypted using the workstation’s secret key (computer account’s Active Directory credential). The encryption used will depend on what the client supports. In this scenario, with the default configuration on a Windows Server 2016, it is AES128.
    • NegotiateFlags – Provides information about the domain controller’s capabilities including the encryption methods the domain controller supports.
    • AccountRID – The workstation’s RID
    • ReturnValue – Success or Error

    The workstation validates the server credential by accessing its secret key (computer account’s Active Directory credential) and inputting it into the agreed upon encryption algorithm along with the server challenge received earlier. Once the credential is verified, the computed value will be used as the session key going forward for communication between the two nodes.

  6. Source: Domain-joined machine
    Destination: Same Site or Closest Site Domain Controller
    Connection: TCP
    Port: (Random port from DC’s dynamic port range)
    Protocol: RPC
    Purpose: The domain-joined workstation requests that further calls to the NRPC application be authenticated and encrypted. The domain controller accepts the alteration in communication context.

  7. Source: Domain-joined machine
    Destination: Same Site or Closest Site Domain Controller
    Connection: TCP
    Port: (Random port from DC’s dynamic port range)
    Protocol: RPC
    Purpose: *This conversation is encrypted* The domain-joined workstation calls the NetrLogonGetCapabilities method. The domain controller returns the same capabilities it returned when NetrServerAuthenticate3 was called. The client then examines these capabilities to verify they match what was originally received to setup the secure session. This is performed to mitigate man-in-the-middle attacks.

  8. Source: Domain-joined machine
    Destination: Same Site or Closest Site Domain Controller
    Connection: TCP
    Port: (Random port from DC’s dynamic port range)
    Protocol: RPC
    Purpose: *This conversation is encrypted* The domain-joined workstation calls the NetrLogonGetDomainInfo method. The domain controller returns information about the domain the client belongs to. The information returned includes properties such as the DNS Domain Name, DNS Forest Name, and Domain SID.

  9. Source: Domain-joined machine
    Destination: Same Site or Closest Site Domain Controller
    Connection: TCP
    Port: 135
    Protocol: RPC
    Purpose: The domain-joined workstation queries the endpoint mapper on the domain controller for the endpoint that it can connect to in order to call the LSARpc application. The domain controller responds with the IP and port information.

  10. Source: Domain-joined machine
    Destination: Same Site or Closest Site Domain Controller
    Connection: TCP
    Port: 389
    Protocol: LDAP
    Purpose: The domain-joined workstation establishes a connection to the LDAP service on the domain controller and queries the RootDSE with the following query:

    • Filter: (objectclass Present)
    • Attributes: ( subschemaSubentry )( dsServiceName )( namingContexts )( defaultNamingContext )( schemaNamingContext )( configurationNamingContext )( rootDomainNamingContext )( supportedControl )( supportedLDAPVersion )( supportedLDAPPolicies )( supportedSASLMechanisms )( dnsHostName )( ldapServiceName )( serverName )( supportedCapabilities )

    The domain controller returns the information to the workstation (a quick way to reproduce this RootDSE query is sketched after this list).

  11. Source: Domain-joined machine
    Destination: Same Site or Closest Site Domain Controller
    Connection: TCP
    Port: (Random port from DC’s dynamic port range)
    Protocol: RPC
    Purpose: The domain-joined workstation initiates a request for a bind handle for the LSARpc application from the endpoint it received in the previous step. The domain controller responds with a bind acknowledgement providing the information necessary for the workstation to now connect to the LSARpc application.

  12. Source: Domain-joined machine
    Destination: Same Site or Closest Site Domain Controller
    Connection: TCP
    Port: 88
    Protocol: Kerberos
    Purpose: The domain-joined machine attempts to verify its identity with the domain controller by sending a KRB-AS-REQ without pre-authentication data. The domain controller checks the object that represents the principal to determine if the account has the “Do not require Kerberos preauthentication” option set. If the option is not set, the domain controller returns KRB-ERROR (25) indicating preauthentication data is required.

  13. Source: Domain-joined machine
    Destination: Same Site or Closest Site Domain Controller
    Connection: TCP
    Port: (Random port from DC’s dynamic port range)
    Protocol: RPC
    Purpose: *This conversation is encrypted* The domain-joined workstation calls the LsarLookupNames4 method. This method translates batches of security principals to their SID form. The domain controller responds by translating the principals to their SIDs. The communication is encrypted with the session key established during the NetLogon process.

    I can’t find any clear cut information on this, but my guess is the workstation is attempting to resolve its own SID and perhaps the SID of the domain controller for processing of access control lists.

  14. Source: Domain-joined machine
    Destination: Same Site or Closest Site Domain Controller
    Connection: TCP
    Port: 88
    Protocol: Kerberos
    Purpose: The domain-joined machine re-attempts to verify its identity with the domain controller by sending a KRB-AS-REQ with pre-authentication data. The domain controller validates the principal’s identity and responds with a KRB-AS-REP which includes a Kerberos TGT for the principal to use to obtain additional Kerberos service tickets.
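
The RootDSE query from step 10 is easy to reproduce if you want to see exactly what the workstation gets back. Below is a minimal sketch using the third-party ldap3 package; the domain controller hostname is a placeholder from my lab.

    # Sketch: reproduce the RootDSE query from step 10 using the ldap3 package
    # (pip install ldap3). The DC hostname below is a lab placeholder.
    from ldap3 import Server, Connection, BASE

    server = Server("serverdc.geekintheweeds.com")
    conn = Connection(server, auto_bind=True)  # AD allows reading the RootDSE anonymously

    # Empty base + base scope + presence filter = RootDSE, the same shape as the workstation's query.
    conn.search(
        search_base="",
        search_filter="(objectClass=*)",
        search_scope=BASE,
        attributes=["defaultNamingContext", "dnsHostName", "ldapServiceName",
                    "serverName", "supportedLDAPVersion", "supportedCapabilities"],
    )
    print(conn.entries[0])
    conn.unbind()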

We’ll break here for today. In the next entry we’ll start to hit some of the LDAP conversation that goes on between the workstation and the domain controller. Much of that conversation is encrypted, so I’ll be doing my best to correlate the LDAP queries that appear in the Directory Services log back to the packet captures.

Thanks and see you then!