Office 365 Groups Naming Policies – Part 1

Groups…  It’s a term every business user consuming technology has heard at some point in time.  Most users only experience groups when they’re unable to access a specific application or file and the coworker sitting next to them informs them they need to call IT and get added to the department group.  Those of us who work on the technology side of the fence are very familiar with the benefits groups bring to the table when controlling access to data.  We are also quite familiar with the challenges they can bring when managing them at scale.

Something as simple as the lack of an enforced naming convention can create serious pain for an organization if it relies heavily upon the naming convention to determine the function and owner of a group.  The pain bleeds through IT and into the business: new employees wait longer to be on-boarded while IT tries to determine which groups they need to be in.  When it comes time to perform an access review, business owners may waste valuable time trying to determine if removing an employee from a specific group will impact that employee’s ability to fulfill their job responsibilities.

In the on-premises world organizations deal with the challenge of naming conventions in different ways.  Most rely upon a first or second level help desk to create groups according to the organization’s naming standard.  This method introduces the risk of human error and presents challenges when the group information for a particular application is sourced from a variety of different identity backends, forcing the staff to learn multiple tools.  Others make use of identity management (IDM) systems that automate the creation of groups and enforce the naming convention.  This method is very effective but also very costly to implement and operate.  A very small minority of organizations have evolved to the point where naming conventions are no longer important thanks to robust reporting systems and entitlement databases.

Very few organizations are able to successfully execute the third method, which leaves them with the first or second.  The introduction of software-as-a-service (SaaS) has made the first and second methods of enforcing a naming convention much more complicated.  Using the first method of leveraging help desk staff to create the groups manually is no longer scalable, and the second method of using a centralized IDM system is often limited by the vendor’s ability to write connectors to the wide variety of APIs in use across the thousands of SaaS vendors.  All is not lost, as it seems some vendors have begun to recognize the challenge this can introduce for their customers.

If your organization is a consumer of Office 365, you’ve more than likely begun to use Office 365 Groups.  Office 365 Groups offer a variety of features not found in the traditional security/distribution group or shared mailbox.  Take a look at this link for a comparison chart that documents the features.  One important thing to note is Office 365 Groups can only be created in Azure Active Directory (AAD).  You cannot synchronize an on-premises Active Directory Domain Services security or distribution group to AAD and convert it to an Office 365 Group.  This means you can’t leverage an existing solution for enforcing naming conventions unless that solution has a connector into Azure AD.  Given the features Office 365 Groups provide, and that they are the construct used by Microsoft Teams, you may make the decision to allow your users to create Office 365 Groups on the fly in order to take full advantage of the collaboration tools available in Office 365.  To quote Peter Venkman, “Human sacrifice, dogs and cats living together… mass hysteria!”.

Calm down my friend.  Microsoft has a solution in the pipeline that will solve your Office 365 Groups naming convention woes.  In my next post I’ll demonstrate the feature and walk through how to test it out while it is in preview.
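As a teaser, below is a rough sketch of the kind of configuration the preview exposes through the AzureADPreview PowerShell module.  The prefix/suffix string and blocked words are just examples I made up, and since the feature is in preview the setting names may change; I’ll walk through it properly next time.

    # Connect with an account that can manage directory settings
    Connect-AzureAD

    # Grab the Group.Unified settings template and build a setting object from it
    $template = Get-AzureADDirectorySettingTemplate | Where-Object { $_.DisplayName -eq 'Group.Unified' }
    $setting = $template.CreateDirectorySetting()

    # Enforce a naming convention; [GroupName] is replaced with the name the user enters
    $setting['PrefixSuffixNamingRequirement'] = 'GRP-[Department]-[GroupName]'
    $setting['CustomBlockedWordsList'] = 'CEO,Payroll,HR'

    # Create the tenant-wide setting (use Set-AzureADDirectorySetting if one already exists)
    New-AzureADDirectorySetting -DirectorySetting $setting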

Deep dive into AD FS and MS WAP – User Certificate Authentication through a WAP

Hi everyone,

Today I continue my series of posts that takes a behind the scenes look at how Active Directory Federation Services (AD FS) and the Microsoft Web Application Proxy (WAP) interact.  In my first post I explained the business cases that call for the usage of a WAP.  In my second post I did a deep dive into the WAP registration process (MS refers to this as the trust establishment between AD FS and the WAP).  In this post I cover how user certificate authentication is achieved when the AD FS server is placed behind a WAP.

AD FS offers a few different options to authenticate users to the service including Integrated Windows Authentication (IWA), forms-based authentication, and certificate authentication.  Readers who work in environments with sensitive data where assurance of a user’s identity is important should be familiar with certificate authentication in the Microsoft world.  If you’re unfamiliar with it I recommend you take a read through this Microsoft article.

With the recent release of the National Institute of Standards and Technology (NIST) Digital Identity Guidelines (SP 800-63), which rework the authenticator assurance levels (AAL) and relegate passwords to AAL1 only, organizations will be looking for other authenticator options.  Given the maturity of authenticators that make use of certificates, such as the traditional smart card, it’s likely many organizations will look at opportunities for how existing equipment and infrastructure can be further utilized.  All the more important, then, that we understand how AD FS certificate authentication works.

I’ll be using the lab I described in my first post.  I made the following modifications/additions to the lab:

  • Configured the Active Directory Certificate Services (AD CS) certificate authority (CA) to include a certificate revocation list (CRL) distribution point (CDP) in issued certificates.  The CRLs are served up via an IIS instance at the address crl.journeyofthegeek.com.  This is the only CDP listed in the certificates.  Certificates created during my original lab setup that are installed within the infrastructure do not include a CDP.
  • Added a non-domain-joined Windows 10 computer which will be used as the endpoint the test user accesses the federation service from.

Tool-wise I used ProcMon, Fiddler, API Monitor, and Wireshark.

So what did I discover?

Prior to any user interaction, I set up the tools I would be using moving forward.  On the WAP I started ProcMon as an administrator and configured my filters to capture only TCP Send and TCP Receive operations.  I also set up Wireshark using a filter of ip.addr==192.168.100.10 && tcp.port==80.  The IP address is the IP of the web server hosting my CRLs.  This would ensure I’d see the name of the process making the connection to the CDP as well as the conversation between the two nodes.

** Note that the machine will cache the CRLs after they are successfully downloaded from the CDP.  It will not make any further calls until the CRLs expire.  To get around this behavior while I was testing I ran the command certutil -setreg chain\ChainCacheResyncFiletime @now as outlined in this article.   This forces the machine to pull the CRLs again from the CDP regardless of whether or not they are expired.  I ran the command as the LOCAL SYSTEM security principal using psexec.
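For reference, here’s the sort of invocation I mean; psexec’s -s switch runs the wrapped command as LOCAL SYSTEM:

    psexec -s cmd /c "certutil -setreg chain\ChainCacheResyncFiletime @now"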

The final step was to start Fiddler as the NETWORK SERVICE security principal using the command psexec -i -u “NT AUTHORITY\Network Service” “C:\Program Files (x86)\Fiddler2\Fiddler.exe”.  Remember that Fiddler needs the public key certificate in the appropriate file location as I outlined in my last post.  Recall that the Web Application Proxy Service and the Active Directory Federation Service running on the WAP both run as that security principal.

Once all the tools were in place I logged into the non-domain-joined Windows 10 box, opened up Microsoft Edge, and popped the username of my test user into the username field of the Office 365 login page.

After home realm discovery occurred within Azure AD, I received the forms-based login page of my AD FS instance.

Let’s take a look at what’s happened on the WAP so far.

In the initial HTTP CONNECT session the WAP makes to the AD FS farm, we see the TLS handshake occur, during which the WAP authenticates itself to the AD FS server with its client certificate as described in my last post.

Once the secure session is established the WAP passes the HTTP GET request to the AD FS server.  It adds a number of headers to the request, which AD FS consumes to identify that the client is coming through the WAP.  This information is used for a number of AD FS features, such as enforcing additional authentication policies for Extranet access.

The WAP also passes a number of query strings, a few of which are interesting.  The first is client-request-id, a unique identifier for the session that AD FS uses to correlate event log errors with the session.  The username query string is obvious: it contains the user principal name entered in the username field on the O365 login page.  The wa query string shows a value of wsignin1.0, indicating the use of WS-Federation.  The wtrealm indicates the relying party identifier of the application, in this case Azure AD.
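Assembled, the relayed request looks something along these lines.  The GUID and username are illustrative placeholders; the wtrealm shown is the well-known WS-Federation realm identifier for Azure AD.

    https://sts.journeyofthegeek.com/adfs/ls/?client-request-id=5ca4c0f1-0000-0000-0000-000000000000
        &username=testuser%40journeyofthegeek.com
        &wa=wsignin1.0
        &wtrealm=urn%3Afederation%3AMicrosoftOnline
        &wctx=LoginOptions%3D3%26estsredirect%3D...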

The wctx query string is quite interesting and needs to be parsed a bit on its own.  Breaking down the value of the parameter we come across a few unique parameters embedded within it.

LoginOptions=3 indicates that the user has not selected the “Keep me signed in” option.  If the user had selected that checkbox a value of 1 would have been passed and AD FS would create a persistent cookie which would exist even after the browser closes.  This option is sometimes preferable for customers when opening documents from SharePoint Online so the user does not have to authenticate over and over.

The estsredirect contains the encoded and signed authentication request from O365.  I stared at API Monitor for a few hours, going API call by API call, trying to identify what this looks like once it’s decoded, but was unsuccessful.  If you know how to decode it, I’d love to know.  I’m very curious as to its contents.

The WAP next makes another HTTP GET to the AD FS server, this time including the additional query string pullStatus set equal to 0.  I’m clueless as to the function of this; I couldn’t find anything documented.  The only other thing that changes is the referer.

My best guess on the above two sessions is that the first session is where AD FS performs home realm discovery and maybe some processing to determine whether there are any special configurations for the WAP, such as limited or expanded authentication options (device authN, cert authN only).  The second session is simply the AD FS server presenting the authentication methods configured for Extranet users.

The user then chooses the “Sign in with an X.509 certificate” option (I’m not using SNI to host both forms and cert authN on the same port) and the WAP then performs another HTTP CONNECT to port 49443, which is the certificate authentication endpoint on the AD FS server.  It again authenticates to the AD FS server with its client certificate prior to establishing the secure tunnel.

In the third session we see an HTTP POST to the AD FS server with the same query parameters as our previous request, but this time providing a JSON object with the key/value combination AuthMethod=CertificateAuthentication in the body.

The next session is another HTTP POST with the same JSON object content plus the key/value pairs AuthMethod=CertificateAuthentication and RetrieveCertificate=1 in the body.  The AD FS server sends a 307 Temporary Redirect to the /adfs/backendproxytls/ endpoint on the AD FS server.

Prior to the redirect completing successfully, we see the calls to the CDP endpoint for the base and delta CRLs.

I was curious as to which process was pulling the CRLs and identified from the ProcMon capture that it was LSASS.EXE.

At the /adfs/backendproxytls/ endpoint the WAP performs another HTTP POST, this time posting a JSON object with a number of key/value combinations.

The interesting keys included in the JSON object are a nested Headers object, which contains all the WAP headers I covered earlier; a QueryString object, which contains all the query strings I covered earlier; and SerializedClientCertificate, which contains the certificate the user provided after selecting certificate authentication.  The AD FS server then sends back a cookie to the WAP.  This cookie represents the user’s authentication to the AD FS server, as detailed in this link.
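Reconstructed from the capture, the body of that POST is shaped roughly like the sketch below.  The values are placeholders and the elided portions are marked; only the key names come from the capture described above (x-ms-proxy is one of the documented WAP-injected headers).

    {
        "Headers": { "X-MS-Proxy": "WAP01", ... },
        "QueryString": { "username": "testuser@journeyofthegeek.com", "wa": "wsignin1.0", ... },
        "SerializedClientCertificate": "MIIC..."
    }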

The WAP then performs a final HTTP GET back at the /adfs/ls/ endpoint, including the previously described headers and query strings as well as providing the cookie it just received.  The AD FS server responds with the assertion requested by Microsoft along with the MSISAuthenticated, MSISSignOut, and MSISLoopDetectionCookie cookies, which are described in the link above.

What did we learn?

  1. The certificate is checked at both the WAP and the AD FS server to ensure it is valid and issued from a trusted certificate authority.  Remember to verify you trust the certificate chain of any user certificates on both the AD FS servers and WAPs.
  2. CRL revocation checking is enabled by default and is performed on both the AD FS server and the WAP.  Remember to verify the locations in your CDP are reachable by both devices.
  3. The AD FS servers use the LsaLogonUser function in the secur32.dll library to perform standard certificate authentication against Active Directory Domain Services.  I didn’t include this here, but I captured it by running API Monitor on the AD FS server.

In short, if you’re going to use device authentication or user certificate authentication make sure you have your PKI components in order.
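A quick way to sanity-check those first two points is certutil, which can build a certificate’s chain and fetch every CDP and AIA URL it references.  Running something like the following on both the WAP and an AD FS server against an exported user certificate (the file name here is just an example) will quickly surface unreachable CRL endpoints:

    certutil -verify -urlfetch usercert.cer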

See you next post!

Deep dive into AD FS and MS WAP – WAP Registration

Hi everyone,

In today’s blog entry I’ll be doing a deep dive into how the Microsoft Web Application Proxy (WAP) establishes a trust with the Active Directory Federation Service (AD FS) (I’ll be referring to this as registration) in order to act as a reverse proxy for AD FS.  In my first entry in this series I covered the business use cases that call for such an integration as well as providing an overview of the lab environment I’ll be using for the series.  So what does registration mean?  Well, the best way to describe it is to see it in action.

Figuring out how to capture the conversation took some trial and error.  This is where Sysinternals Process Explorer comes into play.  I went through the process of registering the WAP with AD FS using the Remote Access Management Console configuration utility and monitored the running processes with Process Explorer.  Upon reviewing the TCP/IP activity of the Remote Access Management Console process (RAMgmtUI.exe) I observed TCP connectivity to the AD FS farm.

The process is running as the logged-in user, in my case the administrator account I’ve configured.  This meant I would need to run Fiddler using the logged-in user context rather than having to do something funky like running it as SYSTEM or another security principal using PsExec.

I started up Fiddler and configured it to intercept HTTPS traffic.  Ensure that you’ve trusted the Fiddler root certificate so Fiddler can establish a man-in-the-middle (MITM) scenario.

I next ran the Remote Access Management Console and initiated the Web Application Proxy Configuration wizard.  Here I ran the wizard a few different times, specifying invalid credentials for the AD FS server to generate some web requests.  The web conversation popped up in Fiddler.

Digging into the third session shows an HTTP POST to sts.journeyofthegeek.com/adfs/Proxy/EstablishTrust with a return code of 401 Unauthorized, which we would expect given our application doesn’t yet know whether authentication is required and didn’t specify an Authorization header.

Session four shows another HTTP POST to the same URL, this time with an Authorization header specifying Basic authentication with our credentials Base64 encoded.  We receive another 401 because the credentials are invalid, which again is expected.

What’s interesting is the JSON object being posted to the URL.  The object includes a key named SerializedTrustCertificate with a Base64 encoded public-key certificate as the value.

Copying and pasting the encoded value into Notepad and saving the file with a .cer extension yields a certificate for which the WAP holds both the public and private keys.  The certificate is a self-signed certificate with a 2048-bit key length.
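If you’d rather script it than round-trip through Notepad, a couple of lines of PowerShell decode the value; $encoded and the output path are placeholders:

    # Paste the SerializedTrustCertificate value from Fiddler into $encoded
    $encoded = 'MIIC...'
    [System.IO.File]::WriteAllBytes('C:\temp\WapTrust.cer', [System.Convert]::FromBase64String($encoded))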

At this point the WAP will attempt numerous connections to the /adfs/Proxy/GetConfiguration URL with a query string of api-version=2.  It receives a 401 back because Fiddler doesn’t yet have a copy of the client certificate to provide to the AD FS server.  I let it time out and eventually the setup finished.

So what does the configuration information from AD FS look like when it’s successfully retrieved?  To see that we have to pay attention to the Microsoft.IdentityServer.ProxyService.exe process, which runs as the Active Directory Federation Services service (adfssrv).

Since the process runs as Network Service I needed to get a bit creative in how I captured the conversation with Fiddler.  The first step is to export the public-key certificate for the self-signed certificate generated by the WAP, name it ClientCertificate.cer, and store it in the Network Service profile folder at C:\Windows\ServiceProfiles\NetworkService\Documents\Fiddler2.  By doing this Fiddler will use that certificate for any website requiring client certificate authentication.
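Here’s a sketch of that export in PowerShell.  The subject filter is an assumption based on my lab, where the WAP’s self-signed certificate shows up in the machine store with “ProxyTrust” in the subject; adjust the filter to match whatever your WAP generated.

    # Find the WAP's self-signed proxy trust certificate in the machine store
    # (the subject match is an assumption; confirm against your own store)
    $cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -like '*ProxyTrust*' }

    # Export just the public-key certificate to the folder Fiddler checks for client certificates
    Export-Certificate -Cert $cert -FilePath 'C:\Windows\ServiceProfiles\NetworkService\Documents\Fiddler2\ClientCertificate.cer'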

The next step was to start Fiddler as the Network Service security principal.  To do this I used PSEXEC with the following options:

psexec -i -u “NT AUTHORITY\Network Service” “C:\Program Files (x86)\Fiddler2\Fiddler.exe”

I then restarted the Active Directory Federation Services service on the WAP and boom, there is our successful GET from the AD FS server at the /adfs/Proxy/GetConfiguration URL.

The WAP receives back a JSON object with all the configuration information for the AD FS service.  Much of this is information about the endpoints the AD FS server supports.  Beyond that we get information about the AD FS service configuration.  The WAP uses this configuration to set up its bindings with the HTTP.SYS kernel-mode driver.  Yes, the WAP uses HTTP.SYS in the same way AD FS uses it.
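You can see the results of that binding work for yourself.  Once the service is running on the WAP, the URL reservations and TLS certificate bindings HTTP.SYS is holding are visible with the standard netsh queries:

    netsh http show urlacl
    netsh http show sslcert

Look for the /adfs/ paths in the output of the first command and the certificate hashes bound to the federation service ports in the second.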

So what did we learn?  When establishing the trust with the AD FS server (I’m branding this registration 🙂 ) the WAP does the following:

  1. Generates a 2048-bit self-signed certificate
  2. Opens an HTTPS connection with an AD FS server
  3. Performs a POST on /adfs/Proxy/EstablishTrust, providing a JSON object containing the public-key certificate and authenticating to the AD FS server with the credentials provided to the wizard using Basic authentication.  If the authentication is successful the AD FS server establishes the trust.  (I’ll dig into this piece in the next post.)
  4. Performs a GET on /adfs/Proxy/GetConfiguration using the self-signed certificate to authenticate itself to the AD FS server.
  5. Consumes the configuration information and configures the appropriate endpoints with calls to HTTP.SYS.

So that’s the WAP side of the fence for establishing the trust.  In my next post I’ll briefly cover what goes on with the AD FS server as well as examining the LDAP calls (if any) to AD DS during the registration process.

See you next time!

A taste of the cloud with Symantec.cloud Email Security

Recently, I had a chance to demo three of Symantec’s cloud services: Symantec.cloud Email Security, Email Encryption, and Endpoint Security.  Since there do not seem to be too many detailed reviews of Symantec’s cloud services floating around on the Net, I figured I’d relay my experiences with the services.

Today I will be talking about Symantec.cloud Email Security.

Symantec.cloud Email Security

Host-based anti-malware is all well and good, but stopping malware from ever entering the network is even better.  That is where the Email Security portion of Symantec’s cloud services comes in.  The service aims to provide anti-spam, anti-virus, image control, and content control of email through a single cloud-based service managed using a web-based client portal.

Setup was fairly easy: it involved filling out some paperwork with information about the network and submitting it to Symantec to aid in the configuration of the client portal.  After about two weeks, we were provided with an email with further instructions on how to configure the mail servers.  The service requires directing the mail server to send all mail to Symantec through a TLS-encrypted connection, adjusting MX records to point to Symantec’s servers, and optionally locking down SMTP ports to allow traffic to and from only Symantec’s servers.  Performing all three of these tasks has the added benefit of decreasing the attack surface of the network by locking down SMTP and providing additional confidentiality for email data as it flows between the company’s network and Symantec’s mail servers (I’ll talk more about this when I review the Encryption portion of the service).  Symantec support replied quickly to any problems we ran into during the setup.
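For context, the MX cutover is the standard cloud mail-filtering pattern: your domain’s MX records stop pointing at your own mail server and instead point at the provider’s inbound clusters, so all inbound mail passes through the filter first.  In the DNS zone it looks something like the records below; the host names are purely illustrative, as Symantec supplies the actual cluster names during provisioning.

    example.com.    IN  MX  10  cluster1.mailprovider.example.net.
    example.com.    IN  MX  20  cluster1a.mailprovider.example.net.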

Once everything was in working order, we were provided access to the web-based portal.

The summary window provides graphical representations and statistics showing total incoming and outgoing email, emails containing viruses, spam, blocked images, and blocked content.  It is pretty typical of the summary windows you encounter in any type of enterprise-level security software, nothing too special, although it is neat to watch the amount of spam rise and fall depending on the day of the week (it seems even spammers enjoy taking the weekend off).

The anti-virus piece of the service utilizes Symantec’s vast database of virus signatures to detect and remove viruses from email.  The added benefit of this is better detection of zero-day threats, since you are not sitting around waiting for the DATs to be released and pushed to your local gateway anti-malware solution.  There is not much configuration to this portion of the service.

The anti-spam portion of the service uses typical anti-spam lists, as well as heuristics and a signature system.  In our testing, the lists caught close to 50% of the spam, with the heuristics and the signature system each catching about 25%.  We were really impressed with the effectiveness of the filter: almost no spam made it through, and we saw only about one false positive in every 10,000 emails.

The configuration options are pretty typical of any anti-spam solution.  Email can be sent to a quarantine hosted by Symantec, passed through with an appended header, blocked, or sent to a bulk email address.  The quarantine feature was pretty nice, as each user can be set up with access to his or her quarantined emails to restore any false positives.  Another available option is to give a single user control over the quarantines of multiple users.  The only downside to this option is that control of a quarantine cannot be shared among multiple users.  Hopefully that is a feature Symantec will add in the future.

The content control feature of the service is really intense.  I won’t go too in depth into it because I didn’t spend too much time playing with it.  Suffice it to say it is pretty awesome.  Want to know if your users are sending personally identifiable information out over email without encrypting it?  That can be done, even going so far as scanning Microsoft Office documents and OCR’d PDFs.  Suspect a certain user of sending company info out of the network without permission?  Set up a rule to copy his or her email with specific attachment types to another email address for further review.

The system uses templates to detect patterns, and the templates can be created by the user or with help from Symantec.  We had Symantec help us create a template to detect bank account numbers and tested it by sending some Excel documents through the system with fictitious account numbers.  The system caught the emails with the pattern and notified us of the email address used to send them.  These caught emails can be let through, tagged, logged, deleted, redirected to the administrator, or copied to the administrator.  I can see this coming in handy when trying to prevent a data breach of confidential information or during an internal investigation of an employee’s email activities.

I didn’t play with the image control feature of the service. It looks to be as intense as the content control from the little that I looked at.

If you are in need of a customized report from the system, you can request Symantec to create one for you. I didn’t end up utilizing this feature, so I can’t say what the response time from support is.

Overall, I was really impressed with the service.  The spam filter was amazing and the anti-virus seemed solid.  I can think of a thousand uses for the content control and can’t wait to play with it further.  Cost isn’t too bad, about $4,000/year for 50 users.  If you are looking to free up some server resources, consolidate software packages, and possibly increase your network security, Symantec’s cloud Email Security service is something to look at.