The Evolution of AD RMS to Azure Information Protection – Part 2 – Architecture

Hi there.  Welcome to the second post in my series exploring the evolution of Active Directory Rights Management Service (AD RMS) into Azure Information Protection (AIP).  In the first post of the series I gave a brief overview of the important role AIP plays in Microsoft’s Cloud App Security (CAS) offering.  I also covered the details of the lab I will be using for this series.  Take a few minutes to read that post and familiarize yourself with the lab details, because they’ll be helpful as we progress through the series.

I went back and forth as to what topic I wanted to cover for the second post and decided it would be useful to start at a high level, comparing the components in a typical Windows AD RMS implementation to those used when consuming AIP.  I’m going to keep the explanation of each component brief to avoid re-creating existing documentation, but I will provide links to official Microsoft documentation for each component I mention.  With that intro, let’s begin.

The infrastructure required in an AD RMS implementation is pretty minimal, but the complexity is in how all of the components work together to provide the solution.  At a very high level it is similar to any other web-based application, consisting of a web server, application code, and a data backend.  The web-based application integrates with a directory to authenticate users and to gather information about them for use in authorization decisions.  In the AD RMS world the components map to the following products:

  • Web Server – Machine running Windows Server with Microsoft Internet Information Services and Microsoft Message Queuing Service
  • Application Code – Code installed onto the machine after adding the AD RMS role to a machine running Windows Server
  • Data Backend – Machine running Windows Server with Microsoft SQL Server hosting the configuration and logging databases (optionally the Windows Internal Database (WID) for test environments)
  • Directory – Windows Active Directory provides authentication, user information used for authorization, and stores additional AD RMS configuration data (Service Connection Point)

Nodes providing the AD RMS service are organized into a logical container called an AD RMS Cluster. Like most web applications AD RMS can be scaled out by adding more nodes to the cluster to improve performance and provide high availability (HA).  If using MS SQL for the data backend, traditional methods of HA can be used such as SQL clustering, database mirroring, and log shipping.  You can plop your favorite load balancer in front of the solution to help distribute the application load and keep track of the health of the nodes providing the service.

Beyond the standard web-based application components we have some that are specific to AD RMS.  Let’s take a deeper look at them.

  • AD RMS Cluster Key

    The AD RMS cluster key is the most critical part of an AD RMS implementation, the “key to the kingdom”, as it is used to sign the Server Licensor Certificate (SLC), which contains the corresponding public key.  The SLC is used to sign the certificates AD RMS creates to allow consumption of AD RMS-protected content, and its public key is used by AD RMS clients to encrypt the content key when a document is newly protected by AD RMS.

    The AD RMS cluster key is shared by all nodes that are members of the AD RMS cluster.  It can be stored within the MS SQL database/WID or on a supported hardware security module for improved security.

  • AD RMS Client / AD RMS-Integrated Server Applications

    Applications are great, but you need a method to consume the content they protect.  Once content is protected by AD RMS it can only be consumed by an application capable of communicating with AD RMS.  In most cases this is accomplished by using an application that has been written to use the AD RMS Client.  The AD RMS Client comes pre-installed on desktop operating systems from Windows Vista onward and on server operating systems from Windows Server 2008 onward.

    The AD RMS client performs tasks such as bootstrapping (sometimes referred to as activation).  I won’t go into the details because I wouldn’t do nearly as good a job as Dan does in the bootstrapping link.  In short, it generates some keys and obtains some certificates from the AD RMS service that facilitate protecting and consuming content.

    AD RMS-integrated server applications such as Microsoft SharePoint Server and Microsoft Exchange Server provide server-level services that leverage the capabilities provided by AD RMS to protect data such as files stored in a SharePoint library or emails sent through Microsoft Exchange.

  • AD RMS Policy Templates

    While not a component of the system architecture, AD RMS Policy Templates are an AD RMS concept that deserves mention in this discussion.  Templates can be created by an organization to provide a standard set of use rights applicable to a type of data, and a common use case is creating multiple templates for different data types.  For example, you may want one template that allows trusted partners to view a document but not print or forward it, while another template restricts view rights to the accounting department.

    In AD RMS the policy templates are stored in the AD RMS database but are accessible via a call to the web service.  Optionally they can be exported from the database and distributed by other means, such as a Windows file share.

As you can see there are a lot of moving parts to an on-premises Windows AD RMS implementation.  Some of the components mentioned above can get even more complicated when the need to collaborate across organizations or support mobile devices arises.

How does AIP compare?  For the purposes of this post, I’m going to focus that comparison on Azure RMS, which provides the protection capability of AIP.  Azure RMS is a software-as-a-service (SaaS) offering from Microsoft that replaces (yes Microsoft, let’s be honest here) AD RMS.  It is licensed on a per-user basis via a stand-alone license, Enterprise Mobility + Security P1/P2, or a qualifying Office 365 license.

The architecture of Azure RMS is far simpler than what existed for AD RMS.  Like most SaaS offerings, there is no on-premises infrastructure required except in very specific scenarios such as hold-your-own-key (HYOK) or integrating Azure RMS with an on-premises Microsoft Exchange Server, Microsoft SharePoint Server, or servers running Windows Server and File Classification Infrastructure (FCI) using the RMS Connector.  This means you won’t be building any servers to hold the RMS role or SQL Servers to host configuration and logging information.  The infrastructure is now managed by Microsoft and the RMS service is delivered over HTTP/HTTPS.

Azure RMS shifts its directory dependency to Azure Active Directory (AAD).  It uses the tenant with which the Azure RMS licenses are associated for authentication and authorization of users.  As with any AAD use case, you can still authenticate users against your on-premises Windows Active Directory if you’ve configured your tenant for federated authentication, and you can source user data from an on-premises directory using Azure Active Directory Connect.

The cluster key, client, integrated applications, and policies are still in place and work similarly to on-premises AD RMS, with some changes to both function and names.

  • Azure Information Protection Tenant Key

    The AD RMS cluster key has been renamed the Azure Information Protection tenant key.  The tenant key serves the same purpose as the AD RMS cluster key and is used to sign the SLC and to decrypt information sent to Azure RMS that was encrypted with the public key in the SLC.  The differences between the two are really around how the key is generated and stored.  By default the tenant key is generated by Microsoft (note that Microsoft generates a 2048-bit key instead of the 1024-bit key used by new installations of AD RMS) and is associated with your Azure Active Directory tenant.  Other options include bring-your-own-key (BYOK), HYOK, and a special case where you are migrating from AD RMS to Azure RMS.  I’ll cover HYOK and the migration case in future posts.

  • Azure Information Protection Client

    The AD RMS client is replaced with the Azure Information Protection Client.  The client performs the same functions as the AD RMS Client but allows for integration with either on-premises AD RMS or Azure RMS.  In addition, the client introduces functionality specific to Azure Information Protection, including a classification bar for Microsoft Office, a Do Not Forward button in Microsoft Outlook, an option in Windows File Explorer to classify and protect files, and PowerShell modules that can be used to bulk classify and protect files (see the sketch following this list).  In a future post in this series I’ll be doing a deep dive into the client’s behavior, including analysis of its calls to the Azure Information Protection endpoints via Fiddler.

    Unlike the AD RMS client of the past, the Azure Information Protection Client is supported on mobile operating systems such as iOS and Android.  Additionally, it supports a wider variety of file types than the AD RMS client supported.

  • Azure RMS-Integrated Server Applications

    Like its predecessor, Azure RMS can be consumed by server applications such as Microsoft Exchange Server and Microsoft SharePoint Server through the RMS Connector.  There is native integration with Office 365 products including Exchange Online, SharePoint Online, and OneDrive for Business, and the capability is extensible to third-party applications via Cloud App Security (I’ll demonstrate this after I complete this series).  Like all good SaaS offerings, there is also an API that can be leveraged to add the functionality to custom-developed applications.

  • Rights Management Templates

    Azure RMS continues to use the concept of rights management templates like its predecessor.  Instead of being stored in a SQL database, the templates are stored in Microsoft’s cloud.  Templates created in AD RMS can also be imported into Azure RMS for continued use; I’ll demonstrate that process in a future post in this series.  Classification labels in AIP are backed by templates whenever a label applies protection with a pre-defined set of rights.  I’ll demonstrate this in a later post.
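
To make the last two items a bit more concrete, here is a minimal PowerShell sketch of the two modules involved: the Azure RMS administration module (AADRM at the time of writing, since renamed to AIPService) for looking at the tenant configuration and templates, and the AzureInformationProtection module that ships with the AIP client for bulk classification and protection.  Treat it as a sketch rather than a recipe; the file share path and the label GUID are placeholders, and cmdlet names may differ slightly between versions.

    # Admin side: connect to the tenant, then review the service configuration
    # and the rights management templates stored in Microsoft's cloud.
    Import-Module AADRM
    Connect-AadrmService          # prompts for tenant administrator credentials
    Get-AadrmConfiguration        # service URLs, licensing endpoints, and so on
    Get-AadrmTemplate             # templates backing AIP labels that apply protection

    # Client side (requires the AIP client): inventory a share, then bulk apply
    # a classification label. The path and LabelId below are placeholders.
    Import-Module AzureInformationProtection
    Get-AIPFileStatus -Path '\\fileserver\finance\'
    Get-ChildItem '\\fileserver\finance' -File -Recurse |
        Set-AIPFileLabel -LabelId '00000000-0000-0000-0000-000000000000'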

Far simpler in the SaaS world, isn’t it?  In addition to simplicity, Microsoft delivers more capabilities, tighter integration with its collaboration tools, and extension of those capabilities to third-party applications through a robust API and integration with Cloud App Security.

See you next post!

Deep dive into AD FS and MS WAP – User Certificate Authentication through a WAP

Hi everyone,

Today I continue my series of posts that take a behind-the-scenes look at how Active Directory Federation Services (AD FS) and the Microsoft Web Application Proxy (WAP) interact.  In my first post I explained the business cases that would call for the usage of a WAP.  In my second post I did a deep dive into the WAP registration process (Microsoft refers to this as establishing the trust between AD FS and the WAP).  In this post I decided to cover how user certificate authentication is achieved when the AD FS server is placed behind a WAP.

AD FS offers a few different options to authenticate users to the service, including Integrated Windows Authentication (IWA), forms-based authentication, and certificate authentication.  Readers who work in environments with sensitive data, where assurance of a user’s identity is important, should be familiar with certificate authentication in the Microsoft world.  If you’re unfamiliar with it, I recommend you take a read through this Microsoft article.

With the recent release of the National Institute of Standards and Technology (NIST) Digital Identity Guidelines (SP 800-63), which rework the authenticator assurance levels (AAL) and relegate passwords to AAL1 only, organizations will be looking for other authenticator options.  Given the maturity of authenticators that make use of certificates, such as the traditional smart card, it’s likely many organizations will look at opportunities for how existing equipment and infrastructure can be further utilized.  So it’s all the more important that we understand how AD FS certificate authentication works.

I’ll be using the lab I described in my first post.  I made the following modifications/additions to the lab:

  • Configured the Active Directory Certificate Services (AD CS) certificate authority (CA) to include a certificate revocation list (CRL) distribution point (CDP) in issued certificates.  The CRLs will be served up via an IIS instance with the address crl.journeyofthegeek.com.  This is the only CDP listed in the certificates.  Certificates created during my original lab setup that are installed within the infrastructure do not include a CDP.
  • Added a non-domain-joined Windows 10 computer which will be used as the endpoint from which the test user accesses the federation service.

Tool-wise I used ProcMon, Fiddler, API Monitor, and WireShark.

So what did I discover?

Prior to doing any type of user interaction, I set up the tools I would be using moving forward.  On the WAP I started ProcMon as an Administrator and configured my filters to capture only TCP Send and TCP Receive operations.  I also set up WireShark using a filter of ip.addr==192.168.100.10 && tcp.port==80.  The IP address is the IP of the web server hosting my CRLs.  This would ensure I’d see the name of the process making the connection to the CDP as well as the conversation between the two nodes.


** Note that the machine will cache the CRLs after they are successfully downloaded from the CDP.  It will not make any further calls to the CDP until the CRLs expire.  To get around this behavior while I was testing, I ran the command certutil -setreg chain\ChainCacheResyncFiletime @now as outlined in this article.  This forces the machine to pull the CRLs from the CDP again regardless of whether or not they have expired.  I ran the command as the LOCAL SYSTEM security principal using psexec.

The final step was to start Fiddler as the NETWORK SERVICE security principal using the command psexec -i -u "NT AUTHORITY\Network Service" "C:\Program Files (x86)\Fiddler2\Fiddler.exe".  Remember that Fiddler needs the public key certificate in the appropriate file location as I outlined in my last post.  Recall that the Web Application Proxy Service and the Active Directory Federation Services service running on the WAP both run as that security principal.  Both commands are consolidated in the sketch below for reference.
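
Here are the two commands from above gathered into a single sketch.  PsExec is assumed to be on the path, and the Fiddler install location is the default one referenced in this post; run both from an elevated prompt on the WAP.

    # Flush the cached CRLs as LOCAL SYSTEM (psexec -s) so the next certificate
    # validation forces a fresh download from the CDP.
    psexec -s cmd /c 'certutil -setreg chain\ChainCacheResyncFiletime @now'

    # Start Fiddler as NETWORK SERVICE, the principal the Web Application Proxy
    # Service and the AD FS service run under on the WAP.
    psexec -i -u 'NT AUTHORITY\Network Service' 'C:\Program Files (x86)\Fiddler2\Fiddler.exe'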

Once all the tools were in place I logged into the non-domain-joined Windows 10 box, opened up Microsoft Edge, browsed to the Office 365 login page, and popped the username of my test user into the username field.


After home realm discovery occurred within Azure AD, I received the forms-based login page of my AD FS instance.


Let’s take a look at what’s happened on the WAP so far.

In the initial HTTP CONNECT session the WAP makes to the AD FS farm, we see the TLS handshake (beginning with the ClientHello) occur, during which the WAP authenticates itself to the AD FS server with its client certificate as described in my last post.


Once the secure session is established, the WAP passes the HTTP GET request to the AD FS server.  It adds a number of headers to the request which AD FS consumes to identify that the request is coming through the WAP.  This information is used for a number of AD FS features such as enforcing additional authentication policies for Extranet access.


The WAP also passes a number of query strings.  There are a few interesting query strings here.  The first is the client-request-id, which is a unique identifier for the session that AD FS uses to correlate event log errors with the session.  The username query string is obvious and shows the user principal name that was entered in the username field on the O365 login page.  The wa query string shows a value of wsignin1.0, indicating the usage of WS-Federation.  The wtrealm indicates the relying party identifier of the application, in this case Azure AD.


The wctx query string is quite interesting and needs to be parsed a bit on its own.  Breaking down its value we come across three unique parameters.

LoginOptions=3 indicates that the user has not selected the “Keep me signed in” option.  If the user had selected that checkbox a value of 1 would have been passed and AD FS would create a persistent cookie which would exist even after the browser closes.  This option is sometimes preferable for customers when opening documents from SharePoint Online so the user does not have to authenticate over and over.

The estsredirect contains the encoded and signed authentication request from O365.  I stared at API monitor for a few hours going API call by API call trying to identify what this looks like once it’s decoded, but was unsuccessful.  If you know how to decode it, I’d love to know.  I’m very curious as to its contents.

The WAP next makes another HTTP GET to the AD FS server, this time including the additional query string of pullStatus, which is set equal to 0.  I’m clueless as to the function of this; I couldn’t find anything on it.  The only other thing that changes is the referer.

My best guess on the above two sessions is that the first session is where AD FS performs home realm discovery and perhaps some processing to determine whether there are any special configurations for the WAP, such as limited or expanded authentication options (device authN, cert authN only).  The second session is simply the AD FS server presenting the authentication methods configured for Extranet users.

The user then chooses the “Sign in with an X.509 certificate” option (I’m not using SNI to host both forms and cert authN on the same port) and the WAP then performs another HTTP CONNECT to port 49443, which is the certificate authentication endpoint on the AD FS server.  It again authenticates to the AD FS server with its client certificate prior to establishing the secure tunnel.

In the third session we see an HTTP POST to the AD FS server with the same query parameters as the previous request, this time also providing a JSON object in the body with the key/value pair AuthMethod=CertificateAuthentication.


The next session is another HTTP POST with the same JSON object content plus the key/value pairs AuthMethod=CertificateAuthentication and RetrieveCertificate=1 in the body.  The AD FS server responds with a 307 Temporary Redirect to the /adfs/backendproxytls/ endpoint on the AD FS server.

Prior to the redirect completing successfully, we see the calls to the CDP endpoint for the full and delta CRLs.


I was curious as to which process was pulling the CRLs and identified it was LSASS.EXE from the ProcMon capture.


At the /adfs/backendproxytls/ endpoint the WAP performs another HTTP POST, this time posting a JSON object with a number of key/value combinations.


The interesting items included in the JSON object are the nested Headers object, which contains all of the WAP headers I covered earlier; the query string object, which contains all of the query strings I covered earlier; and the SeralizedClientCertificate, which contains the certificate the user provided after selecting certificate authentication.  The AD FS server then sends back a cookie to the WAP.  This is the cookie representing the user’s authentication to the AD FS server, as detailed in this link.


The WAP then performs a final HTTP GET back at the /adfs/ls/ endpoint, including the previously described headers and query strings as well as providing the cookie it just received.  The AD FS server responds by providing the assertion requested by Microsoft along with the MSISAuthenticated, MSISSignOut, and MSISLoopDetectionCookie cookies which are described in the link above.

What did we learn?

  1. The certificate is checked at both the WAP and the AD FS server to ensure it is valid and issued from a trusted certificate authority.  Remember to verify you trust the certificate chain of any user certificates on both the AD FS servers and WAPs.
  2. CRL revocation checking is enabled by default and is performed on both the AD FS server and the WAP.  Remember to verify the locations in your CDP are reachable from both devices (see the sketch after this list).
  3. The AD FS servers use the LsaLogonUser function in secur32.dll to perform standard certificate authentication against Active Directory Domain Services.  I didn’t include it here, but I captured this by running API Monitor on the AD FS server.
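
If you want a quick way to check the first two items, the sketch below is what I’d run on the WAP and on each AD FS server: certutil builds the chain for a certificate and fetches every CDP and AIA URL it finds along the way.  The exported certificate path is a placeholder.

    # Validate the chain and confirm every CDP/AIA URL in it is reachable from
    # this machine. Export the public portion of a test user certificate first.
    certutil -verify -urlfetch 'C:\temp\testuser.cer'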

In short, if you’re going to use device authentication or user certificate authentication make sure you have your PKI components in order.

See you next post!

Active Directory Federation Services – SQL Attribute Store

Hi everyone,

I recently had a use case come across my desk where I needed to do a SAML integration with a SaaS provider.  The provider required a number of pieces of information about the user beyond the standard unique identifier.  The additional information would be used to direct the user to the appropriate instance of the SaaS application.

In the past fifty or so SAML integrations I’ve done, I’ve been able to source my data directly from the Active Directory store.  This was because Active Directory was authoritative for the data, or there was a reliable data synchronization process in place such that the data was being sourced from an authoritative source.  In this scenario, neither option was available.  Thankfully the data source I needed to hit to get the missing data exposed a subset of its data through a Microsoft SQL view.

I have done a lot with AD FS over the past few years, from design to operational support of the service, but I had never sourced information from a data source hosted in MS SQL Server.  I reviewed the Microsoft documentation available via TechNet and found it to be lacking.  Further searches across Microsoft and third-party blogs provided a number of “bits” of information but no real end-to-end guide.  Given the lack of solid content, I decided it would be fun to put one together, so off to Azure I went.

For the lab environment, I built the following:

  • Active Directory forest name – geekintheweeds.com
  • Server 1 – SERVERDC (Windows Server 2016)
    • Active Directory Domain Services
    • Domain Name System (DNS)
    • Active Directory Certificate Services
  • Server 2 – SERVER-ADFS (Windows Server 2016)
    • Active Directory Federation Services
    • Microsoft SQL Server Express 2016
  • Server 3 – SERVER-WEB (Windows Server 2016)
    • Microsoft IIS

On SERVER-WEB I installed the sample claims application referenced here.  Make sure to follow the instructions in the blog to save yourself some headaches.  There are plenty of blogs out there that discuss building a lab consisting of the services outlined above, so I won’t cover those details.

On SERVER-ADFS I created a database named hrdb within the same SQL instance as the AD FS databases.  Within the database I created a table named dbo.EmployeeInfo with five columns named givenName, surName, email, userName, and role, all of data type nvarchar(MAX).  The userName column contained the unique values I used to relate a user object in Active Directory back to a record in the SQL database.
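
For reference, here is a sketch of that table created through the SqlServer module’s Invoke-Sqlcmd rather than SQL Server Management Studio.  The database, table, and column names come from the lab; the instance name and the sample row are assumptions.

    # Create the lab table and seed one sample row in the hrdb database.
    Import-Module SqlServer   # or the older SQLPS module

    $createTable = "
    CREATE TABLE dbo.EmployeeInfo (
        givenName NVARCHAR(MAX),
        surName   NVARCHAR(MAX),
        email     NVARCHAR(MAX),
        userName  NVARCHAR(MAX),
        [role]    NVARCHAR(MAX)
    );
    INSERT INTO dbo.EmployeeInfo (givenName, surName, email, userName, [role])
    VALUES (N'Jane', N'Doe', N'jane.doe@geekintheweeds.com', N'jdoe', N'Accounting');
    "

    Invoke-Sqlcmd -ServerInstance '.\SQLEXPRESS' -Database 'hrdb' -Query $createTable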


Once the database was created and populated with some sample data, and the appropriate Active Directory user objects were created, it was time to configure the connectivity between AD FS and MS SQL.  Before creating the new attribute store, the AD FS service account needs appropriate permissions to access the SQL database.  I went the easy route and gave the service account the db_datareader role on the database, although the CONNECT and SELECT permissions would probably have been sufficient.
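
A sketch of that permission grant in T-SQL, again via Invoke-Sqlcmd, is below.  GEEKINTHEWEEDS\svc-adfs is a hypothetical service account name; substitute whatever account your AD FS farm actually runs under.

    # Create a Windows login for the AD FS service account at the instance level.
    Invoke-Sqlcmd -ServerInstance '.\SQLEXPRESS' -Query "CREATE LOGIN [GEEKINTHEWEEDS\svc-adfs] FROM WINDOWS;"

    # Map the login into hrdb and give it read access through db_datareader.
    Invoke-Sqlcmd -ServerInstance '.\SQLEXPRESS' -Database 'hrdb' -Query "
    CREATE USER [GEEKINTHEWEEDS\svc-adfs] FOR LOGIN [GEEKINTHEWEEDS\svc-adfs];
    ALTER ROLE db_datareader ADD MEMBER [GEEKINTHEWEEDS\svc-adfs];
    "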


After the service account was given appropriate permissions, the next step was to configure the database as an attribute store in AD FS.  To do that I opened the AD FS management console, expanded the Service node, right-clicked on Attribute Stores, and selected the Add Attribute Store option.  I used mysql as the store name and selected the SQL option from the drop-down box.  My SQL was a bit rusty, so the connection string took a few tries to get right.
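
The same thing can be scripted with the AD FS PowerShell module, which is a handy way to capture the connection string once you finally get it right.  The store name and database come from the lab; the instance name and the exact shape of the connection string (including the Connection configuration key) are assumptions to adjust for your environment.

    # Register the SQL database as an AD FS attribute store named 'mysql'.
    Add-AdfsAttributeStore -Name 'mysql' -StoreType 'SQL' -Configuration @{
        'Connection' = 'Server=.\SQLEXPRESS;Database=hrdb;Integrated Security=True'
    }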


I then created a new claim description to hold the role information I was pulling from the SQL database.
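
Here is a sketch of that claim description created from PowerShell instead of the console.  The claim type URI is a hypothetical value I picked for illustration; any unique URI works as long as the claim rules later reference the same value.

    # Create a custom claim description to carry the role value from SQL.
    Add-AdfsClaimDescription -Name 'GIW Role' `
        -ClaimType 'urn:geekintheweeds:role' `
        -IsOffered $true -IsAccepted $true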


The last step in the process was to create some claim rules to pull data from the SQL database.  Pulling data from a MS SQL datastore requires the use of custom claim rules.  If you’re unfamiliar with the custom claim language, the following two links are two of the best I’ve found on the net:

The first claim rule I created was a rule to query Active Directory via LDAP for the sAMAccountName attribute.  This is the attribute I would be using to query the SQL database for the user’s unique record.
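
Below is a sketch of that first rule in the AD FS claim rule language, kept in a PowerShell variable so the full rule set can be applied in one shot at the end.  It takes the authenticated user’s windowsaccountname claim, queries the Active Directory attribute store for sAMAccountName, and adds the result to the incoming claim set.  The urn:geekintheweeds:samaccountname claim type is a hypothetical URI used only to carry the value between rules.

    # Rule 1: query AD for sAMAccountName and add it as an incoming claim.
    $ruleSamAccountName = '
    c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
     => add(store = "Active Directory",
            types = ("urn:geekintheweeds:samaccountname"),
            query = ";sAMAccountName;{0}",
            param = c.Value);
    '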


Next up was my first custom claim rule, where I queried the SQL database for the record whose userName column matched the sAMAccountName value pulled in the earlier step, and requested back the value in the email column of the record that was returned.  Since I wanted to do some transforming of the information in a later step, I added the claim to the incoming claim set.


I then issued another query for the value in the role column.
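
Here is a sketch of those two SQL attribute store queries in claim rule language.  Both run against the mysql attribute store configured earlier, use the sAMAccountName claim added by the first rule to fill the {0} placeholder, and add their results to the incoming claim set so the transform rules can act on them.  The role claim type is the hypothetical URI from the claim description above.

    # Rule 2: look up the user's email address in the hrdb database.
    $ruleEmail = '
    c:[Type == "urn:geekintheweeds:samaccountname"]
     => add(store = "mysql",
            types = ("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"),
            query = "SELECT email FROM dbo.EmployeeInfo WHERE userName = {0}",
            param = c.Value);
    '

    # Rule 3: look up the user's role in the same table.
    $ruleRole = '
    c:[Type == "urn:geekintheweeds:samaccountname"]
     => add(store = "mysql",
            types = ("urn:geekintheweeds:role"),
            query = "SELECT [role] FROM dbo.EmployeeInfo WHERE userName = {0}",
            param = c.Value);
    '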


Finally, I performed some transforms to verify I was getting the data I wanted.  I converted the email address claim type to the Common Name claim type, and the custom role claim definition I referenced above to the out-of-the-box role claim definition.  I then hit the endpoint for the sample claim app and… VICTORY!
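
And the final transforms, plus one way the whole rule set could be applied from PowerShell (in the lab I pasted the rules into the claim rule wizard instead).  ‘Sample Claims App’ is a hypothetical relying party trust name, and this reuses the rule variables from the earlier sketches.

    # Rules 4 and 5: convert the email claim to Common Name and the custom role
    # claim to the out-of-the-box role claim type.
    $ruleTransforms = '
    c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"]
     => issue(Type = "http://schemas.xmlsoap.org/claims/CommonName", Value = c.Value);

    c:[Type == "urn:geekintheweeds:role"]
     => issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/role", Value = c.Value);
    '

    # Apply the full rule set to the relying party trust for the sample app.
    Set-AdfsRelyingPartyTrust -TargetName 'Sample Claims App' `
        -IssuanceTransformRules ($ruleSamAccountName + $ruleEmail + $ruleRole + $ruleTransforms)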


Simple right?  Well it would be if this information had been documented within a single link.  Either way, I had some good lessons learned that I will share with you now:

  • Do NOT copy and paste claim rules.  I chased a number of red herrings trying to figure out why my claim rule was being rejected.  More than likely the copy/paste added an invalid character I was unable to see.
  • Brush up on your MS SQL before you attempt this.  My SQL was super rusty and it caused me to go down a number of paths which wasted time.  Thankfully, my coworker Jeff Lee was there to add some brain power and help work through the issues.

Before I sign off, I want to thank my coworker Jeff Lee for helping out on this one.  It was a great learning experience for both of us.

Thanks and have a wonderful Memorial Day!

A taste of the cloud with Symantec.cloud Email Security

Recently, I had a chance to demo three of Symantec’s cloud services: Symantec.cloud Email Security, Email Encryption, and Endpoint Security. Since there do not seem to be many detailed reviews of Symantec’s cloud services floating around on the Net, I figured I would relay my experiences with the service.

Today I will be talking about Symantec.cloud Email Security.

Symantec.cloud Email Security

Host-based anti-malware is all well and good, but stopping malware from ever entering the network is even better. That is where the Email Security portion of Symantec’s cloud services comes in. The service aims to provide anti-spam, anti-virus, image control, and content control of email through a single cloud-based service managed using a web-based client portal.

Setup was fairly easy; it involved filling out some paperwork with information about the network and submitting it to Symantec to aid in the configuration of the client portal. After about two weeks, we were provided with an email with further instructions on how to configure the mail servers. The service requires directing the mail server to send all mail to Symantec through a TLS-encrypted connection, adjusting MX records to point to Symantec’s servers, and optionally locking down SMTP ports to allow traffic to and from only Symantec’s servers. Performing all three of these tasks has the added benefit of decreasing the attack surface of the network by locking down SMTP and providing additional confidentiality for email data as it flows between the company’s network and Symantec’s mail servers (I’ll talk more about this when I review the Encryption portion of the service). Symantec support replied quickly to any problems we ran into during the setup.

Once everything was in working order, we were provided access to the web-based portal.

The summary window provides graphical representations and statistics showing total incoming and outgoing email, emails containing viruses, spam, blocked images, and blocked content. It is pretty typical of the summary windows you encounter in any type of enterprise-level security software, nothing too special. Although it is neat to watch the amount of spam rise and fall depending on the day of the week (it seems even spammers enjoy taking the weekend off).

The anti-virus piece of the service utilizes Symantec’s vast database of virus signatures to detect and remove viruses from email. The added benefit of this is better detection of zero-day threats, since you are not sitting around waiting for the DATs to be released and pushed to your local gateway anti-malware solution. There is not much configuration to this portion of the service.

The anti-spam portion of the service uses typical anti-spam lists, as well as heuristics and a signature system. From our testing, the lists catch close to 50% of the spam, with the heuristics and the signature system each catching about 25%. We were really impressed with the effectiveness of the filter. Almost no spam made it through, and we saw a false positive only about 1 out of every 10,000 emails.

The configuration options are pretty typical of any anti-spam solution. Email can be sent to a quarantine hosted by Symantec, passed along with an appended header, blocked, or sent to a bulk email address. The quarantine feature was pretty nice, as each user can be set up with access to his or her quarantined emails to restore any false positives. Another available option is to give a single user control over the quarantines of multiple users. The only downfall to this option is that control of the quarantines cannot be shared among users. Hopefully that is a feature Symantec will add in the future.

The content control feature of the service is really intense, as you can see from the screenshot below. I won’t go too in depth into it because I didn’t spend too much time playing with it. Suffice to say it is pretty awesome. Want to know if your users are sending personally identifiable information out over email without encrypting it? That can be done, even going so far as scanning Microsoft Office documents and OCR’d PDFs. Suspect a certain user of sending company info out of the network without permission? Set up a rule to copy his or her email with specific attachment types to another email address for further review.

The system uses templates to detect patterns, and the templates can be created by the user or with help from Symantec. We had Symantec help us create a template to detect bank account numbers and tested it by sending some Excel documents with fictitious account numbers through the system. The system caught the emails matching the pattern and notified us of the email address used to send them. These caught emails can be let through, tagged, logged, deleted, redirected to the administrator, or copied to the administrator. I can see this coming in handy when trying to prevent a data breach of confidential information or during an internal investigation of an employee’s email activities.

I didn’t play with the image control feature of the service. It looks to be as intense as the content control from the little that I looked at.

If you are in need of a customized report from the system, you can request that Symantec create one for you. I didn’t end up utilizing this feature, so I can’t say what the response time from support is.

Overall, I was really impressed with the service. The spam filter was amazing and the anti-virus seemed solid. I can think of a thousand uses for the content control feature and can’t wait to play with it further. The cost isn’t too bad, about $4,000/year for 50 users. If you are looking to free up some server resources, consolidate software packages, and possibly increase your network security, Symantec’s cloud Email Security service is something to look at.