Azure Authorization – Azure ABAC (Attribute-based Access Control)

This is part of my series on Azure Authorization.

  1. Azure Authorization – The Basics
  2. Azure Authorization – Azure RBAC Basics
  3. Azure Authorization – actions and notActions
  4. Azure Authorization – Resource Locks and Azure Policy denyActions
  5. Azure Authorization – Azure RBAC Delegation
  6. Azure Authorization – Azure ABAC (Attribute-based Access Control)

Welcome back fellow geeks.

I do a lot of learning and educational sessions with my customer base. The volume pretty much demands reusable content, which means I gotta build decks and code samples… and, worse, maintain them. The maintenance piece typically consists of me mentally promising myself to update the content and kicking the can down the road for a few months. Eventually, I get around to it.

This month I was doing some updates to my content around Azure Authorization and decided to spend a bit more time with Azure ABAC (attribute-based access control). If you’re unfamiliar with Azure ABAC, that’s no surprise, because the use cases are so very limited as of today. Limited as they are, it’s worthwhile functionality to understand, because Microsoft uses it in its own products and you may have use cases where it makes sense.

The Dream of ABAC

Let’s first touch briefly on the differences between role-based access control (RBAC) and attribute-based access control (ABAC). Attribute-based access control has been the dream of the security industry for as long as I can remember. RBAC has been the predominant authorization mechanism in the majority of applications over the years. The challenge with RBAC is that it has typically translated to basic group membership, where an application authorizes a user solely on whether or not the user is in a group. Access to these groups would typically come through some type of request for membership and implementation by a central governance team. Those processes have tended to be not super user friendly, and the access has tended to be very coarse-grained.

ABAC, meanwhile, promised more fine-grained access based upon attributes of the security principal, the resource, or whatever your mind can dream up. Sounds awesome right? Well it is, but it largely remained a dream in the mainstream world beyond a few attempts such as Windows Dynamic Access Control (before you comment, yeah, I get you may have had some cool apps doing this stuff years ago and that is awesome, but let’s stick with the majority). This began to change when the cloud came around with the introduction of more modern protocols and standards such as SAML, OIDC, and OAuth. These protocols provide more flexibility in how the identity provider packages attributes about the user in the token delivered to the service provider/resource provider/what have you.

When it came to the Azure cloud, Microsoft went the traditional RBAC path for much of the platform. User or group gets placed in an Azure RBAC role and the user(s) get access. I explain how this works in my earlier posts in this series. There is a bit of flexibility on the Entra ID side for the initial access token via Entra ID Conditional Access, but within the Azure realm it was RBAC all the way. This was the story for many years of Azure.

In 2021 Microsoft decided something more flexible was needed and introduced Azure ABAC. The world rejoiced… right? Nah, not really. While the introduction of ABAC was awesome, its scope of use was, and still is, extremely limited. As of the date of this blog, ABAC is only usable for Azure Storage blob and queue operations. All is not lost though; there are some great use cases for this feature, so it’s important to understand how it works.

How does ABAC work?

Alright, history lesson and complaining about limited scope aside, let’s now explore how the feature works.

ABAC is facilitated through additional properties on Azure RBAC Role Assignment resources. I’m going to assume you understand the ins and outs of role assignments. If you don’t, check out my prior post on the topic. In its simplest sense, an Azure RBAC role assignment is the association of a role to a security principal, granting that principal the permissions defined in the role over a particular scope of resources. As I’ve covered previously, role assignments are an Azure resource with a defined set of properties. The properties we care about for the scope of this discussion are conditionVersion and condition. The conditionVersion property will always have a value of 2.0 for now. The condition property is where we work our ABAC magic.
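
To make this concrete, below is a sketch of what a role assignment with a condition looks like. Treat it as illustrative only: the GUIDs are placeholders and the condition is a simple example in the style of the public documentation that restricts blob reads to a single container.

{
    "properties": {
        "roleDefinitionId": "/providers/Microsoft.Authorization/roleDefinitions/00000000-0000-0000-0000-000000000000",
        "principalId": "00000000-0000-0000-0000-000000000000",
        "principalType": "User",
        "conditionVersion": "2.0",
        "condition": "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'example-container'))"
    }
}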

The condition property is made up of a series of conditions, each of which consists of an action and one or more expressions. The logic for conditions is kinda weird, so I’m going to walk you through it using some of the examples from the documentation as well as a complex condition I threw together. First, let’s look at the general structure.

Structure of conditions used in ABAC
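
Stripped down to a skeleton, the structure looks something like this (everything in angle brackets is a placeholder):

(
    (
        !(ActionMatches{'<action>'})
    )
    OR
    (
        <attribute> <operator> <value>
        AND
        <attribute> <operator> <value>
    )
)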

In the above image you can see the basic building blocks of a condition. Looks super confusing and complicated right? I know it did to me at first. Thankfully, the kind souls who write the public documentation broke this down in a more friendly programming-like way.

Far more simple explanation of conditions
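
In pseudo-code terms (my paraphrase, not official syntax), a single condition boils down to:

# If the action being performed doesn't match the targeted action,
# this condition doesn't apply to the request, so allow.
if NOT (action matches targeted action):
    allow
# Otherwise, the expressions decide.
else if expressions evaluate to true:
    allow
else:
    do not grant access through this role assignment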

In each condition property we first have the action line, where the logic checks whether the action being performed by the security principal doesn’t match (note the exclamation point, which negates what’s in the parentheses) the action we’re applying the conditions to. You’ll commonly see a line like:

!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND !SubOperationMatches{'Blob.List'})

This line is saying: if the action isn’t blobs/read (a data plane call to read the contents of a blob), or it’s merely a listing operation (that’s the !SubOperationMatches{'Blob.List'} carve-out), then the line evaluates to true. If it evaluates to true, the access is allowed and the expressions are not evaluated any further.

After this line we have the expressions, which are only evaluated when the first line evaluates to false (which in the example I just covered would mean the security principal is trying to read the contents of a blob). The expressions support four categories of attributes, which Microsoft refers to as condition features; these are in various states of GA (general availability) and preview (refer to the documentation for those details). The four categories are:

  • Requests
  • Environment
  • Resource
  • Principal (security principal)

These four categories give you a ton of flexibility. Request covers the details of the request to storage, for example limiting a user to specific blob paths based on the path within the request. Environment can be used to limit the user to accessing the resource from a specific Private Endpoint or over Private Link in general (think defense-in-depth here). The resource feature exposes properties of the resource being accessed, the most flexible of which I find to be blob index tags. Lastly, we have the security principal, and this is where you can muck around with custom security attributes in Entra ID (a very cool feature if you haven’t touched it).
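
To give you a feel for each category, here are example expressions using attribute names from the public documentation (the values are made up, so treat these as sketches):

# Request: limit blob writes to a specific path prefix within the request
@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'uploads/*'

# Environment: require the request to arrive over Private Link
@Environment[isPrivateLink] BoolEquals true

# Resource: scope access to a specific container
@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'example-container'

# Principal: match a custom security attribute (attribute set 'organization', attribute 'accesslevel')
@Principal[Microsoft.Directory/CustomSecurityAttributes/Id:organization_accesslevel] StringEquals 'high'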

In a given condition we can have multiple expressions and within the condition property we can string together multiple conditions with AND and OR logic. I’m a big believer in going big or going home, so let’s take a look at a complex condition.

Diving into the Deep End

Let’s say I have a whole bunch of data I need to make available as blobs in an Azure Storage Account. I have a strict requirement to use a single storage account, and the blobs I’m going to store have different data classifications denoted by a blob index tag key named access_level. Blobs without this key are accessible by everyone, while blobs classified high, medium, or low are only accessible by users approved for the same or a higher access level (for example, a user with the high access level can access high, medium, and low data as well as data with no access level). Lastly, I have a requirement that data at the high access level can only be accessed during business hours.

I use a custom security attribute in Entra ID called accesslevel under an attribute set named organization to denote a user’s approved access level.

Here is how that policy would break down.

My first condition is built to allow users to read any blobs that don’t have the access_level tag.

# Condition that allows users within scope of the assignment access to documents that do not have an access level tag
(
  (
    # If the action being performed doesn't match blobs/read then result in true and allow access
    !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND !SubOperationMatches{'Blob.List'})
  )
  OR 
  (
    # If the blob doesn't have a blob index tag with a key of access_level then allow access
    NOT @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags&$keys$&] ForAnyOfAnyValues:StringEquals {'access_level'}
  )
)

If the blob does have an access_level tag, I want to start incorporating my logic. The next condition I include allows users with the accesslevel security attribute set to high to read blobs with a blob index tag of access_level equal to low or medium. I also allow them to read blobs tagged high if it’s between 9AM and 5PM EST.

# Condition that allows users within scope of the assignment to access medium and low tagged data if they have a custom 
# security attribute of accesslevel set to high. High data can also be read within working hours
OR
(
 (
   # If the action being performed doesn't match blobs/read then result in true and allow access
   !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND !SubOperationMatches{'Blob.List'})
 )
 OR 
 (
   # If the blob has an index tag of access_level with a value of medium or low allow the user access if they have a custom security
   # attribute of organization_accesslevel set to high
   @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:access_level<$key_case_sensitive$>] ForAnyOfAnyValues:StringEquals {'medium', 'low'}
   AND
   @Principal[Microsoft.Directory/CustomSecurityAttributes/Id:organization_accesslevel] StringEquals 'high'
 )
 OR
 (
   # If the blob has an index tag of access_level with a value of high allow the user access if they have a custom security
   # attribute of organization_accesslevel set to high and it's within working hours
   @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:access_level<$key_case_sensitive$>] ForAnyOfAnyValues:StringEquals {'high'}
   AND
   @Principal[Microsoft.Directory/CustomSecurityAttributes/Id:organization_accesslevel] StringEquals 'high'
   AND
   # Note: UtcNow compares against fixed datetimes, so this range approximates business hours for a single day
   @Environment[UtcNow] DateTimeGreaterThan '2025-06-09T12:00:00.0Z'
   AND
   @Environment[UtcNow] DateTimeLessThan '2025-06-09T21:00:00.0Z'
 )
)

Next up is users with medium access level. These users are granted access to data tagged medium or low.

# Condition that allows users within scope of the assignment to access medium and low tagged data if they have a custom 
# security attribute of accesslevel set to medium
OR
(
  (
    # If the action being performed doesn't match blobs/read then result in true and allow access
    !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND !SubOperationMatches{'Blob.List'})
  )
  OR 
  (
    # If the blob has an index tag of access_level with a value of medium or low allow the user access if they have a custom security
    # attribute of organization_accesslevel set to medium
    @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:access_level<$key_case_sensitive$>] ForAnyOfAnyValues:StringEquals {'medium', 'low'}
    AND
    @Principal[Microsoft.Directory/CustomSecurityAttributes/Id:organization_accesslevel] StringEquals 'medium'
 )
)

Finally, I allow users with low access level to access data tagged as low.

# Condition that allows users within scope of the assignment to access low tagged data if they have a custom 
# security attribute of accesslevel set to low
OR
(
 (
   # If the action being performed doesn't match blobs/read then result in true and allow access
   !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND !SubOperationMatches{'Blob.List'})
 )
 OR 
 (
   # If the blob has an index tag of access_level with a value of low allow the user access if they have a custom security
   # attribute of organization_accesslevel set to low
   @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:access_level<$key_case_sensitive$>] ForAnyOfAnyValues:StringEquals {'low'}
   AND
   @Principal[Microsoft.Directory/CustomSecurityAttributes/Id:organization_accesslevel] StringEquals 'low'
 )
)

Notice how I separated each condition using OR. If a condition resolves to false, the next condition is evaluated until access is granted or all conditions are exhausted. Neat right?
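
Stitched together, the full condition property is just those four conditions chained with OR:

(condition 1: blob has no access_level tag)
OR
(condition 2: users with the high access level)
OR
(condition 3: users with the medium access level)
OR
(condition 4: users with the low access level)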

Summing it up

So why should you care about this if its use case is so limited? Well, you should care because that is ABAC’s use case today, and it may well be expanded in the future. Furthermore, ABAC allows you to be more granular in how you grant access to data in Azure Storage (again, blob or queue only). You likely have use cases where this can provide another layer of security to further constrain a security principal’s access. You’ll also see these conditions used in Microsoft’s products such as AI Foundry.

The other reason it’s helpful to understand the condition language is that conditions are expanding into other areas such as Azure RBAC Delegation (which, if you aren’t using, you should be). While the language can be complex, it does make sense once you muck around with it a bit.

A final bit of guidance here: don’t try to write conditions by hand. Use the visual builder in the Azure Portal as seen below. It will help you get some basic conditions in place that you can then further modify directly via the code view.

Azure Portal Condition Builder

Next time you’re locking down an Azure storage account, think about whether or not you can further restrict humans and non-humans alike based on the attributes discussed today. The main places I’ve seen this used are for user profiles, further restricting user access to specific subsets of data (similar to the one I walked through above), or even adding an additional layer of network security baked directly into the role assignment itself.

See you next post!

AI Foundry – Identity, Authentication and Authorization

This is part of my series on AI Foundry.

Updates:

  • 3/17/2025 – Updated diagrams to include new identities and RBAC roles that are recommended as a minimum

Yes, I’m going to re-use the outline from my Azure OpenAI series. You wanna fight about it? This means we’re going to now talk about one of the most important (as per usual) and complicated (oh so complicated) topics in AI Foundry: identity, authentication, and authorization. If you haven’t read my prior two posts, you should take a few minutes and read through them. They’ll give you the baseline you’ll need to get the most out of this post. So pour your coffee, break out the metal clips to keep your eyes open Clockwork Orange-style, and prepare for a dip into the many ways identity, authN, and authZ are handled within the service.

As I covered in my first post, Foundry is made up of a ton of different services. Each of these services plays a part in features within Foundry, some may support multiple forms of authentication, and most will be accessed by the many types of identities used within the product. Understanding how each identity is used will be critical to getting authorization right. Missing Azure RBAC role assignments are the most common misconfiguration (right above networking, which is also complicated, as we’ll see in a future post).

Azure AI Foundry Components

Let’s start first with identity. There will generally be four types of identities used in AI Foundry. These identities will be a combination of human identities and non-human identities. Your humans will be your AI Engineers, developers, and central IT and will use their Entra ID user identities. Your non-humans will include the AI Foundry hub, project, and compute you provision for different purposes. In general, identities are used in the following way (this is not inclusive of all things, just the ones I’ve noticed):

  • Humans
    • Entra ID Users
      • Actions within Azure Portal
      • Actions within AI Foundry Studio
        • Running a prompt flow from the GUI
        • Using the Chat Playground to send prompts to an LLM
        • Running the Chat-With-Your-Data workflow within the Chat Playground
        • Creating a new project within a hub
      • Actions using Azure CLI such as sending an inference to a managed online endpoint that supports Entra ID authentication
  • Non-Humans
    • AI Foundry Hub Managed Identity
      • Accessing the Azure Key Vault associated with the Foundry instance to create secrets or pull secrets when AI Foundry connections are created using credentials versus Entra ID
      • Modifying properties of the default Azure Storage Account such as setting CORS policies
      • Creating managed private endpoints for hub resources if a managed virtual network is used
    • AI Foundry Project Managed Identity
      • Accessing the Azure Key Vault associated with the Foundry instance to create secrets or pull secrets when AI Foundry connections are created using credentials versus Entra ID
      • Creating blob containers for the project where project artifacts such as logs and metrics are stored
      • Creating the file share for the project where project artifacts such as user-created Prompt Flow files are stored
    • Compute
      • Pulling container image from Azure Container Registry when deploying prompt flows that require custom environments
      • Accessing the project blob container in the default storage account to pull data needed to boot
      • Much much more in this category. Really depends on what you’re doing

Alright, so you understand the identities that will be used and you have a general idea of how they’ll be used to perform different tasks within the Foundry ecosystem. Let’s now talk authentication.

The many identities of AI Foundry

Authentication in Foundry isn’t too complicated (in comparison to identity and authorization). Authenticating to the Azure Portal and the Foundry Studio is always going to be Entra ID-based. Authentication to other Azure resources from Foundry is where it gets interesting. As I covered in my prior post, Foundry will typically support two methods of authentication: Entra ID and API key (or credentials, as it’s often referred to in Foundry). If at all possible, you’ll want to lean into Entra ID-based authentication whenever you access a resource from Foundry. As we’ll see in the next section on authorization, this has benefits. Besides authorization, you’ll also get auditability, because the logs will show the actual security principal that accessed the resource.

If you opt to use credential-based authentication for your connections to Azure resources, you’ll lose out in a few different areas. When credential-based authentication is used, users access connected resources within Foundry using the keys stored in the Foundry connection object. This means the user assumes whatever permissions the key has (which is typically all data-plane permissions, but could be more restrictive in instances like a SAS token). Besides the authorization challenges, you’ll also lose out on traceability. AI Foundry (and the underlying Azure Machine Learning) has some authorization (via Azure RBAC roles) to control access to connections, but very little in the way of auditing who exercised which connection when. For these reasons, you should use Entra ID where possible.

Ready for authorization? Nah, not yet. Before we get into authorization, it’s important to understand that these identities can be used in two general ways: directly or indirectly (on-behalf-of). For example, let’s say you run a Prompt Flow from the AI Foundry interface: while the code runs on serverless compute provisioned in a Microsoft-managed network (more on that in a future post), the identity context used to access downstream resources is actually yours. Now if you deploy that same prompt flow to a managed online endpoint, the code will run on that endpoint and use the managed identity assigned to the compute instance. Not so simple is it?

So how do you know which identity will be used? Observe my general guidance from up above: if you’re running things from the GUI, it’s likely your identity; if you’re deploying stuff to compute, it’s likely the identity associated with the compute. There are exceptions to the rule. For example, when you attempt to upload data for fine-tuning or use the on-your-own-data feature in the Chat Playground and your default storage account is behind a private endpoint, your identity is used to access the data, but the managed identity associated with the project is used to access the private endpoint resource. Why does it need access to the private endpoint? I got no idea, it just does. If you miss that role assignment, good luck to you poor soul, because you’re going to have a hell of a time troubleshooting it.

Another interesting deviation is the Chat Playground’s Chat With Your Data feature. If you opt to add your data and build the index directly within AI Foundry, there will be mixed usage of the user identity, the AI Search managed identity (which communicates with the embedding models deployed in the AI Services or Azure OpenAI instance to create the vector representations of the chunks in the index), and the AI Services or Azure OpenAI managed identity (which creates the index and data sources in AI Search). It can get very complex.

The image below represents most of the flows you’ll come across.

The many AI Foundry authentication flows and identity patterns

Okay, now authorization? Yes, authorization. I’m not one for bullshitting, so I’ll just tell you up front authorization in Foundry can be hard. It gets even harder when you lock down networking, because often the error messages you receive are the same for blocked traffic and failed authorization. The complexities of authorization are exactly why I spent so much time explaining identity and authentication to you. I wish I could tell you every permission in every scenario, but it would take many, many posts to do that. Instead, I’d advise you to do what I sometimes fail to do myself: RTFM (go ahead and Google that). This particular product group has made strong efforts to document required permissions, so your first stop should always be the Foundry public documentation. In some instances, you will also need to consult the Azure Machine Learning documentation (again, this is built on top of AML), because the documentation sometimes assumes you’ll know a feature is inherited from AML (yeah, not fair, but it’s reality).

In general, at an absolute minimum, the permissions assigned to the identities below will get you started as of the date of this post (updated 3/17/2025).

As I covered in my prior posts, the AI Foundry Hub can use either a system-assigned or user-assigned managed identity. You won’t hear me say this often, but just use the system-assigned managed identity for the hub if you can. The required permissions will be automatically assigned and it will be one less thing for you to worry about. The permissions listed above should work for a user-assigned managed identity as well (this is on my backburner to re-validate).

A project will always use a system-assigned managed identity. The only permission listed above that you’ll need to manually grant is Reader over the private endpoint for the default storage account, and that’s only required if you’re using a private endpoint for your default storage account. There may be additional permissions required by the project depending on the activities you’re performing and the data you’re accessing.

On the user side, the permissions above will put you in a good place for your typical developer or AI engineer to use most of the features within Foundry. If you’re interacting with other resources (such as an AI Search index when using the on-your-own-data feature), you’ll need to ensure the user is granted appropriate permissions to those resources as well (typically Search Service Contributor on the management plane to list and create indexes, and Search Index Data Contributor on the data plane to create and view records within an index). If your user is fine-tuning a model deployed within the Azure OpenAI or AI Services instance, they may additionally need the Azure OpenAI Service Contributor role (to upload the file via Foundry for fine-tuning). Yeah, lots of scenarios and lots of varying permissions for the user, but that covers the most common ones I’ve run into.

Lastly, we have the compute identities. There is no standard here. If you’ve deployed a prompt flow to a managed online endpoint, the compute will need the permissions to connect to the resources behind the connections (again, assuming Entra ID is configured for the connection; if using credentials, Azure Machine Learning Workspace Secrets Reader on the project is likely sufficient). Using a prompt flow that requires a custom environment may require an image to be pushed to the Azure Container Registry, which the compute will pull, so it will need the AcrPull RBAC role assignment on the registry.
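
As a sketch, that AcrPull assignment on the registry for the compute’s managed identity would look something like the below. I believe the GUID here is the built-in AcrPull role definition ID, but verify it against the built-in roles documentation; the principalId is a placeholder for the compute’s managed identity.

{
    "properties": {
        "roleDefinitionId": "/providers/Microsoft.Authorization/roleDefinitions/7f951dda-4ed3-4680-a7ca-43fe172d538d",
        "principalId": "00000000-0000-0000-0000-000000000000",
        "principalType": "ServicePrincipal"
    }
}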

Complicated right? What happens when stuff doesn’t work? Well, in that scenario you need to look at the logs (both the Azure Activity Log and diagnostic logging for the relevant service such as Blob, Search, OpenAI, and the like). That will tell you what the identity is failing to do (again, only if you’re using Entra ID for your connections) and help you figure out what needs to be added from a permissions perspective. If you’re using credentials for your connections, the most common issue is the default storage account having its storage access keys disabled.

Here are the key things I want you to take away from this:

  1. Know the identity being used. If you don’t know which identity is being used, you’ll never get authorization right. Use the downstream service logs if you’re unsure. Remember, management plane operations land in the Azure Activity Log and data plane operations in diagnostic logs.
  2. Use Entra ID authentication where possible. Yeah it will make your Azure RBAC a bit more challenging, but you can scope the access AND understand who the hell is doing what.
  3. RTFM where possible. Most of this is buried in the public documentation (sometimes you need to put on your Indiana Jones hat). Remember that if you don’t find it in Foundry documentation, look to Azure Machine Learning.
  4. Use the above information as a general guide to get the basic environment set up. You’ll build from that foundation.

Alrighty folks, your eyes are likely heavy. I hope this helps a few souls out there who are struggling with getting this product up and running. If you know me, you know I’m no fan boy, but this particular product is pretty damn awesome for getting us non-devs immediate value from generative AI. It may take some effort to get it running, but it’s worth it!

Thanks and see you next post!

Azure Authorization – Azure RBAC Delegation

This is part of my series on Azure Authorization.

  1. Azure Authorization – The Basics
  2. Azure Authorization – Azure RBAC Basics
  3. Azure Authorization – actions and notActions
  4. Azure Authorization – Resource Locks and Azure Policy denyActions
  5. Azure Authorization – Azure RBAC Delegation
  6. Azure Authorization – Azure ABAC (Attribute-based Access Control)

Hello again fellow geeks!

I typically avoid doing multiple posts in a month for sanity purposes, but quite possibly one of the most awesome Azure RBAC features has gone generally available under the radar. Azure RBAC Delegation is going to be one of those features that after reading this post, you will immediately go and implement. For those of you coming from AWS, this is going to be your IAM Permissions Boundary-like feature. It addresses one of the major risks to Azure’s authorization model and fills a big gap the platform has had in the authorization space.

Alright, enough hype let’s get to it.

Before I dive into the new feature, I encourage you to read through the prior posts in my Azure Authorization series. These posts will help you better understand how this feature fits into the bigger picture.

As I covered in my Azure RBAC Basics post, only security principals with sufficient permissions in the Microsoft.Authorization resource provider can create new Azure RBAC Role Assignments. By default, once a security principal is granted that permission, it can assign any Azure RBAC Role to itself or any other security principal within the Entra ID tenant at its immediate scope of access and all child scopes, due to Azure RBAC’s inheritance model (tenant root -> management group -> subscription -> resource group -> resource). This means a human assigned a role with sufficient permissions (such as an IAM support team) could accidentally or maliciously assign another privileged role to themselves or someone else and wreak havoc. Makes for a rough late night for anyone on-call.

While the human risk exists, the greater risk is with non-human identities. When an organization passes beyond the click-ops and imperative (az cli, PowerShell, etc) stage and moves on to the declarative stage with IaC (infrastructure-as-code), those IaC templates (ARM, Bicep, Terraform) are delivered through a CI/CD (continuous integration / continuous delivery) pipeline. To deploy the code to the cloud platform, the pipeline’s compute needs an identity to authenticate to the cloud management plane. In Azure, this is accomplished through a service principal or managed identity. That identity must be granted the specific permissions it needs, which is done through Azure RBAC Role assignments.

In the ideal world, as much as possible is put through the pipeline, including role assignments. This means the pipeline needs to be able to create Azure RBAC Role assignments, which means it needs permissions for the Microsoft.Authorization resource provider (or relevant built-in roles, with Owner being common).

To mitigate the risk of one giant blast radius with a single pipeline, organizations will often create multiple pipelines: separate pipelines for production and non-production, pipelines for platform components (networking, logging, etc.), and others for the workload (possibly one for workload components such as an Event Hub and a separate pipeline for code). Pipelines will be given separate security principals with permissions at different scopes, with Central IT typically owning pipelines at higher scopes (management groups) and business units owning pipelines at lower scopes (subscription or resource group).

Example of multiple pipelines and identities

At the end of the day you end up with lots of pipelines and lots of non-humans that hold the Owner role at a given scope. This multiplies the risk of any one of those pipeline identities being misused to grant someone or something permissions beyond what it needs. Organizations typically mitigate this through automated and manual gates, which can get incredibly complex at scale.

This is where Azure RBAC Delegation really shines. It allows you to wrap restrictions around how a security principal can exercise its Microsoft.Authorization permissions. These restrictions can include:

  • Restricting to a specific set of Azure RBAC Roles
  • Restricting to a specific security principal type (user, service principal, group)
  • Restricting whether it can create new assignments, update existing assignments, or delete assignments
  • Restricting it to a specific set of security principals (specific set of groups, users, etc)

So how does it do it? Well, if you read my prior post, you’ll remember I mentioned the new property included in Azure RBAC Role assignments called condition. RBAC Delegation uses this property to wrap those restrictions around the assignment. Let’s look at an example using one of the new built-in roles Microsoft has introduced, called Role Based Access Control Administrator.

Let’s take a look at the role definition first.

{
    "id": "/subscriptions/b3b7aae7-c6c1-4b3d-bf0f-5cd4ca6b190b/providers/Microsoft.Authorization/roleDefinitions/f58310d9-a9f6-439a-9e8d-f62e7b41a168",
    "properties": {
        "roleName": "Role Based Access Control Administrator",
        "description": "Manage access to Azure resources by assigning roles using Azure RBAC. This role does not allow you to manage access using other ways, such as Azure Policy.",
        "assignableScopes": [
            "/"
        ],
        "permissions": [
            {
                "actions": [
                    "Microsoft.Authorization/roleAssignments/write",
                    "Microsoft.Authorization/roleAssignments/delete",
                    "*/read",
                    "Microsoft.Support/*"
                ],
                "notActions": [],
                "dataActions": [],
                "notDataActions": []
            }
        ]
    }
}

In the above role definition, you can see that the role has been granted only the permissions necessary to create, update, and delete role assignments. This is more restrictive than Owner or User Access Administrator, which have a broader set of permissions in the Microsoft.Authorization resource provider. It makes a good candidate for a business unit pipeline role versus Owner, since business units shouldn’t need to manage Azure Policy or Azure RBAC Role Definitions. That responsibility should sit within Central IT’s scope.

You do not have to use this built-in role; you can certainly design your own. For example, the role above does not include the permissions to manage a resource lock, and this might be something you want the business unit to be able to manage. This feature is also supported for custom roles. In the example below, I’ve cloned the Role Based Access Control Administrator role but added additional permissions to manage resource locks.

{
    "id": "/subscriptions/b3b7aae7-c6c1-4b3d-bf0f-5cd4ca6b190b/providers/Microsoft.Authorization/roleDefinitions/dd681d1a-8358-4080-8a37-9ea46c90295c",
    "properties": {
        "roleName": "Privileged Test Role",
        "description": "This is a test role to demonstrate RBAC delegation",
        "assignableScopes": [
            "/subscriptions/b3b7aae7-c6c1-4b3d-bf0f-5cd4ca6b190b"
        ],
        "permissions": [
            {
                "actions": [
                    "Microsoft.Authorization/roleAssignments/write",
                    "Microsoft.Authorization/roleAssignments/delete",
                    "*/read",
                    "Microsoft.Support/*",
                    "Microsoft.Authorization/locks/read",
                    "Microsoft.Authorization/locks/write",
                    "Microsoft.Authorization/locks/delete"
                ],
                "notActions": [],
                "dataActions": [],
                "notDataActions": []
            }
        ]
    }
}

When I create a new role assignment for this custom role, I’m given the ability to associate the assignment with a set of conditions.

Adding conditions to a custom role

Let’s take an example where I want a security principal to be able to create new role assignments, but only for the built-in role of Virtual Machine Contributor and only for user principals. The condition on my role assignment would look like the below:

    (
        (
            !(ActionMatches{'Microsoft.Authorization/roleAssignments/write'})
        )
        OR 
        (
            @Request[Microsoft.Authorization/roleAssignments:RoleDefinitionId] ForAnyOfAnyValues:GuidEquals {9980e02c-c2be-4d73-94e8-173b1dc7cf3c}
            AND
            @Request[Microsoft.Authorization/roleAssignments:PrincipalType] StringEqualsIgnoreCase 'User'
        )
    )
    AND
    (
        (
            !(ActionMatches{'Microsoft.Authorization/roleAssignments/delete'})
        )
        OR 
        (
            @Request[Microsoft.Authorization/roleAssignments:PrincipalType] StringEqualsIgnoreCase 'User'
        )
    )

Yes, I know the conditional language is ugly as hell. Thankfully, you won’t have to write this yourself which I will demonstrate in a few. First, I want to walk you through the conditional language.

Azure RBAC conditional language

When using RBAC Delegation, you can associate one or more conditions with an Azure RBAC role assignment. Like most conditional logic in programming, you can combine conditions with AND and OR. Within each condition you have an action and an expression. When the condition is evaluated, the action is first checked for a match. In the example above I have !(ActionMatches{'Microsoft.Authorization/roleAssignments/write'}), which matches any action that isn’t a role assignment write; in that case the condition evaluates to true and the access is allowed without evaluating the expressions below. If the action is a write, the action match evaluates to false and the expressions are then evaluated. In the example above I have two expressions. The first checks whether the role assignment I’m requesting to create is for the role definition ID of Virtual Machine Contributor. The second checks whether the principal type in the role assignment is of type user. If either of these evaluates to false, access is denied. If both evaluate to true, evaluation moves on to the next condition, which limits the security principal to deleting role assignments assigned to users.

No, you do not need to learn this syntax. Microsoft has been kind enough to provide a GUI-based conditions tool which you can use to build your conditions and view them as code you can include in your templates.

GUI-based condition builder

Pretty cool right? The public documentation walks through a number of different scenarios where you can use this, so I’d encourage you to read it to spur ideas beyond the pipeline example I’ve given in this post. However, the real value of this feature is stricter control over how those pipeline identities can affect authorization.
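
To show what the copied output looks like in a template, here’s roughly what the role assignment from the walkthrough above would look like with the condition dropped in. The condition string is collapsed to one line, the principalId is a placeholder, and the role definition ID is the Role Based Access Control Administrator role shown earlier:

{
    "properties": {
        "roleDefinitionId": "/providers/Microsoft.Authorization/roleDefinitions/f58310d9-a9f6-439a-9e8d-f62e7b41a168",
        "principalId": "00000000-0000-0000-0000-000000000000",
        "principalType": "ServicePrincipal",
        "conditionVersion": "2.0",
        "condition": "((!(ActionMatches{'Microsoft.Authorization/roleAssignments/write'})) OR (@Request[Microsoft.Authorization/roleAssignments:RoleDefinitionId] ForAnyOfAnyValues:GuidEquals {9980e02c-c2be-4d73-94e8-173b1dc7cf3c} AND @Request[Microsoft.Authorization/roleAssignments:PrincipalType] StringEqualsIgnoreCase 'User')) AND ((!(ActionMatches{'Microsoft.Authorization/roleAssignments/delete'})) OR (@Request[Microsoft.Authorization/roleAssignments:PrincipalType] StringEqualsIgnoreCase 'User'))"
    }
}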

So what are your takeaways for this post?

  • Get this feature implemented YESTERDAY. This is minimal overhead with massive security return.
  • Use the GUI-based condition builder to build your conditions and then copy the JSON into your code.
  • Take some time to learn the conditional syntax. It’s used in other places in Azure RBAC and will likely continue to grow in usage.
  • Start off using the built-in Role Based Access Control Administrator role. If your business units need more than what is in there (such as managing resource locks), clone it and add those permissions.

Well folks, I hope you got some value out of this post. Add a to-do to get this in place in your Azure estate as soon as possible!

Azure Authorization – actions and notActions

This is part of my series on Azure Authorization.

  1. Azure Authorization – The Basics
  2. Azure Authorization – Azure RBAC Basics
  3. Azure Authorization – actions and notActions
  4. Azure Authorization – Resource Locks and Azure Policy denyActions
  5. Azure Authorization – Azure RBAC Delegation
  6. Azure Authorization – Azure ABAC (Attribute-based Access Control)

Hello again! This post will be a continuation of my series on Azure authorization where I’ll be covering how actions and notActions (and dataActions and notDataActions) work together. I will cover what they do and, more importantly, what they don’t do. If there is one place I see customers make mistakes when creating custom Azure RBAC roles, it’s with the use of notActions (and notDataActions).

In my last post I explained the structure of an Azure RBAC Role Definition. As a quick refresher, each role definition has a permissions section which consists of subsections named actions, notActions, dataActions, and notDataActions. Actions are the management plane permissions (operations on the resource) the role is granted and dataActions are the data plane actions (operations on the data stored within the resource) the role is granted.

Sample Azure RBAC role definition for a built-in role

So what about notActions and notDataActions? The notActions section subtracts permissions from the actions section, while notDataActions subtracts permissions from the dataActions section. Notice I used the word subtract and not deny. notActions and notDataActions ARE NOT an explicit deny but rather a way to subtract a specific permission or set of permissions. If a user receives a permission from another Azure RBAC role assignment whose role grants it, the user will have that permission. Azure’s authorization engine evaluates all of the permissions a user has across all Azure RBAC role assignments relevant to the resource the user is trying to perform an operation on. We’ll see this in more detail in a later post. Confusing right? Let’s explore a use case and do a compare and contrast with an AWS IAM Policy to drive this point home.
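
Before we get to that comparison, here’s the whole model boiled down to one line (my paraphrase, not official syntax):

effectivePermissions = UNION across all relevant role assignments of (actions - notActions)

There is no deny anywhere in that math, which is exactly what the demo below is going to show.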

Example AWS IAM Role and Azure RBAC Role Definition

In the AWS IAM Policy above (on the left side) I’ve authored a custom IAM Policy that includes an allow rule allowing all actions on Amazon CloudWatch Logs (AWS’s parallel to a Log Analytics Workspace). The policy also includes a deny rule which explicitly denies the delete action. Finally, it contains a condition that the CloudWatch Logs instance must be tagged environment=production. Any IAM Users, Groups, or Roles assigned the policy will have the ability to perform all operations on a CloudWatch Logs instance EXCEPT for the delete action if the instance is tagged environment=production. This is a very common way to design AWS IAM Policies: grant all permissions, then strip out the risky or destructive ones.

What if you wanted to do something similar in Azure? You might design an Azure RBAC role definition like what is pictured on the right. Here the actions section grants all permissions on the Log Analytics resource provider and the notActions section subtracts purge and delete. Notice that while Azure RBAC role definitions support conditions as well, today they cannot be used in the same way they are in AWS; you’d instead need to use another Azure feature such as a resource lock or an Azure Policy denyAction. I’ll cover those features and the capabilities of Azure RBAC definition and assignment conditions in future posts in this series. For the purposes of this post, let’s ignore that condition. Would an assignment of this role definition prevent a user from deleting a Log Analytics Workspace? Let’s try it out and see.

Here we have a user named Carl Carlson. An Azure RBAC role assignment has been created at the resource group scope for the role definition described below. This role definition includes all permissions for a Log Analytics Workspace but removes the delete action.

{
    "id": "/subscriptions/XXXXXXXX-c6c1-4b3d-bf0f-5cd4ca6b190b/providers/Microsoft.Authorization/roleDefinitions/a21541c6-401d-48b7-9149-7c3de8db2adc",
    "properties": {
        "roleName": "Custom - notActions Demo - Remove action",
        "description": "Custom role that demonstrates notActions. This role uses a notAction to remove the delete action from a Log Analytics Workspace.",
        "assignableScopes": [
            "/subscriptions/XXXXXXXX-c6c1-4b3d-bf0f-5cd4ca6b190b"
        ],
        "permissions": [
            {
                "actions": [
                    "Microsoft.OperationalInsights/*"
                ],
                "notActions": [
                    "Microsoft.OperationalInsights/workspaces/delete"
                ],
                "dataActions": [],
                "notDataActions": []
            }
        ]
    }
}

When Carl attempts to delete the Log Analytics Workspace, he receives an error stating that he does not have authorization to perform the action. This demonstrates that the delete action was indeed subtracted from the actions. Let me now drive home why this is not a deny.

Unauthorized error message

I’ll now create an additional Azure RBAC role assignment at the resource group scope for the custom role described in the definition below. This role definition includes the delete action that the other role definition subtracted.

{
    "id": "/subscriptions/b3b7aae7-c6c1-4b3d-bf0f-5cd4ca6b190b/providers/Microsoft.Authorization/roleDefinitions/bed940de-a64b-4601-bd47-651182f9f3e1",
    "properties": {
        "roleName": "Custom - notActions Demo - Add Action",
        "description": "Custom role that demonstrates notActions. This role grants the delete action from a Log Analytics Workspace.",
        "assignableScopes": [
            "/subscriptions/b3b7aae7-c6c1-4b3d-bf0f-5cd4ca6b190b"
        ],
        "permissions": [
            {
                "actions": [
                    "Microsoft.OperationalInsights/workspaces/delete"
                ],
                "notActions": [],
                "dataActions": [],
                "notDataActions": []
            }
        ]
    }
}

This time when Carl attempts to delete the Log Analytics Workspace, he is successful. This is because the action subtracted by the first role has been added back by the second role: the Azure RBAC authorization engine looks at all permissions cumulatively.

Successful deletion of resource due to cumulative permissions

So what do I want you to walk away with from this post? I want you to walk away understanding that notActions and notDataActions ARE NOT explicit denies; they simply subtract a permission or set of permissions from actions or dataActions. As of today, you cannot design your roles the same way you would in AWS (if you’re coming from there).

The next question bubbling in your mind is probably: what tools does Microsoft provide to address this behavior of notActions and notDataActions? The answer is there are a lot of them, including resource locks, Azure Policy denyActions, and denyAssignments. Each of these tools has benefits and considerations to its usage. Thankfully for you, I’ll be including a post on each of these features and some of the more advanced capabilities working their way to general availability.

See you next post!