6/7/2023 Update – Added clarification that this pattern only works with Azure AD authentication to the AOAI instance.
Another Azure OpenAI Service post? Why not? I gotta justify the costs of maintaining the blog somehow!
The demand for the AOAI (Azure OpenAI Service) is absolutely insane. I don’t think I can compare the customer excitement over the service to any other service I’ve seen launch during my time working at cloud providers. With that demand comes the challenge to the cloud service provider of ensuring there is availability of the service for all the customers that want it. In order to do that, Microsoft has placed limits on the number of tokens/minute and requests/minute that can be made to a specific AOAI instance. Many customers are hitting these limits when moving into production. While there is a path for the customer to get the limits raised by putting in a request to their Microsoft account team, this process can take time and there is no guarantee the request can or will be fulfilled.
What can customers do to work around this problem? You need to spin up more AOAI instances. At the time of this writing you can create 3 instances per region per subscription. Creating more instances introduces the new problem of distributing traffic across those AOAI instances. There are a few ways you could do this, including having the developer code the logic into their application (yuck) or providing the developer a single endpoint which does the load balancing behind the scenes. The latter solution is where you want to live. Thankfully, this can be done really easily with a piece of Azure infrastructure you are likely already using with AOAI. That piece of infrastructure is APIM (Azure API Management).
Sample AOAI Architecture
As I’ve covered in my posts on AOAI and APIM and my granular chargebacks in AOAI, APIM provides a ton of value to the AOAI pattern by providing a gate between the application and the AOAI instance to inspect and act on the request and response. It can be used to enforce Azure AD authentication, provide enhanced security logging, and capture information needed for internal chargebacks. Each of these enhancements is provided through APIM’s custom policy language.
APIM and AOAI Flow
By placing APIM into the mix and using a simple APIM policy, we can introduce basic round robin load balancing. Let’s take a deeper look at this policy.
<!-- This policy randomly routes (load balances) to one of the two backends -->
<!-- Backend URLs are assumed to be stored in backend-url-1 and backend-url-2 named values (fka properties), but can be provided inline as well -->
<policies>
    <inbound>
        <base />
        <set-variable name="urlId" value="@(new Random(context.RequestId.GetHashCode()).Next(1, 3))" />
        <choose>
            <when condition="@(context.Variables.GetValueOrDefault<int>("urlId") == 1)">
                <set-backend-service base-url="{{backend-url-1}}" />
            </when>
            <when condition="@(context.Variables.GetValueOrDefault<int>("urlId") == 2)">
                <set-backend-service base-url="{{backend-url-2}}" />
            </when>
            <otherwise>
                <!-- Should never happen, but you never know ;) -->
                <return-response>
                    <set-status code="500" reason="InternalServerError" />
                    <set-header name="Microsoft-Azure-Api-Management-Correlation-Id" exists-action="override">
                        <value>@{return Guid.NewGuid().ToString();}</value>
                    </set-header>
                    <set-body>A gateway-related error occurred while processing the request.</set-body>
                </return-response>
            </otherwise>
        </choose>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>
In the policy above a random number is generated using the Next method that is greater than or equal to 1 and less than 3 (the upper bound is exclusive). The application’s request is sent along to one of the two backends based upon that number. You could add additional backends by upping the upper bound in the Next method and adding another when condition, as sketched below. Pretty awesome right?
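For example, here is a hedged sketch of what a third backend could look like (backend-url-3 would be a named value you create yourself):
<!-- Bump the exclusive upper bound of Next from 3 to 4 so urlId can be 1, 2, or 3 -->
<set-variable name="urlId" value="@(new Random(context.RequestId.GetHashCode()).Next(1, 4))" />
<!-- Then add a third when element inside the existing choose element -->
<when condition="@(context.Variables.GetValueOrDefault<int>("urlId") == 3)">
    <set-backend-service base-url="{{backend-url-3}}" />
</when>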
Before you ask, no you do not need a health probe to monitor a cloud service provider managed service. Please don’t make your life difficult by introducing an Application Gateway behind the APIM instance in front of the AOAI instance because Application Gateway allows for health probes and more complex load balancing. All you’re doing is paying Microsoft more money, making your operations team’s life miserable, and adding more latency. Ensuring the service is available and healthy is on the cloud service provider, not you.
But Matt, what about taking an AOAI instance out of the pool if it begins throttling traffic? Again, no, you do not need to do this. Eventually this APIM-as-a-simple-load-balancer pattern will not be necessary when the AOAI service is more mature. When that happens, your applications consuming the service will need to be built to handle throttling. Developers are familiar with handling throttling in their application code; make that their responsibility.
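As a purely illustrative sketch of what handling throttling could look like with the OpenAI Python SDK (the retry count and delays here are arbitrary):
import time
import openai

def complete_with_retry(engine, prompt, max_retries=5):
    # Retry on HTTP 429 (throttling) with exponential backoff: 1s, 2s, 4s, ...
    for attempt in range(max_retries):
        try:
            return openai.Completion.create(engine=engine, prompt=prompt)
        except openai.error.RateLimitError:
            if attempt == max_retries - 1:
                raise
            time.sleep(2 ** attempt)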
Well folks, that’s it for this short and sweet post. Let’s summarize what we learned:
This pattern requires Azure AD authentication to AOAI. API keys will not work because each AOAI instance has different API keys.
You may hit the requests/minute and tokens/minute limits of an AOAI instance.
You can request higher limits but the request takes time to be approved.
You can create multiple instances of AOAI to get around the limits within a specific instance.
APIM can provide simple round-robin load balancing across multiple instances of AOAI.
You DO NOT need anything more complicated than round-robin load balancing. This is a temporary solution that you will eventually phase out. Don’t make it more complicated than it needs to be.
DO NOT introduce an Application Gateway behind APIM unless you like paying Microsoft more money.
The fun with the new Azure OpenAI Service continues! I’ve been lucky enough to have been tapped to help a number of Microsoft financial services customers with getting the Azure OpenAI Service in place with the appropriate infrastructure and security controls. In the process, I get to learn from a ton of smart people in the AI space. It’s truly been one of the highlights of my 20-year career.
Over the past few weeks I’ve been posting about what I’ve learned, and today I’m going to continue with that. In my first post on the service I gave a high level overview of the security controls Microsoft makes available to the customer to secure their instance of Azure OpenAI Service. In my second post I dove deep into how the service handles authentication and how Azure Active Directory (Azure AD) can be used to improve over the built-in API key-based authentication. Today I’m going to cover authorization and demonstrate how using Azure AD authentication lets you take advantage of granular authorization with Azure RBAC.
Let’s dig in!
As I covered in my last post, the Azure OpenAI Service has both a management plane and data plane. Each plane supports different types of authentication (the process of verifying the identity of a user, process, or device, often as a prerequisite to allowing access to resources in an information system) and authorization (the right or permission that is granted to a system entity to access a system resource). Operations such as swapping to a customer-managed key, enabling a private endpoint, or assigning a managed identity to the service occur within the management plane. Activities such as uploading training data or issuing a prompt to a model occur at the data plane. Each plane uses a different API endpoint. The image below will help you visualize the different planes.
Azure OpenAI Service Management and Data Planes
As illustrated above, authorization within the management plane is handled using Azure RBAC because authentication to that plane requires Azure AD-based authentication. Here we can use Azure RBAC to limit the operations a security principal (user, service principal, managed identity, or Azure Active Directory group, whether local or synchronized from on-premises) can perform at the management plane. For those of you coming from the AWS world, where the Azure OpenAI Service may be your first venture into Azure, Azure RBAC is Azure’s authorization solution. It’s similar to an AWS IAM Policy. Let’s take a look at a built-in RBAC role that a customer might grant a data scientist who will be using the Azure OpenAI Service.
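You can pull the definition yourself with az role definition list --name "Cognitive Services User". Below is a sketch of what comes back, trimmed to the properties discussed next; treat your tenant’s output as the authoritative version.
{
  "id": "/providers/Microsoft.Authorization/roleDefinitions/a97b65f3-24c7-4388-baec-2e87135dc908",
  "name": "a97b65f3-24c7-4388-baec-2e87135dc908",
  "roleName": "Cognitive Services User",
  "description": "Lets you read and list keys of Cognitive Services.",
  "assignableScopes": ["/"],
  "permissions": [
    {
      "actions": [
        "Microsoft.CognitiveServices/*/read",
        "Microsoft.CognitiveServices/accounts/listkeys/action",
        "Microsoft.Insights/alertRules/read",
        "Microsoft.Support/*"
      ],
      "notActions": [],
      "dataActions": ["Microsoft.CognitiveServices/*"],
      "notDataActions": []
    }
  ]
}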
Let’s briefly walk through each property. The id property is the unique resource name assigned to this role definition. Next up we have the name and description properties, which need no explanation. The assignableScopes property determines at which scope an RBAC role can be assigned. Typical scopes include management groups, subscriptions, resource groups, and resources. Built-in roles will always have an assignable scope of “/” which denotes the RBAC role can be assigned to any management group, subscription, resource group, or resource.
I’ll spend a bit of time on the permissions property. The permissions property contains a few different child properties including actions, notActions, dataActions, and notDataActions. The actions property lists the management plane operations allowed by the role while the dataActions property lists the data plane operations allowed by the role. The notActions and notDataActions properties are interesting in that they are used to strip permissions out of the actions or dataActions. For example, say you granted a user full data plane operations to an Azure Key Vault but didn’t want them to have the ability to purge deleted keys. You could do this by giving the user a dataAction of Microsoft.KeyVault/vaults/* and a notDataAction of Microsoft.KeyVault/vaults/keys/purge/action. Take note this is NOT an explicit deny. If the user gets this permission another way through assignment of a different RBAC role, the user will be able to perform the action. At this time, Azure does not have a generally available feature that allows for an explicit deny like AWS IAM, and what does exist in preview has an extremely narrow scope such that it isn’t very useful.
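Sketched as the permissions block of a custom role definition, that Key Vault example could look like the following; verify the exact operation strings against the Microsoft.KeyVault provider operations list before using them.
"permissions": [
  {
    "actions": [],
    "notActions": [],
    "dataActions": ["Microsoft.KeyVault/vaults/*"],
    "notDataActions": ["Microsoft.KeyVault/vaults/keys/purge/action"]
  }
]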
When you’re ready to assign a role to a security principal (user, service principal, managed identity, or Azure Active Directory group, whether local or synchronized from on-premises) you create what is called a role assignment. A role assignment associates an Azure RBAC role definition to a security principal and scope. For example, in the below image I’ve created an RBAC role assignment for the Cognitive Services User role at the resource group scope for the user Carl Carlson. This grants Carl the permission to perform the operations listed in the role definition above on any resource within the resource group, including the Azure OpenAI resource.
Azure RBAC Role Assignment
Scroll back and take a look at the role definition, notice any risky permission? If you noticed the permission Microsoft.CognitiveServices/accounts/listkeys/action (remember that the Azure OpenAI Service falls under the Cognitive Services umbrella), grab yourself a cookie. As I’ve covered previously, every instance of the Azure OpenAI Service comes with two API keys. These API keys allow for authentication to the instance at the data plane level, can’t be limited in what they can do, and are very difficult to ever track back to who used them. You will want to very tightly control access to those API keys, so be wary of who you give this role out to; you may want to instead create a similar custom role without this permission, as sketched below.
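Here’s a hedged sketch of creating such a custom role with the Azure CLI; the role name is made up, and you’d tighten the assignable scope and action list to your own needs.
az role definition create --role-definition '{
  "Name": "Cognitive Services User - No Key Access",
  "Description": "Cognitive Services User without the listkeys permission",
  "AssignableScopes": ["/subscriptions/<subscription-id>"],
  "Actions": ["Microsoft.CognitiveServices/*/read"],
  "NotActions": [],
  "DataActions": ["Microsoft.CognitiveServices/*"],
  "NotDataActions": []
}'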
There are two other roles specific to the Azure OpenAI Service: Cognitive Services OpenAI Contributor and Cognitive Services OpenAI User. Let’s look at the contributor role first.
The big difference here is this role doesn’t grant much at the management plane. While this role may seem appealing to give to a data scientist because it doesn’t allow access to the API keys, it also doesn’t allow access to the instance metrics. I’ll talk about this more when I do a post on logging and monitoring in the service, but access to the metrics is important for the data scientists. These metrics allow them to see how much volume they’re doing with the service, which can help them estimate costs and avoid hitting API limits.
Under the dataActions you can see this role allows all data plane operations. These operations include uploading training data for the creation of fine-tuned models. If you don’t want your users to have this access, then you can either strip out the permission Microsoft.CognitiveServices/accounts/OpenAI/files/import/action or grant the user the next role I’ll talk about.
One interesting thing to note is that while this role grants all data actions, which include data plane permissions around deployments, users with this role cannot deploy models to the instance. An error will be thrown that the user does not have the Microsoft.CognitiveServices/accounts/deployments/write permission. I’m not sure if this is by design, but if anyone has a workaround for it, let me know in the comments. It would seem that if you want the user to deploy a model, you’ll need to model a custom role after this role and add that permission.
The last role I’m going to cover is the Cognitive Services OpenAI User role. Let’s look at the permissions for this one.
Like the contributor role, this role is very limited with management plane permissions. At the data plane level, this role really allows for issuing prompts and not much else, which makes it a great fit for a non-human application identity assigned via service principal or managed identity. You don’t have to worry about a user exploiting this role to access training data you may have uploaded or making any modification to the Azure OpenAI Service instance.
Well folks that wraps this up. Let’s sum up what we’ve learned:
The Azure OpenAI Service supports fine-grained authorization through Azure RBAC at both the management plane and data plane when the security principal is authenticated through Azure AD.
Avoid using API keys where possible and leverage Azure RBAC for authorization. You can make it much more fine-grained, layer in the controls provided by Azure AD on top of it, and associate the usage of the service back to a user (kinda, as we’ll see in my post on logging).
Tightly control access to the API keys. I’d recommend any role you give to a data scientist or an application that you strip out the listkeys permissions.
I’d recommend creating a custom role for human users modeled after the Cognitive Services User role but without the listkeys permission. This will grant the user access to the full data plane and allow access to management plane pieces such as metrics. You can optionally be granular with your dataActions and leave out the files permissions to prevent human users from uploading training data.
I’d recommend using the built-in Cognitive Services OpenAI User role for service principals and managed identities assigned to applications. It grants only the permissions these applications are likely going to need and nothing more.
I’d avoid using notActions and notDataActions since it’s not an explicit deny and it’s very difficult to determine an effective user’s access in Azure without another tool like Entra Permissions Management.
Well folks, I hope this post has helped you better understand authorization in the service and how you could potentially craft it to align with least privilege.
Updated 4/3/2023 with simpler way to authenticate with Azure AD via Python SDK
Hello again!
Days and nights have been busy diving deeper into the AI landscape. I’ve been reading a great book by Tom Taulli called Artificial Intelligence Basics: A Non-Technical Introduction. It’s been a huge help in getting down the vocabulary and understanding the background to the technology from the 1950s on. In combination with the book, I’ve been messing around a lot with Azure’s OpenAI Service and looking closely at the infrastructure and security aspects of the service.
In my last post I covered the controls available to customers to secure their specific instance of the service. I noted that authentication to the service could be accomplished using Azure Active Directory (AAD) authentication. In this post I’m going to take a deeper look at that. Be ready to put your geek hat on because this post will be getting down and dirty into the code and HTTP transactions. Let’s get to it!
Before I get into the details of how the service supports AAD authentication, I want to go over the concepts of the management plane and data plane. Think of the management plane as administration of the resource and the data plane as administration of the data hosted within the resource. Many services in Azure have separate management planes and data planes. One such service is Azure Storage, which just so happens to have similarities with authentication to the OpenAI Service.
When a customer creates an Azure Storage Account they do this through interaction with the management plane, which is reached through the ARM API hosted behind the management.azure.com endpoint. They must authenticate against AAD to get an access token to access the API. Authorization via Azure RBAC then takes place to validate the user, managed identity, or service principal has permissions on the resource. Once the storage account is created, the customer could modify the encryption key from a platform-managed key (PMK, aka a key managed by Microsoft) to a customer-managed key (CMK), enable soft delete, or enable network controls such as the storage firewall. These are all operations against the resource.
Once the customer is ready to upload blob data to the storage account, they will do this through a data plane operation. This is done through the Blob Service API. This API is hosted behind the blob.core.windows.net endpoint and operations include creation of a blob or deletion of a blob. To interact with this API the customer has two means of authentication. The first method is the older of the two and involves the use of static keys called storage account access keys. Every storage account gets two of these keys when it is provisioned. Used directly, these keys grant full access to all operations and all data hosted within the storage account (SAS tokens can be used to limit the operations, time, and scope of access, but that won’t be relevant when we talk about the OpenAI service). Not ideal right? The second method is the recommended method and involves AAD authentication. Here the security principal authenticates to AAD, receives an access token, and is then authorized for the operation via Azure RBAC. Remember, these are operations against the data hosted within the resource.
Authentication in Management Plane vs Data Plane in Azure Storage
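To make the AAD path concrete, here’s a minimal sketch using the azure-identity and azure-storage-blob SDKs; the account and container names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# The credential obtains an access token from AAD; the upload is then
# authorized via Azure RBAC role assignments on the storage account.
credential = DefaultAzureCredential()
client = BlobServiceClient("https://mystorageaccount.blob.core.windows.net", credential=credential)
container = client.get_container_client("mycontainer")
container.upload_blob(name="hello.txt", data=b"hello world")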
Now why did I give you a 101 on Azure Storage authentication? Well, because the Azure OpenAI Service works in a very similar way.
Let’s first talk about the management plane of the Azure OpenAI Service. Like Azure Storage (and the rest of Azure’s services) it is administered through the ARM API behind the management.azure.com endpoint. Customers will use the management plane when they want to create an instance of the Azure OpenAI Service, switch it from a PMK to CMK, or setup diagnostic settings to redirect logs (I’ll cover logging in a future post). All of these operations will require authentication to AAD and authorization via Azure RBAC (I’ll cover authorization in a future post).
Simple right? Now let’s move to the complexity of the data plane.
Two API keys are created whenever a customer creates an Azure OpenAI Service instance. These API keys allow the customer full access to all data plane operations. These operations include managing a deployment of a model, managing training data that has been uploaded to the service instance and used to fine-tune a model, managing fine-tuned models, and listing available models; pretty much all the stuff you would be doing through the Azure OpenAI Studio. These operations are performed against the Azure OpenAI Service API, which lives behind a unique label with an FQDN under openai.azure.com (such as myservice.openai.azure.com). If you opt to use these keys, you’ll need to remember to control access to them by securing management plane authorization, aka Azure RBAC.
Azure OpenAI Service API Keys
In the above image I am given the option to regenerate the keys in the case of compromise or to comply with my organization’s key rotation process. Two keys are provided to allow for continued access to the service while the other key is being rotated.
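Rotation can be scripted with the Azure CLI; a quick sketch with placeholder names:
# Point applications at Key2 first, then regenerate Key1 (swap on the next rotation)
az cognitiveservices account keys regenerate \
  --name my-aoai-instance \
  --resource-group my-rg \
  --key-name Key1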
Here I have a simple bit of code using the OpenAI Python SDK. In the code I provide a prompt to the model, ask it to complete the prompt for me, and use one of the API keys to authenticate.
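A minimal sketch of that code, assuming the endpoint, deployment name, and API key are supplied via environment variables:
import os
import openai

openai.api_type = "azure"
openai.api_base = os.getenv('OPENAI_API_BASE')  # e.g. https://myservice.openai.azure.com/
openai.api_version = "2022-12-01"
openai.api_key = os.getenv('OPENAI_API_KEY')    # one of the two instance API keys

response = openai.Completion.create(
    engine=os.getenv('DEPLOYMENT_NAME'),        # the name of your model deployment
    prompt='Once upon a time'
)
print(response.choices[0].text)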
The model gets creative and provides me with the response below.
If you look closely you’ll notice a warning about the security of my session. The reason I’m getting that warning is I shut off certificate verification in the OpenAI library in order to intercept the calls with Fiddler. Now let me tell you, shutting off certificate verification was a pain in the ass because the developers of the SDK are trying to protect users from the bad guys. Long story short, the OpenAI Python SDK doesn’t provide an option to turn off certificate checking like, say, the Azure Python SDK does (where you can pass a kwarg of verify=False to turn it off in the requests library used underneath). While the developers do provide a property called verify_ssl_certs, it doesn’t actually do anything. Since most Python SDKs use the requests library under the hood, I went through the library on my machine and found the api_requestor.py file. Within this file I modified the _make_session function, which creates a requests Session object. Here I commented out the developers’ code and added the verify=False property to the Session object being created.
Turning off certificate verification in OpenAI Python SDK
Now don’t go and do this in any environment that matters. If you’re getting a certificate verification failure in your environment you should be notifying your information security team. Certificate verification is an absolute must to ensure the identity of the upstream server and to mitigate the risk of man-in-the-middle attacks.
Once I was able to place Fiddler in the middle of the HTTPS session I was able to capture the conversation. In the screenshot below, you can see the SDK passing the api-key header. Take note of that header name because it will become relevant when we talk about AAD authentication. If you’re using OpenAI’s service already, then this should look very familiar to you. Microsoft was nice enough to support the existing SDKs when using one of the API keys.
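For reference, the same request expressed with curl looks roughly like this (deployment name and key are placeholders):
curl https://myservice.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2022-12-01 \
  -H "Content-Type: application/json" \
  -H "api-key: YOUR_API_KEY" \
  -d '{ "prompt": "Once upon a time" }'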
At this point you’re probably thinking, “That’s all well and good Matt, but I want to use AAD authentication for all the security benefits AAD provides over a static key.” Yeah yeah, I’m getting there. You can’t blame me for nerding out a bit with Fiddler now can you?
Alright, so let’s now talk AAD authentication to the data plane of the Azure OpenAI Service. Possible? Yes, but with some caveats. The public documentation illustrates how to do this using curl. However, curl is great for demonstrating a concept; much more likely you’ll be using an SDK for your preferred programming language. Since Python is really the only programming language I know (PowerShell doesn’t count and I don’t want to show my age by acknowledging I know some Perl), let me demonstrate this process using our favorite AAD SDK, MSAL.
For this example I’m going to use a service principal, but if your code is running in Azure you should be using a managed identity. When creating the service principal I granted it the Cognitive Services User RBAC role on the resource group containing the Azure OpenAI Service instance as suggested in the documentation. This is required to authorize the service principal access to data plane operations. There are a few other RBAC roles for the service, but as I said earlier, I’ll cover authorization in a future post. Once the service principal was created and assigned the appropriate RBAC role, I modified my code to include a function which calls MSAL to retrieve an access token with the access scope of Cognitive Services, which the Azure OpenAI Service falls under. I then pass that token as the API key in my call to the Azure OpenAI Service API.
import logging
import sys
import os
import openai
from msal import ConfidentialClientApplication

def get_sp_access_token(client_id, client_credential, tenant_name, scopes):
    # Use MSAL's client credentials flow to obtain a token for the service principal
    logging.info('Attempting to obtain an access token...')
    app = ConfidentialClientApplication(
        client_id=client_id,
        client_credential=client_credential,
        authority=f"https://login.microsoftonline.com/{tenant_name}",
    )
    result = app.acquire_token_for_client(scopes=scopes)
    if "access_token" in result:
        logging.info('Access token successfully acquired')
        return result['access_token']
    else:
        logging.error('Unable to obtain access token')
        logging.error(f"Error was: {result['error']}")
        logging.error(f"Error description was: {result['error_description']}")
        logging.error(f"Error correlation_id was: {result['correlation_id']}")
        raise Exception('Failed to obtain access token')

def main():
    # Setup logging
    try:
        logging.basicConfig(
            level=logging.ERROR,
            format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
            handlers=[logging.StreamHandler(sys.stdout)]
        )
    except:
        logging.error('Failed to setup logging: ', exc_info=True)

    try:
        # Obtain an access token (MSAL expects scopes as a list)
        token = get_sp_access_token(
            client_id=os.getenv('CLIENT_ID'),
            client_credential=os.getenv('CLIENT_SECRET'),
            tenant_name=os.getenv('TENANT_ID'),
            scopes=["https://cognitiveservices.azure.com/.default"]
        )
    except:
        logging.error('Failed to obtain access token: ', exc_info=True)

    try:
        # Setup OpenAI variables
        openai.api_type = "azure"
        openai.api_base = os.getenv('OPENAI_API_BASE')
        openai.api_version = "2022-12-01"
        openai.api_key = token
        response = openai.Completion.create(
            engine=os.getenv('DEPLOYMENT_NAME'),
            prompt='Once upon a time'
        )
        print(response.choices[0].text)
    except:
        logging.error('Failed to summarize file: ', exc_info=True)

if __name__ == "__main__":
    main()
Let’s try executing that and see what happens.
Uh-oh! What happened? If you recall from earlier, the API key is passed in the api-key header. However, to use the access token provided by AAD we have to pass it in the Authorization header, as seen in the example in Microsoft’s public documentation.
curl ${endpoint%/}/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2022-12-01 \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $accessToken" \
-d '{ "prompt": "Once upon a time" }'
Thankfully there is a solution to this one without requiring you to modify the OpenAI SDK. If you take a look in the api_requestor.py file again in the library you will see it provides the ability to override the headers passed in the request.
With this in mind, I made a few small modifications. I removed the api_key property and added an Authorization header to the request to the Azure OpenAI Service API which includes the access token received back from AAD.
import logging
import sys
import os
import openai
from msal import ConfidentialClientApplication

def get_sp_access_token(client_id, client_credential, tenant_name, scopes):
    # Use MSAL's client credentials flow to obtain a token for the service principal
    logging.info('Attempting to obtain an access token...')
    app = ConfidentialClientApplication(
        client_id=client_id,
        client_credential=client_credential,
        authority=f"https://login.microsoftonline.com/{tenant_name}",
    )
    result = app.acquire_token_for_client(scopes=scopes)
    if "access_token" in result:
        logging.info('Access token successfully acquired')
        return result['access_token']
    else:
        logging.error('Unable to obtain access token')
        logging.error(f"Error was: {result['error']}")
        logging.error(f"Error description was: {result['error_description']}")
        logging.error(f"Error correlation_id was: {result['correlation_id']}")
        raise Exception('Failed to obtain access token')

def main():
    # Setup logging
    try:
        logging.basicConfig(
            level=logging.ERROR,
            format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
            handlers=[logging.StreamHandler(sys.stdout)]
        )
    except:
        logging.error('Failed to setup logging: ', exc_info=True)

    try:
        # Obtain an access token (MSAL expects scopes as a list)
        token = get_sp_access_token(
            client_id=os.getenv('CLIENT_ID'),
            client_credential=os.getenv('CLIENT_SECRET'),
            tenant_name=os.getenv('TENANT_ID'),
            scopes=["https://cognitiveservices.azure.com/.default"]
        )
    except:
        logging.error('Failed to obtain access token: ', exc_info=True)

    try:
        # Setup OpenAI variables
        openai.api_type = "azure"
        openai.api_base = os.getenv('OPENAI_API_BASE')
        openai.api_version = "2022-12-01"
        response = openai.Completion.create(
            engine=os.getenv('DEPLOYMENT_NAME'),
            prompt='Once upon a time',
            headers={
                # Pass the AAD access token in the Authorization header instead of api-key
                'Authorization': f'Bearer {token}'
            }
        )
        print(response.choices[0].text)
    except:
        logging.error('Failed to summarize file: ', exc_info=True)

if __name__ == "__main__":
    main()
Running the code results in success!
4/3/2023 Update – Poking around today looking at another aspect of the service, I came across this documentation on an even simpler way to authenticate with Azure AD without having to use an override. In the code below, I specify an openai.api_type of azure_ad, which allows me to pass the token directly via the openai.api_key property versus having to pass a custom header. Definitely a bit easier!
import logging
import sys
import os
import openai
from msal import ConfidentialClientApplication

def get_sp_access_token(client_id, client_credential, tenant_name, scopes):
    # Use MSAL's client credentials flow to obtain a token for the service principal
    logging.info('Attempting to obtain an access token...')
    app = ConfidentialClientApplication(
        client_id=client_id,
        client_credential=client_credential,
        authority=f"https://login.microsoftonline.com/{tenant_name}",
    )
    result = app.acquire_token_for_client(scopes=scopes)
    if "access_token" in result:
        logging.info('Access token successfully acquired')
        return result['access_token']
    else:
        logging.error('Unable to obtain access token')
        logging.error(f"Error was: {result['error']}")
        logging.error(f"Error description was: {result['error_description']}")
        logging.error(f"Error correlation_id was: {result['correlation_id']}")
        raise Exception('Failed to obtain access token')

def main():
    # Setup logging
    try:
        logging.basicConfig(
            level=logging.ERROR,
            format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
            handlers=[logging.StreamHandler(sys.stdout)]
        )
    except:
        logging.error('Failed to setup logging: ', exc_info=True)

    try:
        # Obtain an access token (MSAL expects scopes as a list)
        token = get_sp_access_token(
            client_id=os.getenv('CLIENT_ID'),
            client_credential=os.getenv('CLIENT_SECRET'),
            tenant_name=os.getenv('TENANT_ID'),
            scopes=["https://cognitiveservices.azure.com/.default"]
        )
    except:
        logging.error('Failed to obtain access token: ', exc_info=True)

    try:
        # Setup OpenAI variables
        openai.api_type = "azure_ad"  # tells the SDK to send the key as a bearer token
        openai.api_base = os.getenv('OPENAI_API_BASE')
        openai.api_key = token
        openai.api_version = "2022-12-01"
        response = openai.Completion.create(
            engine=os.getenv('DEPLOYMENT_NAME'),
            prompt='Once upon a time'
        )
        print(response.choices[0].text)
    except:
        logging.error('Failed to summarize file: ', exc_info=True)

if __name__ == "__main__":
    main()
Let me act like I’m ChatGPT and provide you a summary of what we learned today.
The Azure OpenAI Service has both a management plane and data plane.
The Azure OpenAI Service data plane supports two methods of authentication which include static API keys and Azure AD.
The static API keys provide full permissions on data plane operations. These keys should be rotated in compliance with organizational key rotation policies.
The OpenAI SDK for Python (and I’m going to assume the others) sends an api-key header by default. This behavior can be overridden to send an Authorization header which includes an access token obtained from Azure AD.
It’s recommended you use Azure AD authentication where possible to leverage all the bells and whistles of Azure AD including the usage of managed identities, improved logging, and conditional access for service principal-based access.
Well folks, that concludes this post. I’ll be uploading the code sample above to my GitHub later this week. In the next batch of posts I’ll cover the authorization and logging aspects of the service.
I hope you got some value and good luck in your AI journey!
The past few months have been crazy busy. My customer load has doubled and customers who went into hibernation for holidays have decided to wake up in full force. With that new demand comes interesting new use cases and blog topics.
Unless you’ve been living under a rock, you’re well aware of the insane amount of innovation and technical developments in the AI space. It seems every day there’s 10 articles on OpenAI’s models (hilarious South Park episode on ChatGPT recently). Microsoft decided to dive straight into the deep end and formed a partnership with OpenAI. Out of this partnership came the Azure OpenAI Service which runs OpenAI models like ChatGPT on Azure infrastructure. As you can imagine, this offering has big appeal to new and existing Azure customers.
Given the demand I was seeing within my own customers, I decided to take a look at the security controls (or infra/security stuff as one of my data counterparts calls it) available within the service. Before jumping into the service, I did some basic experimentation with the OpenAI’s own service using this wonderful tutorial by the Part Time Larry. I found his step-by-step walkthrough of some of the sample code to be absolutely stellar in understanding just how simple it is to interact with the service.
With a very basic (and I do stress basic) understanding of how to interact with OpenAI’s API, I decided upon a use case. The use case I decided upon was to use the summarization capability of the davinci GPT-3 model to summarize the NIST document on Zero Trust. I was interested in which key points it would extract from the document and whether those would align with what I drew from the document after reading through it fully (re-reading the doc is still on my todo list!).
Before I could do any of the cool stuff I had to get onboarded to the service. At this time, customers must request their subscriptions be onboarded into the service using the process described in Microsoft’s public documentation. While I waited for my subscription to be onboarded, I read through the public documentation with a focus on the “infra/security” stuff. Like most of the data services in Azure, the information on the levers customers can pull around security controls like network, encryption-at-rest, and identity was very high level and not very useful. Lots of mentions of words, but no real explanation of how those features would “look” when enabled in the service. There is also the matter of how Microsoft is handling and securing the customer’s data for the service.
Like every cloud provider, Microsoft operates within the shared responsibility model where Microsoft is responsible for the security of the cloud and you, the customer, are responsible for security within the cloud. Simply put, there are controls Microsoft manages behind the scenes and there are controls Microsoft puts in the customer’s hands and it’s on the customer to enable those controls. Microsoft describes how the data is processed and secured for the Azure OpenAI Service in the public documentation. Customers should additionally review the Microsoft Products and Services Data Protection Addendum and specific product terms. Another great resource to review is documentation within the Microsoft Services Trust Portal. In the Trust Portal you can find all the compliance-related documentation such as the SOC-2 Type II which will provide detail as to Microsoft’s processes and controls it uses to protect data. For a much deeper dive, you can review the FedRAMP SSP (System Security Plan). I typically find myself scanning through the SOC2 first and then very often diving deeper by reading through the relevant sections in the FedRAMP SSP. I’ll let you read through and consume the documentation above (and you should be doing that for every service you consume). For the purposes of this blog post, I’m going to look at the “security within the cloud”.
I’m a big fan of taking a step back and looking at things from a high-level architectural view. After reading through the documentation, I envisioned the following Azure components being the key components required in any implementation of the service within a regulated industry.
Azure OpenAI Azure Components
Let’s walk through each of these components.
The first component is the Azure OpenAI Service instance, which is a service under the Cognitive Services umbrella. Azure Cognitive Services includes existing services like speech-to-text, image analysis, and the like. This was a great idea by Microsoft because it would allow the Product Group (PG) managing the Azure OpenAI Service to leverage existing architectural standards already adopted for other services under the Cognitive Services umbrella.
The next component is the Azure Key Vault instance. Within an instance of Azure OpenAI Service there are three types of data that could be stored within a customer’s instance of the service. I say could because this data is only stored if you choose to use specific features and capabilities of the service. This data includes training data you may provide to fine-tune models, the fine-tuned models themselves, and prompts and completions. Training data is only stored if you opt to train your own fine-tuned models, and the training data can be removed as soon as you finish training your fine-tuned model. From talking to my much smarter peers, there is a very low percentage of customers that will need to create fine-tuned models. I’ve heard as low as 1% of customers will need to do this since the included models are already trained very effectively. Prompts and completions are by default stored for 30 days for human evaluation to ensure the models are not being used in an inappropriate way. Customers have the option to opt out of the content filtering using the process outlined in this piece of public documentation. If they opt out, this data is never stored.
If the customer opts to use a feature that creates this data, then the data is encrypted-at-rest by default with Microsoft-managed keys when stored within the Microsoft-managed boundary. This means that Microsoft manages the authorization and rotation of the keys. Many regulated customers have regulatory requirements or internal policies that require the customer to manage authorization and rotation of any keys used to encrypt data in their environment. For that reason, cloud providers such as Microsoft provide the option to use CMKs (Customer Managed Keys). In Azure, these CMKs are stored within an Azure Key Vault instance within a customer’s subscription and the customer controls authorization and access to the keys.
The Azure OpenAI Service supports the use of CMKs to protect at least two out of three of these sets of data. The documentation is unclear as to whether the prompts and completions can be encrypted with CMKs. If you happen to know, let me know in the comments. Take note that for now you need to request access to get your subscription approved for CMKs with the Azure OpenAI Service.
Next up we have virtual networks, private endpoints, and Azure Private DNS. Like the rest of the services under the Cognitive Services umbrella, the OpenAI service supports private endpoints as a means to lock down network access to your private IP space. The DNS namespace for the service is privatelink.openai.azure.com. Best practice would have you hosting this zone in Azure Private DNS, which we’ll see later on when I share a sample architecture. It is worth noting that the Azure OpenAI Service also supports what I refer to as the service firewall. This allows you to limit access to the service to a specific set of public IPs (such as your enterprise’s forward web proxy) or to a specific virtual network via a Service Endpoint.
Next, we have Azure Storage. If you choose to build a fine-tuned model, training data can be uploaded to an Azure Storage Account within the customer’s subscription. The customer’s instance of the Azure OpenAI Service can then retrieve the data using a method I will explain later in this post.
We then have managed identities and Azure RBAC. For the service, managed identities are used to access the CMKs stored in the customer Key Vault instance. Azure RBAC is used to control access to the Azure OpenAI Service instance and the keys used to call the service APIs. Stepping back and looking at the components above and how they fit together to provide security controls across identity, network, and encryption, I see it like the below.
For the Azure OpenAI Service instance running the models, you lock down the service using Azure RBAC. Authentication to the service is supported through a set of API keys, which you will need to manage rotation of. Optionally (I haven’t tested this myself), you can use Azure AD authentication to obtain a bearer token to authenticate to the service. You secure network access by restricting access to the service using private endpoints. Data is optionally encrypted with CMKs stored in a customer-managed Key Vault instance to enable the customer to control access to the keys, rotate keys, and audit usage of those keys. The Azure OpenAI Service also offers logs and metrics which can be delivered to Azure Storage, a Log Analytics Workspace, or an Event Hub via the diagnostic settings configured on the instance. The security-specific logs you’ll be interested in are the audit logs and potentially the prompt and completion logs.
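As a hedged sketch of wiring those diagnostics to a Log Analytics Workspace with the Azure CLI (the resource IDs are placeholders, and the log category names are worth confirming against your instance):
az monitor diagnostic-settings create \
  --name aoai-diagnostics \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.CognitiveServices/accounts/<aoai-instance>" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>" \
  --logs '[{"category": "Audit", "enabled": true}, {"category": "RequestResponse", "enabled": true}]' \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'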
The Azure Key Vault instance used when customers opt to use CMKs can have access to the keys controlled using Azure RBAC (when using a Key Vault instance configured for the Azure RBAC permission model rather than vault access policies) and managed identities. The Azure OpenAI Service instance will access the CMK using the managed identity assigned to the service. Take note that as of today, you cannot use the Key Vault service firewall to restrict network access. Azure Cognitive Services is not considered a Trusted Azure Service for Key Vault and thus can’t be allowed network access when the service firewall is enabled.
If the customer chooses to store training data in an Azure Storage Account before uploading it to the service, the account can be secured for user access with Azure RBAC or SAS tokens. Since SAS tokens are a nightmare to manage for humans, you’ll want to control access to the data for humans using Azure RBAC. The Azure OpenAI Service itself does not support the use of a managed identity for access to Azure Storage today, which means it cannot take advantage of the storage firewall’s resource instance rules, and in my testing allowing just the trusted services for Azure Storage doesn’t seem to work either. This means you’ll need to allow all public network access to the storage account, and your means of securing the data for the access coming from the Azure OpenAI Service will largely be SAS tokens. Not ideal, but hey, the service is very new.
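A sketch of minting a short-lived, read-only SAS for a training data blob with the azure-storage-blob SDK; every name here is a placeholder.
from datetime import datetime, timedelta
from azure.storage.blob import generate_blob_sas, BlobSasPermissions

# Scope the token to a single blob, read-only, expiring in one hour
sas_token = generate_blob_sas(
    account_name="mytrainingdata",
    container_name="training",
    blob_name="fine-tune.jsonl",
    account_key="<storage-account-key>",
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=1)
)
blob_url = f"https://mytrainingdata.blob.core.windows.net/training/fine-tune.jsonl?{sas_token}"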
So putting together everything that we’ve learned, what could this look like architecturally?
Azure OpenAI Service Sample Architecture
Above is an example architecture that is common in regulated organizations that have adopted Azure VWAN. In this pattern, all service instances related to the deployment would be placed in a dedicated workload subscription as indicated by the orange outline. This includes the virtual network containing the Azure OpenAI Service private endpoint, the Azure OpenAI Service instance, user-assigned managed identity used by the Azure OpenAI Service instance, the workload key vault containing the CMK used to encrypt the data held by the Azure OpenAI Service, and the Azure Storage Account used to stage training data to be uploaded to the service.
The Azure OpenAI Service would have its network access secured to the private endpoint. Both the Azure Key Vault instance and Storage Account would have their network access open to public networks. Access to the data in Azure Key Vault would be secured with Azure AD authentication and Azure RBAC for authorization. The Azure Storage Account would use Azure AD authentication and Azure RBAC to control access for human users and SAS tokens to control access from the Azure OpenAI Service instance.
Lastly, although not shown in the images, it should go without saying that Azure Policy should be put in place to ensure all of the resources look the way you and your security team have decided they need to look.
As the service grows and matures, I expect some of these gaps in network controls to be addressed through support for managed identities to access storage accounts and the addition of the service to Azure Key Vault’s trusted services. I also wouldn’t be surprised to see some type of VNet-injection or VNet-integration to be introduced similar to what is available in Azure Machine Learning.
Well folks, I hope this helped you infra and security folks do your “infra/security stuff” for the day and you now better understand some of the levers and switches you have available to secure the service. As I progress in my learning of the service and AI in general, I plan on adding some posts which will walk through the implementation in action, doing a deeper dive into how this architecture looks when implemented. I have it running in my demo environment, but time is a very limited thing these days.
Thanks folks and I hope your journey into AI has been as fun as mine has been so far!