The Challenge of Logging Azure OpenAI Stream Completions

This is part of my series on GenAI Services in Azure:

  1. Azure OpenAI Service – Infra and Security Stuff
  2. Azure OpenAI Service – Authentication
  3. Azure OpenAI Service – Authorization
  4. Azure OpenAI Service – Logging
  5. Azure OpenAI Service – Azure API Management and Entra ID
  6. Azure OpenAI Service – Granular Chargebacks
  7. Azure OpenAI Service – Load Balancing
  8. Azure OpenAI Service – Blocking API Key Access
  9. Azure OpenAI Service – Securing Azure OpenAI Studio
  10. Azure OpenAI Service – Challenge of Logging Streaming ChatCompletions
  11. Azure OpenAI Service – How To Get Insights By Collecting Logging Data
  12. Azure OpenAI Service – How To Handle Rate Limiting
  13. Azure OpenAI Service – Tracking Token Usage with APIM
  14. Azure AI Studio – Chat Playground and APIM
  15. Azure OpenAI Service – Streaming ChatCompletions and Token Consumption Tracking
  16. Azure OpenAI Service – Load Testing

Hello again fellow geeks. Today I’m back with another Azure OpenAI Service (AOAI) post. I’ve talked in the past about the gaps in the service’s native logging and how the logs lack the traceability and token usage details needed for chargebacks. I was lucky enough to work with Jake Wang and others on a reference architecture that could address these gaps using Azure API Management (APIM). I also wrote some custom APIM policies to provide examples of how this information could be captured within APIM. I’ve observed customers coming up with creative solutions, such as capturing the data within the application sitting in front of AOAI as a tactical measure while more strategically adopting third-party API gateway products such as Apigee, or even building custom, highly functional (and complex) gateways. However, there was a use case that some of these solutions (such as the custom policies I wrote) didn’t account for: streaming completions.

Like OpenAI’s API, the AOAI service API supports streaming chat completions. A streaming completion returns the model’s output as a series of events as the tokens are generated, whereas a non-streaming completion returns the entire output only after the model has finished processing. The benefit of a streaming completion is a better user experience: studies suggest that delays longer than about 10 seconds won’t hold a user’s attention, and streaming the completion as it’s generated gives the user continuous feedback that the application is responding.

Streaming Chat Completion

The OpenAI documentation points out a few challenges when using streaming completions. One of those challenges is that the response from the API no longer includes token usage, which means you need to calculate token usage by some other means, such as OpenAI’s open-source tokenizer tiktoken. Streaming also makes it difficult to moderate content, because only partial completions are received in each event. Beyond those challenges, there is a challenge specific to APIM. As my peer Shaun Callighan points out, Microsoft does not recommend logging the request/response body when dealing with a stream of server-sent events, such as the API returns with streaming chat completions, because it can cause unexpected buffering (and with streaming chat completions, it does). This means the application user will not get the behavior the application owner intended. In my testing, nothing was returned until the model finished the completion.
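
Since token counts are absent from the streamed response, a common approach is to reassemble the completion text and count tokens with tiktoken after the fact. Below is a minimal sketch of that calculation; the completion_text value is a hypothetical stand-in for a completion you’ve reassembled from the streamed deltas (an example of that reassembly appears later in this post).

        import tiktoken

        # Hypothetical completion text reassembled from the streamed deltas.
        completion_text = "Once upon a time, in a quiet village..."

        # encoding_for_model maps a model name to its tokenizer (cl100k_base for gpt-3.5-turbo).
        encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
        print(f"Estimated completion tokens: {len(encoding.encode(completion_text))}")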

If you’re using the (pre-1.0) Python SDK, you can make a chat completion stream by setting the stream=True parameter on the ChatCompletion.create call, as seen below. The endpoint, API version, key, and deployment values are placeholders.

        import openai

        # Pre-1.0 openai SDK configuration for Azure OpenAI (values are placeholders).
        openai.api_type = "azure"
        openai.api_base = "https://<your-resource-name>.openai.azure.com/"
        openai.api_version = "2023-05-15"  # adjust to the API version you're targeting
        openai.api_key = "<your-api-key>"

        DEPLOYMENT_NAME = "<your-deployment-name>"

        response = openai.ChatCompletion.create(
            engine=DEPLOYMENT_NAME,
            messages=[
                {
                    "role": "user",
                    "content": "Write me a bedtime story"
                }
            ],
            max_tokens=300,
            # stream=True returns a generator of chat.completion.chunk events
            stream=True
        )

The body of the response contains a series of server-sent events such as those below.

...
data: {"id":"chatcmpl-8JNDagQPDWjNWOgbUm9u5lRxcmzIw","object":"chat.completion.chunk","created":1699628174,"model":"gpt-35-turbo","choices":[{"index":0,"finish_reason":null,"delta":{"content":"Once"}}],"usage":null}
data: {"id":"chatcmpl-8JNDagQPDWjNWOgbUm9u5lRxcmzIw","object":"chat.completion.chunk","created":1699628174,"model":"gpt-35-turbo","choices":[{"index":0,"finish_reason":null,"delta":{"content":" upon"}}],"usage":null}
data: {"id":"chatcmpl-8JNDagQPDWjNWOgbUm9u5lRxcmzIw","object":"chat.completion.chunk","created":1699628174,"model":"gpt-35-turbo","choices":[{"index":0,"finish_reason":null,"delta":{"content":" a"}}],"usage":null}
...
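
To show how these events surface in the SDK, here is a minimal sketch that consumes the response generator from the earlier example and reassembles the completion text; the guard against empty choices lists is an assumption to account for chunks (such as content filter results) that carry no delta.

        # Iterate the generator returned when stream=True and stitch the deltas together.
        completion_text = ""
        for chunk in response:
            # Skip chunks without choices (e.g., content filter results).
            if not chunk["choices"]:
                continue
            delta = chunk["choices"][0]["delta"]
            # Each delta carries the next piece of the completion text, if any.
            if "content" in delta:
                completion_text += delta["content"]
                print(delta["content"], end="", flush=True)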

So how do you deal with this if you are using, or were planning to use, APIM for logging, load balancing, authorization, and throttling? You have a few options.

  1. You can move logging into the application and use APIM only for load balancing, authorization, and throttling.
  2. You can insert a proxy logging solution behind APIM to handle logging of both streaming and non-streaming completions and use APIM only for load balancing, authorization, and throttling.
  3. You can block streaming completions at APIM.

Option 1

Option 1 is workable at a small scale and is a good tactical solution if you need to get something into production quickly. The challenge with this option is enforcing it at scale. If you have amazing governance within your organization and an excellent SDLC, maybe you can enforce this; in my experience, few organizations have that level of maturity. The other problem is that, ideally, logging for compliance purposes should be implemented and enforced by a separate entity to ensure separation of duties.

Benefits

  1. Quick and easy to put in place.

Considerations

  1. Difficult to enforce at scale.
  2. Puts the developers in charge of enforcing logging on themselves, which could be an issue with separation of duties.

Option 2

Option 2 is an interesting solution that my peer Shaun Callighan came up with. In Shaun’s architecture, a proxy-type solution is placed between APIM and AOAI, and that solution handles parsing the requests and responses, calculating token usage, and logging the information to an Event Hub. They have even been kind enough to provide a sample solution demonstrating how this could be done with an Azure Function.

Benefits

  1. Allows you to continue using APIM for the benefits around load balancing, authorization, and throttling.
  2. Supports streaming chat completions.
  3. Provides the logging necessary for compliance and chargebacks for both streaming and non-streaming chat completions.
  4. Centralized enforcement of logging.

Considerations

  1. You will need to develop your own code to parse the requests/responses, calculate chargebacks, and deliver the logs to Event Hub. (You could use Shaun’s code as a starting point.)
  2. You’ll need to ensure this proxy does not become a bottleneck. It will need to scale as requests to the AOAI instance scale, along with APIM and whatever else sits in the path of the user’s request.

Option 3

Option 3 is another valid option (and honestly a simple fix, in my opinion) and may be where some customers land in the near term. With this option you block the use of streaming completions at APIM with a custom policy snippet like the one shown later in this post. If developers are worried about the user experience, there is always the option of flashing a “processing”-style message in the text window while the model generates the completion.

Benefits

  1. Allows you to continue using APIM for logging, load balancing, throttling, and authorization.
  2. No new code introduced.
  3. Centralized enforcement of logging.
  4. No additional bottlenecks.

Considerations

  1. Your developers may hate you for this.
  2. There may be legitimate use cases where streaming chat completions are required.

Since Shaun has a proof-of-concept example for option 2, I figured I’d showcase a sample APIM policy snippet for option 3. In the snippet below, I determine whether the stream property is included in the request body and store its value in a variable (it will be true or false). I then check the variable, and if the value is true, I return a 404 status code with a message that streaming chat completions are not allowed.

        <!-- Capture the value of the streaming property if it is included -->
        <choose>
            <when condition="@(context.Request.Body.As<JObject>(true)["stream"] != null && context.Request.Body.As<JObject>(true)["stream"].Type != JTokenType.Null)">
                <set-variable name="isStream" value="@{
                    var content = (context.Request.Body?.As<JObject>(true));
                    string streamValue = content["stream"].ToString();
                    return streamValue;
                }" />
            </when>
        </choose>
        <!-- Blocks streaming completions and returns 404 -->
        <choose>
            <when condition="@(context.Variables.GetValueOrDefault<string>("isStream","false").Equals("true", StringComparison.OrdinalIgnoreCase))">
                <return-response>
                    <set-status code="404" reason="BlockStreaming" />
                    <set-header name="Microsoft-Azure-Api-Management-Correlation-Id" exists-action="override">
                        <value>@{return Guid.NewGuid().ToString();}</value>
                    </set-header>
                    <set-body>Streaming chat completions are not allowed by this organization.</set-body>
                </return-response>
            </when>
        </choose>
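
If you want to sanity check the policy, a quick test is to send a streaming request through APIM and confirm the 404 comes back. Here’s a rough sketch using the requests library; the hostname, deployment name, API version, and api-key header are all placeholder assumptions you’d adjust to match how your APIM API is configured.

        import requests

        # Placeholder values - substitute your APIM hostname, deployment name, and API version.
        url = (
            "https://<your-apim-instance>.azure-api.net/openai/deployments/"
            "<your-deployment>/chat/completions?api-version=2023-05-15"
        )
        payload = {
            "messages": [{"role": "user", "content": "Write me a bedtime story"}],
            "stream": True,
        }
        # The api-key header is an assumption; use whatever credential your APIM API expects.
        resp = requests.post(url, json=payload, headers={"api-key": "<your-key>"})

        print(resp.status_code)  # expect 404 with the policy above in place
        print(resp.text)         # Streaming chat completions are not allowed by this organization.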

If you ignore streaming chat completions and try to use a policy such as this one, the model will still generate the completion, but APIM will throw a 500 status code back at the developer because the structure of a streaming response doesn’t match the structure of a non-streaming response and can’t be parsed by that policy’s logic. This means you’ll be throwing money out the window and potentially struggling to troubleshoot the root cause. TL;DR: if you’re using APIM for logging today or plan to, pick one of the options above to deal with streaming and get it in place.

Last but not least, I want to link to a wonderful policy snippet by Shaun Callighan. This policy snippet dumps the trace logs from APIM into the headers returned in the response from APIM. This is incredibly helpful when troubleshooting a 500 status code returned by APIM.

Well folks, that wraps up this short blog post on this Friday afternoon. Have a great weekend and happy holidays!

APIM and Azure OpenAI Service – Azure AD

This is part of my series on GenAI Services in Azure:

  1. Azure OpenAI Service – Infra and Security Stuff
  2. Azure OpenAI Service – Authentication
  3. Azure OpenAI Service – Authorization
  4. Azure OpenAI Service – Logging
  5. Azure OpenAI Service – Azure API Management and Entra ID
  6. Azure OpenAI Service – Granular Chargebacks
  7. Azure OpenAI Service – Load Balancing
  8. Azure OpenAI Service – Blocking API Key Access
  9. Azure OpenAI Service – Securing Azure OpenAI Studio
  10. Azure OpenAI Service – Challenge of Logging Streaming ChatCompletions
  11. Azure OpenAI Service – How To Get Insights By Collecting Logging Data
  12. Azure OpenAI Service – How To Handle Rate Limiting
  13. Azure OpenAI Service – Tracking Token Usage with APIM
  14. Azure AI Studio – Chat Playground and APIM
  15. Azure OpenAI Service – Streaming ChatCompletions and Token Consumption Tracking
  16. Azure OpenAI Service – Load Testing

Hello folks!

I’m back with another entry on the Azure OpenAI Service (AOAI). In my previous posts, I’ve focused on the native security features that Microsoft provides to its customers to secure their instance of the service. In this post, I’ll take a slightly different approach and walk you through a pattern that supplements those native features using Azure API Management (APIM).

For those who are unfamiliar with APIM, it is Azure’s API gateway PaaS (platform-as-a-service) offering. Like any good API gateway, it provides an abstraction layer in front of backend APIs, which allows you to add authentication/authorization controls, throttle and transform requests, and log information from the requests and responses. In this post, I’ll be covering how the authentication/authorization controls can be used to supplement what is provided natively in AOAI.

I’ve covered authentication in AOAI in a previous post; refer to that post for the gory details. For the purposes of this post, you need to understand that at the data plane the service supports both Azure AD authentication with Azure RBAC authorization and authentication with the two API keys created when the service is instantiated.
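
As a quick illustration of the Azure AD path, here’s a minimal sketch using the azure-identity Python library to acquire a data plane token scoped to Cognitive Services; DefaultAzureCredential is just one of several credential types you could use.

        from azure.identity import DefaultAzureCredential

        # Acquire an Azure AD token scoped to Cognitive Services, which covers the AOAI data plane.
        credential = DefaultAzureCredential()
        token = credential.get_token("https://cognitiveservices.azure.com/.default")

        # The token is passed as a bearer token in the Authorization header of data plane calls.
        headers = {"Authorization": f"Bearer {token.token}"}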

Azure OpenAI Service Authentication and Authorization

To my knowledge, there is no way to disable the use of API keys. Moreover, as I discussed in my logging post, it is extremely difficult to trace back to what is using the API keys, because the source IP address is masked and the calls aren’t associated with specific API keys or Azure AD identities. This makes it critically important to control who has access to the API keys. In my post on authorization within the service, I cover this in more detail, and yes, it can be done with Azure RBAC.

Sample log entry from the Azure OpenAI Service


Controlling access should be your first priority. However, wouldn’t it be great to restrict access to the service to Azure AD authentication only? This is where APIM comes in. APIM is placed between the calling application and the AOAI service, establishing a man-in-the-middle scenario where APIM can analyze and modify the requests and responses flowing between the two.

APIM and AOAI Data Flow

The image above is an example of this pattern. Here, the calling application is provisioned with either a service principal (running outside of Azure) or a managed identity (running within Azure or integrated with Azure Arc). Instead of pointing the application directly to the Azure OpenAI Service, it is pointed to a custom domain configured on the APIM instance, and the APIM instance is configured to front the Azure OpenAI Service API. My peer Jake Wang put together some wonderful instructions on how to set this piece up in this repository.

Once APIM is set up to pass traffic along to the AOAI service, a custom APIM policy can be introduced to start controlling access. Since the goal is to limit access to the AOAI service to applications using an Azure AD identity, the validate-jwt policy can be used. This policy extracts the JSON Web Token (bearer token) from the request and parses its contents to verify that the token was issued by the issuer specified in the policy.

The policy would be structured as shown below. In this policy, any request made to the API must include a JWT issued by your Azure AD tenant (you can find your tenant ID here). Additionally, the policy checks the aud claim to ensure the token is intended for the Cognitive Services OAuth scope, which AOAI falls under. If the request doesn’t include a JWT issued by the tenant, the caller receives a 403.

<!--
    This sample policy enforces Azure AD authentication and authorization to the Azure OpenAI Service. 
    It limits the authorization tokens issued by the organization's tenant for Cognitive Services.
    The authorization token is passed on to the Azure OpenAI Service ensuring authorization to the actions within
    the service are limited to the permissions defined in Azure RBAC.

    You must provide values for the AZURE_OAI_SERVICE_NAME and TENANT_ID parameters.
-->
<policies>
    <inbound>
        <base />
        <set-backend-service base-url="https://{{AZURE_OAI_SERVICE_NAME}}.openai.azure.com/openai" />
        <validate-jwt header-name="Authorization" failed-validation-httpcode="403" failed-validation-error-message="Forbidden">
            <openid-config url="https://login.microsoftonline.com/{{TENANT_ID}}/v2.0/.well-known/openid-configuration" />
            <issuers>
                <issuer>https://sts.windows.net/{{TENANT_ID}}/</issuer>
            </issuers>
            <required-claims>
                <claim name="aud">
                    <value>https://cognitiveservices.azure.com</value>
                </claim>
            </required-claims>
        </validate-jwt>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>

If you followed the instructions in the repository I linked above, you can enforce this policy for the API you created as seen below.

APIM Policy In Place

Once the policy is in place, you can test it by attempting to authenticate to the APIM API endpoint and specifying an AOAI API key. In the image below, an attempt is made to call the endpoint with an API key.

APIM Denying Request with API Keys

Success! Even though the API key is valid, APIM is rejecting the request before it ever reaches the AOAI instance, preventing the API keys from being used. 
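
For comparison, the allowed path looks something like the sketch below: the same APIM endpoint, but called with a bearer token acquired as shown earlier. The hostname, deployment name, and API version are placeholder assumptions.

        import requests
        from azure.identity import DefaultAzureCredential

        token = DefaultAzureCredential().get_token("https://cognitiveservices.azure.com/.default")

        # Placeholder values - substitute your APIM hostname, deployment name, and API version.
        url = (
            "https://<your-apim-instance>.azure-api.net/openai/deployments/"
            "<your-deployment>/chat/completions?api-version=2023-05-15"
        )
        resp = requests.post(
            url,
            headers={"Authorization": f"Bearer {token.token}"},
            json={"messages": [{"role": "user", "content": "Hello"}]},
        )
        print(resp.status_code)  # 200 if the validate-jwt checks pass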

This pattern also passes the bearer token on to the AOAI service, so the RBAC you configure on your AOAI instance will be enforced. In my post on authorization, I provide some guidance on which built-in RBAC roles make sense and which permissions you’ll want to carefully distribute.

What’s even cooler is that, now that the application is forced to authenticate using Azure AD, the application ID can be extracted. If multiple applications hit the same AOAI instance, different throttling can be applied on a per-application basis instead of having them all share one big pool of request/token allowance at the AOAI service level.

This can be achieved with a policy similar to the one shown below. This policy looks for specific app IDs in the bearer token and applies different throttling based on the application.

<!--
    This sample policy enforces Azure AD authentication and authorization to the Azure OpenAI Service. 
    It limits the authorization tokens issued by the organization's tenant for Cognitive Services.
    The authorization token is passed on to the Azure OpenAI Service ensuring authorization to the actions within
    the service are limited to the permissions defined in Azure RBAC.

    The sample policy also sets different throttling limits per application id. This is useful when an organization
    has multiple applications consuming the same instance of the Azure OpenAI Service. This sample shows throttling
    rules for two separate applications.

    You must provide values for the AZURE_OAI_SERVICE_NAME, TENANT_ID, CLIENT_ID_APP1, and CLIENT_ID_APP2 parameters. You can
    add additional choose blocks for as many applications as you need to throttle.
-->
<policies>
    <inbound>
        <base />
        <set-backend-service base-url="https://{{AZURE_OAI_SERVICE_NAME}}.openai.azure.com/openai" />
        <validate-jwt header-name="Authorization" failed-validation-httpcode="403" failed-validation-error-message="Forbidden">
            <openid-config url="https://login.microsoftonline.com/{{TENANT_ID}}/v2.0/.well-known/openid-configuration" />
            <issuers>
                <issuer>https://sts.windows.net/{{TENANT_ID}}/</issuer>
            </issuers>
            <required-claims>
                <claim name="aud">
                    <value>https://cognitiveservices.azure.com</value>
                </claim>
            </required-claims>
        </validate-jwt>
        <choose>
            <when condition="@(context.Request.Headers.GetValueOrDefault("Authorization","").Split(' ').Last().AsJwt().Claims.GetValueOrDefault("appid", string.Empty).Equals("{{CLIENT_ID_APP1}}"))">
                <rate-limit-by-key calls="1" renewal-period="60" counter-key="@(context.Request.Headers.GetValueOrDefault("Authorization","").Split(' ').Last().AsJwt().Claims.GetValueOrDefault("appid", string.Empty))" increment-condition="@(context.Response.StatusCode == 200)" />
            </when>
        </choose>
        <choose>
            <when condition="@(context.Request.Headers.GetValueOrDefault("Authorization","").Split(' ').Last().AsJwt().Claims.GetValueOrDefault("appid", string.Empty).Equals("{{CLIENT_ID_APP2}}"))">
                <rate-limit-by-key calls="10" renewal-period="60" counter-key="@(context.Request.Headers.GetValueOrDefault("Authorization","").Split(' ').Last().AsJwt().Claims.GetValueOrDefault("appid", string.Empty))" increment-condition="@(context.Response.StatusCode == 200)" />
            </when>
        </choose>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>
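
A quick way to see the throttling in action is to call the endpoint twice in quick succession under the first application’s identity; with the sample limit of one call per 60 seconds, the second request should be rejected with a 429. A rough sketch, reusing the same placeholder endpoint shape as the earlier examples:

        import requests
        from azure.identity import DefaultAzureCredential

        # Placeholder endpoint - same shape as the earlier examples.
        url = (
            "https://<your-apim-instance>.azure-api.net/openai/deployments/"
            "<your-deployment>/chat/completions?api-version=2023-05-15"
        )
        token = DefaultAzureCredential().get_token("https://cognitiveservices.azure.com/.default")
        headers = {"Authorization": f"Bearer {token.token}"}
        body = {"messages": [{"role": "user", "content": "Hello"}]}

        # With the sample policy's limit of 1 call per 60 seconds for APP1,
        # the second request should come back throttled with a 429.
        for attempt in range(2):
            resp = requests.post(url, headers=headers, json=body)
            print(attempt, resp.status_code)  # expect 200 then 429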

While the above is impressive, it only works if the application is restricted from direct access to the Azure OpenAI Service. To achieve this, I recommend creating a Private Endpoint for the AOAI service and wrapping a Network Security Group around the subnet (NSGs are now supported for private endpoints) to restrict access to the resources within the subnet to the APIM instance. Keep in mind that the APIM instance needs to be able to reach resources within the virtual network, which means it must be deployed in internal mode. The architecture could look similar to the image below.

APIM and Azure OpenAI Service with Private Networking

One thing to note is that if access is blocked as described above, it will break the Azure OpenAI Studio feature within the Azure Portal, because calls to the data plane of the AOAI service are now blocked. A workaround could be to use a jump host or shared server if you need to continue supporting that feature. However, that opens up the risk that someone could write code on that machine and use the API keys.

Let me sum up what we learned today:

  • APIM policies can be used to enforce Azure AD authentication and can block the use of API keys.
  • You must lock down the Azure OpenAI Service to just APIM to make this effective. Remember this will break access to the Studio within the Azure Portal.
  • Since you’re forcing Azure AD authentication, you can use the application ID to add custom throttling.

That’s all for this post. The policy samples used in this blog have been uploaded to this repository on GitHub. Feel free to experiment with them and build upon them. If you end up building upon them and doing anything interesting, do reach out and let me know. I’m always interested in geeking out! In my next post, I’ll cover how to use an APIM policy to create custom logging that can be delivered to an Event Hub and consumed by the upstream service of your choice. Have a great week!