Microsoft Foundry – BYO AI Gateway – Part 3

Hello once again folks! Today I'm going to add yet another post to my series on the BYO AI Gateway feature of Microsoft Foundry. In my first post I gave some background on the use case for this feature, and in the second post I walked through the concepts required to understand the feature, the resources involved in the setup, and the schema of those resource objects. In this post I'm going to walk through the architecture I set up to play with this feature, why I made the choices I did, and dig into some of the actual Terraform code I put together to stand this whole thing up. Let's dive in!

The foundational architecture

When I wanted to experiment with this feature, I wanted to test it in an architecture that is typical of my customer base. For this I chose the classic tried and true hub and spoke architecture. I opted out of VWAN and went with a traditional virtual network model because I prefer the visibility and control of that model during experimentation. When the hub becomes a managed VWAN hub, that fancy overlay hides some of the magic of what is happening underneath. The traditional model lets me do packet captures at every step and manage routing at a very granular level, which is a must when playing with cutting edge features.

For this setup I have a lab I built out in Terraform which gives me that hub and spoke architecture, centralized DNS resolution, logging, and access to multiple regions. The multiple regions piece of the puzzle is key because feature availability across Foundry features and APIM v2 SKUs is still in flux. The lab also uses three spoke virtual networks. This allows me to plop pieces in different spokes to see how things behave and track traffic patterns. It also gives me flexibility when I need to wait for purge operations, such as purging a Microsoft Foundry resource configured with a standard agent setup and waiting for the lock on the delegated subnet used by the VNet injection model to clear. If you've mucked around with this you know sometimes it can be 15 minutes and sometimes it can be 2 days.

I drop one of the three spokes into one of the "hero" regions. These are regions that get new features sooner than others. For example, in this lab I drop it into East US 2 while the hub and other two spokes go in West US 3 (where I'm less likely to run into quota or capacity issues). East US 2 gives me the option to deploy the APIM v2 Standard SKU. In the next section I'll explain why I'm going with v2 for this experimentation.

Foundational architecture

AI Gateway Architecture

For an AI Gateway I decided to use APIM. My buddy Piotr Karpala has a great repository of 3rd-party AI Gateway solutions if you want to test this with something outside of APIM. I'm going to plop this into the "hero" region spoke in East US 2 so I can deploy a v2 Standard SKU. The reason I'm using a v2 SKU is that it provides a networking model the classic SKUs do not: Private Endpoint and VNet integration. In this model I block public traffic to the APIM service, create a Private Endpoint to enable private inbound access, and set up VNet integration to a delegated subnet so outbound traffic from the APIM instances flows through my virtual network where I can mediate it and optionally inspect it. While the Private Endpoint is only supported for the Gateway and not the Developer Portal, I don't care in this instance because I don't plan on using the Developer Portal on an APIM acting as an AI Gateway.

APIM v2 with Private Endpoint and VNet Integration

The reason I picked this networking model for APIM is that it makes it easy for me to inject the service into a Microsoft Foundry account configured with a standard agent and the managed virtual network model. In a future post I'll dive more into the managed virtual network model. For now, just be aware that it exists, it's in preview, and it doesn't have many of the limitations the Foundry Agent Service VNet injection model has. There are considerations no doubt, but my personal take is it's the better of the two strategically.

On the APIM instance I configured two backend objects, one for each Foundry instance. The backends are organized into a pooled backend so I can load balance across the two Foundry instances to maximize my TPM (tokens per minute). I defined four APIs. Two APIs support the Azure OpenAI inferencing and authoring API, one supports the Azure OpenAI v1 API, and the last is a simple custom Hello World API I use to test connectivity. I use two APIs for the Azure OpenAI inferencing and authoring API because one is designed to support APIM as an AI Gateway and uses some custom policy snippets, while the other is very generic and is used to test model gateway connections from Foundry so I can stay familiar with the basics of them.
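To give you a flavor, here's a stripped down sketch of those backend objects in Terraform (the hostnames are placeholders; the pooled backend that groups them is created with the AzApi provider in my lab repo since the AzureRM backend resource doesn't model pools):

## Create a backend for each Foundry instance hosting models (placeholder hostnames).
## These individual backends are then referenced by a pooled backend created via AzApi.
##
resource "azurerm_api_management_backend" "foundry_backend_eus2" {
  name                = "foundry-backend-eus2"
  resource_group_name = azurerm_resource_group.rg_ai_gateway.name
  api_management_name = azurerm_api_management.apim.name
  protocol            = "http"
  url                 = "https://my-foundry-eus2.openai.azure.com/openai"
}

resource "azurerm_api_management_backend" "foundry_backend_wus3" {
  name                = "foundry-backend-wus3"
  resource_group_name = azurerm_resource_group.rg_ai_gateway.name
  api_management_name = azurerm_api_management.apim.name
  protocol            = "http"
  url                 = "https://my-foundry-wus3.openai.azure.com/openai"
}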

APIM APIs

Foundry Architecture

The Foundry architecture is quite simple. I deployed a single instance of Foundry configured to support standard agents and using the VNet injection model. A subnet is delegated in a different spoke to support the agent VNet injection, and the supporting Private Endpoints are deployed to a separate subnet in that same virtual network.
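For reference, that delegated subnet looks roughly like the below in Terraform (names and address space are placeholders, and I'm assuming the Microsoft.App/environments delegation the agent VNet injection model currently expects, so verify against the current docs):

## Delegate a subnet in the spoke for agent VNet injection (placeholder names and CIDR)
##
resource "azurerm_subnet" "snet_agent_injection" {
  name                 = "snet-agent-injection"
  resource_group_name  = azurerm_resource_group.rg_spoke.name
  virtual_network_name = azurerm_virtual_network.vnet_spoke.name
  address_prefixes     = ["10.2.1.0/24"]

  delegation {
    name = "foundry-agent-delegation"
    service_delegation {
      name    = "Microsoft.App/environments"
      actions = ["Microsoft.Network/virtualNetworks/subnets/join/action"]
    }
  }
}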

The whole setup looks something like the below:

Lab setup

Setting up the AI Gateway

At this point you should have a good understanding of what I'm working with. Let's talk button pushing. The first thing you'll need to do is get your AI Gateway set up. To set up the APIM instance I used the Terraform AzureRM and AzApi providers. Like I mentioned above, it was set up as a v2 Standard SKU with public network access disabled, inbound access restricted to Private Endpoints, and outbound access configured for VNet integration. You can find the whole of the code in my lab repository if you're curious. For the purposes of this post, I'll only be including the relevant snippets.
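As a rough sketch of the relevant pieces (trimmed down from the repo with placeholder names and subnets), the APIM instance, its outbound VNet integration, and the inbound Private Endpoint look something like this:

## APIM StandardV2 instance with public network access disabled and outbound VNet
## integration to a delegated subnet (names and values are placeholders)
##
resource "azurerm_api_management" "apim" {
  name                = "apim-ai-gateway"
  location            = azurerm_resource_group.rg_ai_gateway.location
  resource_group_name = azurerm_resource_group.rg_ai_gateway.name
  publisher_name      = "My Lab"
  publisher_email     = "lab@example.com"
  sku_name            = "StandardV2_1"

  public_network_access_enabled = false
  virtual_network_type          = "External"

  virtual_network_configuration {
    subnet_id = azurerm_subnet.snet_apim_outbound.id
  }

  identity {
    type = "SystemAssigned"
  }
}

## Private Endpoint for inbound access to the APIM gateway
##
resource "azurerm_private_endpoint" "pe_apim" {
  name                = "pe-apim-ai-gateway"
  location            = azurerm_resource_group.rg_ai_gateway.location
  resource_group_name = azurerm_resource_group.rg_ai_gateway.name
  subnet_id           = azurerm_subnet.snet_private_endpoints.id

  private_service_connection {
    name                           = "psc-apim-gateway"
    private_connection_resource_id = azurerm_api_management.apim.id
    is_manual_connection           = false
    subresource_names              = ["Gateway"]
  }
}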

One critical thing to take note of: whatever networking model you choose for APIM for this integration, you need to use a certificate issued by a trusted public CA (certificate authority). This is required because, as of the date of this post, the agent service does not support certificates issued by private CAs. The reason being, you have no ability to inject the root and intermediate certs into the trusted store of the agent compute. For this lab I used the Terraform ACME and Cloudflare providers. It's actually not bad at all to have a fresh cert provisioned directly as part of the pipeline for labbing and the like, and the best part is it's free for cheap people like myself. There is a sample of that code in the repo.
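If you're curious, a trimmed down sketch of that certificate provisioning looks roughly like the below (the hostname and variables are placeholders, and you'll want to double check the exact Cloudflare config keys against the ACME provider docs):

## Account key and registration for the ACME (Let's Encrypt) account
##
resource "tls_private_key" "acme_account_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "acme_registration" "reg" {
  account_key_pem = tls_private_key.acme_account_key.private_key_pem
  email_address   = "lab@example.com"
}

## Request a certificate for the gateway hostname using a Cloudflare DNS-01 challenge
##
resource "acme_certificate" "apim_cert" {
  account_key_pem          = acme_registration.reg.account_key_pem
  common_name              = "aigateway.example.com"
  certificate_p12_password = var.cert_password

  dns_challenge {
    provider = "cloudflare"
    config = {
      CF_DNS_API_TOKEN = var.cloudflare_api_token
    }
  }
}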

As I mentioned in my last post, the BYO AI Gateway integration with Foundry supports a static or dynamic setup. In the static model you define, directly in the connection metadata, the models you want to be made available to the connection (see my last post for an example). In the dynamic model the models are fetched by a call to the management.azure.com API. This latter option requires additional operations be defined in the API such as what you see below.

## Create an operation to support getting a specific deployment by name when using the Foundry APIM connection
##
resource "azurerm_api_management_api_operation" "apim_operation_openai_original_get_deployment_by_name" {
  depends_on = [
    azurerm_api_management_api.openai_original
  ]

  operation_id        = "get-deployment-by-name"
  api_name            = azurerm_api_management_api.openai_original.name
  api_management_name = azurerm_api_management.apim.name
  resource_group_name = azurerm_resource_group.rg_ai_gateway.name
  display_name        = "Get Deployment by Name"
  method              = "GET"
  url_template        = "/deployments/{deploymentName}"

  template_parameter {
    name     = "deploymentName"
    required = true
    type     = "string"
  }
}

## Create an operation to support enumerating deployments when using the Foundry APIM connection
##
resource "azurerm_api_management_api_operation" "apim_operation_openai_original_list_deployments_by_name" {
  depends_on = [
    azurerm_api_management_api_operation_policy.apim_policy_openai_original_get_deployment_by_name
  ]

  operation_id        = "list-deployments"
  api_name            = azurerm_api_management_api.openai_original.name
  api_management_name = azurerm_api_management.apim.name
  resource_group_name = azurerm_resource_group.rg_ai_gateway.name
  display_name        = "List Deployments"
  method              = "GET"
  url_template        = "/deployments"
}

You then define a policy for each of those operations to call the correct endpoint via the ARM API, like below. Notice I used the authentication-managed-identity policy snippet so the APIM managed identity is used to call the Foundry resource to fetch deployment information. If you're sharing the API across backends, make sure all backends have the same models deployed. If not, you'll need to incorporate some additional logic to hit each backend in the pool to ensure you don't return models that don't exist in a specific backend. This will require your APIM instance's managed identity to have at least the Azure RBAC Reader role over the Foundry resources.

## Create a policy for the get deployment by name operation to route to the Foundry APIM connection
##
resource "azurerm_api_management_api_operation_policy" "apim_policy_openai_original_get_deployment_by_name" {
  depends_on = [
    azurerm_api_management_api_operation.apim_operation_openai_original_get_deployment_by_name,
  ]

  api_name            = azurerm_api_management_api.openai_original.name
  operation_id        = azurerm_api_management_api_operation.apim_operation_openai_original_get_deployment_by_name.operation_id
  api_management_name = azurerm_api_management.apim.name
  resource_group_name = azurerm_resource_group.rg_ai_gateway.name

  xml_content = <<XML
<policies>
  <inbound>
    <authentication-managed-identity resource="https://management.azure.com/" />
    <rewrite-uri template="/deployments/{deploymentName}?api-version=${local.ai_services_arm_api_version}" copy-unmatched-params="false" />
    <!-- Specify a Foundry deployment that has the models deployed -->
    <set-backend-service base-url="https://management.azure.com${azurerm_cognitive_account.ai_foundry_accounts[keys(local.ai_foundry_regions)[0]].id}" />
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <base />
  </outbound>
  <on-error>
    <base />
  </on-error>
</policies>
XML
}

## Create a policy for the list deployments operation to route to the Foundry APIM connection
##
resource "azurerm_api_management_api_operation_policy" "apim_policy_openai_original_list_deployments_by_name" {
  depends_on = [
    azurerm_api_management_api_operation.apim_operation_openai_original_list_deployments_by_name
  ]

  api_name            = azurerm_api_management_api.openai_original.name
  operation_id        = azurerm_api_management_api_operation.apim_operation_openai_original_list_deployments_by_name.operation_id
  api_management_name = azurerm_api_management.apim.name
  resource_group_name = azurerm_resource_group.rg_ai_gateway.name

  xml_content = <<XML
<policies>
  <inbound>
    <authentication-managed-identity resource="https://management.azure.com/" />
    <rewrite-uri template="/deployments?api-version=${local.ai_services_arm_api_version}" copy-unmatched-params="false" />
    <!-- Azure Resource Manager -->
    <set-backend-service base-url="https://management.azure.com${azurerm_cognitive_account.ai_foundry_accounts[keys(local.ai_foundry_regions)[0]].id}" />
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <base />
  </outbound>
  <on-error>
    <base />
  </on-error>
</policies>
XML
}
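Since the policies above call the ARM API with the APIM managed identity, don't forget the Reader assignment I mentioned. A minimal sketch of that (assuming a system-assigned identity on the APIM instance and the same locals used elsewhere in my lab) looks like this:

## Grant the APIM system-assigned managed identity Reader over each Foundry resource so
## the dynamic discovery operations can enumerate deployments via the ARM API
##
resource "azurerm_role_assignment" "apim_reader_foundry" {
  for_each             = local.ai_foundry_regions
  scope                = azurerm_cognitive_account.ai_foundry_accounts[each.key].id
  role_definition_name = "Reader"
  principal_id         = azurerm_api_management.apim.identity[0].principal_id
}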

In my lab, I defined these two operations for both the classic OpenAI Inferencing and Authoring API and the v1 API. This allowed me to mess around with both static and dynamic APIM and Model Gateway connections.

Once you get Foundry hooked into APIM using this integration (I'll cover the Foundry part in the next post), you get access to some pretty neat information in the headers. As of the date of this post, these are some of the headers you'll see. You'll notice the x-forwarded-for path includes my endpoint's IP address as well as the IP of the container running in the managed Microsoft compute environment (notice that it's using CGNAT IP space, which clears up why customer use of CGNAT space is unsupported when using agents with VNet injection). The x-ms-foundry-project-id is the unique GUID of the project the agent was created under (could be useful for throttling and logging). The x-ms-foundry-agent-id is the unique identifier of the specific revision of the agent (again useful for logging and throttling). The x-ms-client-request-id is actually the Foundry project managed identity, not the agent identity, which is important to note. If you want to use Entra ID for the BYO AI Gateway APIM connection, you're going to be limited to this or an API key. There is a connection authentication option to use the agent's actual Entra ID Agent Identity, but I've only used that for the MCP Server feature of Foundry, never for this, so I'm not sure if it works or is supported.

{
"Authorization": "Bearer REDACTED",
"Content-Length": "474",
"Content-Type": "application/json; charset=utf-8",
"Host": "apimeusXXXXX.azure-api.net",
"Max-Forwards": "10",
"Correlation-Context": "leaf_customer_span_id=173926958944XXXXXX",
"traceparent": "00-62ff160923b2c1724242c037be40e7cb-4f1b402461aXXXXX-01",
"X-Request-ID": "96534855-a35a-481a-886d-XXXXXXXXXXXX",
"x-ms-client-request-id": "76ddf586-260b-4e37-8f4c-XXXXXXXXXXXX",
"openai-project": "sampleproject1",
"x-ms-foundry-agent-id": "TestAgent-ai-gateway-static:5",
"x-ms-foundry-model-id": "conn1apimgwstaticopenai/gpt-4o",
"x-ms-foundry-project-id": "455cbebf-a0bc-425e-99f6-XXXXXXXXXXX",
"x-forwarded-for": "100.64.9.87;10.0.9.213:10095",
"x-envoy-external-address": "100.64.9.87",
"x-envoy-expected-rq-timeout-ms": "1800000",
"x-k8se-app-name": "j8820ec0658b4aeXXXXX-dataproxy--vuww7ja",
"x-k8se-app-namespace": "wonderfulsky-a2fXXXXX",
"x-k8se-protocol": "http1",
"x-k8se-app-kind": "web",
"x-ms-containerapp-name": "j8820ec0658b4aeXXXXX-dataproxy",
"x-ms-containerapp-revision-name": "j8820ec0658b4aeXXXXX-dataproxy--vuww7ja",
"x-arr-ssl": "2048|256|CN=Microsoft Azure RSA TLS Issuing CA 04;O=Microsoft Corporation;C=US|CN=*.azure-api.net;O=Microsoft Corporation;L=Redmond;S=WA;C=US",
"x-forwarded-proto": "https",
"x-forwarded-path": "/v1/https/apimeusXXXXX.azure-api.net/openai/deployments/gpt-4o/chat/completions?api-version=2025-03-01-preview",
"X-ARR-LOG-ID": "76ddf586-260b-4e37-8f4c-XXXXXXXXXXXX",
"CLIENT-IP": "10.0.9.213:10095",
"DISGUISED-HOST": "apimeusXXXXX.azure-api.net",
"X-SITE-DEPLOYMENT-ID": "apimwebappXXXXXX6OTVsZqxOcTZLpubQ9iNmzQ8kzMOmkEhw",
"WAS-DEFAULT-HOSTNAME": "apimwebappXXXXXX6otvszqxoctzlpubq9inmzq8kzmomkehw.apimaseXXXXXXX6otvszqxoctz.appserviceenvironment.net",
"X-AppService-Proto": "https",
"X-Forwarded-TlsVersion": "1.3",
"X-Original-URL": "/openai/deployments/gpt-4o/chat/completions?api-version=2025-03-01-preview",
"X-WAWS-Unencoded-URL": "/openai/deployments/gpt-4o/chat/completions?api-version=2025-03-01-preview",
"X-Azure-JA4-Fingerprint": "t13d1113h2_d3731e0d3936_XXXXXXXXXXXX"
}

Using the information above, I crafted the policy below. It’s nothing fancy, but shows an example of throttling based on the project id and logging the agent identifier via the token metrics policy to potentially make chargeback more granular. Either way, these additional headers give you more to play with.

## Create an API Management policy for the OpenAI v1 API
##
resource "azurerm_api_management_api_policy" "apim_policy_openai_v1" {
  depends_on = [
    azurerm_api_management_api.openai_v1
  ]

  api_name            = azurerm_api_management_api.openai_v1.name
  api_management_name = azurerm_api_management.apim.name
  resource_group_name = azurerm_resource_group.rg_ai_gateway.name

  xml_content = <<XML
<policies>
  <inbound>
    <base />
    <!-- Evaluate the JWT and ensure it was issued by the right Entra ID tenant -->
    <validate-jwt header-name="Authorization" failed-validation-httpcode="403" failed-validation-error-message="Forbidden">
      <openid-config url="https://login.microsoftonline.com/${var.entra_id_tenant_id}/v2.0/.well-known/openid-configuration" />
      <issuers>
        <issuer>https://sts.windows.net/${var.entra_id_tenant_id}/</issuer>
      </issuers>
    </validate-jwt>
    <!-- Extract the Entra ID application id from the JWT -->
    <set-variable name="appId" value="@(context.Request.Headers.GetValueOrDefault("Authorization",string.Empty).Split(' ').Last().AsJwt().Claims.GetValueOrDefault("appid", "none"))" />
    <!-- Extract the Agent ID from the x-ms-foundry-agent-id header. This is only relevant for Foundry native agents -->
    <set-variable name="agentId" value="@(context.Request.Headers.GetValueOrDefault("x-ms-foundry-agent-id", "none"))" />
    <!-- Extract the project GUID from the x-ms-foundry-project-id header. This is only relevant for Foundry native agents -->
    <set-variable name="projectId" value="@(context.Request.Headers.GetValueOrDefault("x-ms-foundry-project-id", "none"))" />
    <!-- Extract the Foundry Project name from the "openai-project" header. This is only relevant for Foundry native agents -->
    <set-variable name="projectName" value="@(context.Request.Headers.GetValueOrDefault("openai-project", "none"))" />
    <!-- Extract the deployment name from the uri path -->
    <set-variable name="uriPath" value="@(context.Request.OriginalUrl.Path)" />
    <set-variable name="deploymentName" value="@(System.Text.RegularExpressions.Regex.Match((string)context.Variables["uriPath"], "/deployments/([^/]+)").Groups[1].Value)" />
    <!-- Set the X-Entra-App-ID header to the Entra ID application ID from the JWT -->
    <set-header name="X-Entra-App-ID" exists-action="override">
      <value>@(context.Variables.GetValueOrDefault<string>("appId"))</value>
    </set-header>
    <set-header name="X-Foundry-Agent-ID" exists-action="override">
      <value>@(context.Variables.GetValueOrDefault<string>("agentId"))</value>
    </set-header>
    <set-header name="X-Foundry-Project-Name" exists-action="override">
      <value>@(context.Variables.GetValueOrDefault<string>("projectName"))</value>
    </set-header>
    <set-header name="X-Foundry-Project-ID" exists-action="override">
      <value>@(context.Variables.GetValueOrDefault<string>("projectId"))</value>
    </set-header>
    <choose>
      <!-- If the request isn't from a Foundry native agent and is instead an application or external agent -->
      <when condition="@(context.Variables.GetValueOrDefault<string>("agentId") == "none" && context.Variables.GetValueOrDefault<string>("projectId") == "none")">
        <!-- Throttle token usage based on the appid -->
        <llm-token-limit counter-key="@(context.Variables.GetValueOrDefault<string>("appId","none"))" estimate-prompt-tokens="true" tokens-per-minute="10000" remaining-tokens-header-name="x-apim-remaining-token" tokens-consumed-header-name="x-apim-tokens-consumed" />
        <!-- Emit token metrics to Application Insights -->
        <llm-emit-token-metric namespace="openai-metrics">
          <dimension name="model" value="@(context.Variables.GetValueOrDefault<string>("deploymentName","None"))" />
          <dimension name="client_ip" value="@(context.Request.IpAddress)" />
          <dimension name="appId" value="@(context.Variables.GetValueOrDefault<string>("appId","00000000-0000-0000-0000-000000000000"))" />
        </llm-emit-token-metric>
      </when>
      <!-- If the request is from a Foundry native agent -->
      <otherwise>
        <!-- Throttle token usage based on the agentId -->
        <llm-token-limit counter-key="@($"{context.Variables.GetValueOrDefault<string>("projectId")}_{context.Variables.GetValueOrDefault<string>("agentId")}")" estimate-prompt-tokens="true" tokens-per-minute="10000" remaining-tokens-header-name="x-apim-remaining-token" tokens-consumed-header-name="x-apim-tokens-consumed" />
        <!-- Emit token metrics to Application Insights -->
        <llm-emit-token-metric namespace="llm-metrics">
          <dimension name="model" value="@(context.Variables.GetValueOrDefault<string>("deploymentName","None"))" />
          <dimension name="client_ip" value="@(context.Request.IpAddress)" />
          <dimension name="agentId" value="@(context.Variables.GetValueOrDefault<string>("agentId","00000000-0000-0000-0000-000000000000"))" />
          <dimension name="projectId" value="@(context.Variables.GetValueOrDefault<string>("projectId","00000000-0000-0000-0000-000000000000"))" />
        </llm-emit-token-metric>
      </otherwise>
    </choose>
    <choose>
      <!-- If the request is from a Foundry native agent -->
      <when condition="@(context.Variables.GetValueOrDefault<string>("agentId") != "none" && context.Variables.GetValueOrDefault<string>("projectId") != "none")">
        <authentication-managed-identity resource="https://cognitiveservices.azure.com/" />
      </when>
    </choose>
    <set-backend-service backend-id="${module.backend_pool_aifoundry_instances_openai_v1.name}" />
  </inbound>
  <backend>
    <forward-request />
  </backend>
  <outbound>
    <base />
  </outbound>
</policies>
XML
}

Summing it up

I was going to go crazy and incorporate the Foundry setup and testing into this post as well but decided against it. There is a point when the brain melts and if mine is already melting, yours may be as well. I’ll walk through those pieces in the next post. You have a few main takeaways. First, let’s review the high level setup of your AI Gateway.

  1. Create your backends that point to the Microsoft Foundry endpoints.
  2. Import the relevant API. If at all possible, go with the v1 API. It supports access to models beyond OpenAI models as well as additional features.
  3. Add the GET and LIST operations and define the relevant policies if you’re planning on supporting dynamic models vs static. Dynamic seems to make more sense to me, but I haven’t seen enough orgs adopt this yet to form a good opinion.
  4. Craft your custom policies. I highly recommend you regularly review the headers being passed. They could change and even better data may be added to them.

Next, let’s talk about key gotchas.

  1. The certificate used on your AI Gateway MUST be issued from a well-known public CA in order for it to be trusted by the agent running on Foundry compute. If it isn't, this integration will fail, and it may not fail in a way that makes it obvious the TLS session failure between the agent compute and the AI Gateway is to blame.
  2. If you're using APIM, think about the Private Endpoint and VNet integration pattern if you're able to use v2. If that won't work for you, or you're still using the classic SKU and want to support the managed VNet, you'll likely need to incorporate an Application Gateway in front of your AI Gateway. This means more operational overhead and cost.
  3. While every Foundry Agent (v2) is given an Entra ID Agent Identity created from the Entra ID Agent Blueprint associated to the project, when using the ProjectManagedIdentity authentication type, you’ll see the project’s managed identity in the logs. If you’re able to test with the agent identity authentication type, let me know.
  4. Really noodle on how you can use the project headers for throttling and possibly chargeback. It makes a ton of sense if you’re aligning your Foundry account and project model correctly.

See you next post!

Microsoft Foundry – BYO AI Gateway – Part 2

Hello again! Today I’m going to continue my series on Microsoft Foundry’s new support for the BYO AI Gateway. In my past few posts I’ve walked through the evolution of Foundry and covered at a high level what an AI Gateway is and the problem this feature solves. In this post we’re gonna get down and dirty with the technical details on setting this up within Microsoft Foundry. I’ll do a follow-up post to focus on the APIM (API Management) configuration. Grab your coffee and put on your thinking music (for me that is some Blink and Third Eye Blind. Yeah, I’m old.).

Let’s get to it!

Current State Architecture

My customer base is primarily in regulated industries, so most of my customers are still at the experimentation stage with the Foundry Agent Service. Given these customers have strict security requirements, they are largely using the agent service with the standard agent configuration. In this configuration the outbound traffic (subsets of it, but that is a much larger conversation) can be tunneled through the customer virtual network for centralized logging, mediation, and facilitating access to private resources (again, with limitations today) through what the product group calls VNet injection but what I'd say is more closely described as VNet integration via a delegated subnet. Threads (conversations in v2 agents) and agent metadata are stored in a Cosmos DB, vector stores created by an agent from tools such as the File Search tool are stored in AI Search, and files uploaded to the Foundry resource by users are stored in a Storage Account. These resources are all provisioned by the customer into the customer subscription and fully managed by the customer (RBAC, encryption, HA settings, etc). Private Endpoints for each resource are created within the customer's virtual network and made accessible from the agent delegated subnet. The whole environment looks similar to what you see below.

Foundry Agent Service – Standard Agent Configuration with VNet Injection

As I covered in my last post, as of the date of this post Foundry native agents can only consume models deployed to their own Foundry resource. This creates an issue for customers wanting the governance of the models, visibility into the use of the LLMs, and the improved security posture and operational optimizations an AI Gateway can provide when it sits between the agent and the model. For now, customers are working around this using what I refer to as external agents. External agents run outside of Microsoft Foundry on customer-managed compute like an on-premises Kubernetes cluster or an Azure Function deployed to the customer subscription. The downfall of this direction is these external agents live on compute customers have to manage and can't access many of the tools available to Foundry-native agents. This is the problem the BYO AI Gateway feature is attempting to fix.

No BYO Gateway vs BYO Gateway

Foundry resource architecture

Here is where the new connection type introduced in Foundry comes to the rescue. Before I dive into the details of that, I think it's helpful to level set a bit on the resource hierarchy within Foundry. At the top is the top-level Azure resource referred to as the Foundry service, which under the hood is a Cognitive Services account. The relevant resources for this discussion sit below the account resource and are projects, deployments, and connections. Projects serve a few purposes, two of them being logical boundaries around connections (at the management plane) and agents (at the data plane) provisioned under the projects. Deployments of models (such as GPT-5) are children of the account and are made available to all projects within the account. The account can also have connection objects which can be shared across projects.

Relevant resource hierarchy

For the purposes of this discussion, I'm going to focus on the connection objects. Connection objects can be created at the account level and project level as discussed above. In the standard agent configuration, you'll create a number of different connections during setup including connections to Cosmos, AI Search, and Azure Storage. Additional common connections could be to an App Insights instance for tracing or a Grounding with Bing Search resource to use with the Grounding with Bing tool. Connection objects will contain some type of pointer, like a URI, and a credential. That credential is usually an API key, some Entra ID-based authentication mechanism, or general OAuth.

Connections are created at the account level when the Foundry account itself needs to access them. This could be for the usage of Content Understanding, a Key Vault for storing connection secrets (API keys) in a customer subscription, or an App Insights instance used for tracing. From what I've observed, you will create connections at the account level if they need to be shared across all projects OR they're used by the Foundry resource in general vs some type of project construct. Connections used by projects can also be created at the project level. When you provision a standard agent for example, you'll create connection objects to the Cosmos DB, Storage Account, and AI Search resources mentioned above. The new category of connections for this post will be created at the project level. I've had mixed results with how effectively connection objects at the account level can be used downstream by the projects.

APIM and Model Gateway Connections

The BYO AI Gateway feature uses two new types of connection categories: ApiManagement and ModelGateway. These objects are the glue that allows Foundry native agents to route requests for models through an AI Gateway. When you're connecting to an APIM instance, you should ideally use the ApiManagement category, and when you're connecting to a third-party gateway you'll use the ModelGateway category.

As of the date of this blog post, these connection objects have the following schema (relevant properties to this discussion only):

name: The name of the connection (needs to be less than 60 characters in my testing)
properties: {
  category: ApiManagement or ModelGateway
  target: The URI you want the agent to connect to
  authType: For ApiManagement this can be ApiKey or ProjectManagedIdentity
  credentials: This will be populated with the value of the API key if using that authType
  isSharedToAll: true or false if you want this shared across all projects

  # ApiManagement category with static models
  metadata: {
    deploymentInPath: true or false
    inferenceAPIVersion: API version used for inferencing (not used if using OpenAI v1 API)
    # Models discussed in detail below
    models: "[{\"name\":\"gpt-4o\",\"properties\":{\"model\":{\"format\":\"OpenAI\",\"name\":\"gpt-4o\",\"version\":\"2024-08-06\"}}}]"
  }

  # ApiManagement category with dynamic discovery
  metadata: {
    deploymentAPIVersion: ARM API version for CognitiveServices/accounts/deployments API calls
    deploymentInPath: true or false
    inferenceAPIVersion: API version used for inferencing (not used if using OpenAI v1 API)
  }

  # ModelGateway category with static models
  metadata: {
    deploymentInPath: true or false
    inferenceAPIVersion: API version used for inferencing (not used if using OpenAI v1 API)
    # Models discussed in detail below
    models: "[{\"name\":\"gpt-4o\",\"properties\":{\"model\":{\"format\":\"OpenAI\",\"name\":\"gpt-4o\",\"version\":\"2024-08-06\"}}}]"
  }

  # ModelGateway category with dynamic models
  metadata: {
    deploymentInPath: true or false
    inferenceAPIVersion: API version used for inferencing (not used if using OpenAI v1 API)
    deploymentAPIVersion: ARM API version for CognitiveServices/accounts/deployments API calls
    modelDiscovery: "{\"deploymentProvider\":\"AzureOpenAI\",\"getModelEndpoint\":\"/deployments/{deploymentName}\",\"listModelsEndpoint\":\"/deployments\"}"
  }
}

I’ll walk through each of these properties in as much detail as I’ve been able to glean from them with my testing.

The category property is self-explanatory. You either set this to ApiManagement (if using APIM) or ModelGateway (if using a third-party AI Gateway like Kong or LiteLLM).

The target property is the URI you want the agent to try to connect to. As an example, if I create an API on my APIM instance for the v1 OpenAI API named openai-v1, my target would look like "https://myapim.azure-api.net/openai-v1/v1". As of the date of this blog post, you MUST use the azure-api.net FQDN for the APIM. If you try to use a custom domain you'll get an error back telling you that it's not supported. I have a request into the product group to lift this limitation. I'll update this if that is done. For a third-party model gateway, this property serves the same purpose but can be any valid domain.

The authType property is going to be either ApiKey or ProjectManagedIdentity for an APIM connection. ProjectManagedIdentity will authenticate to the upstream APIM using the agent's project's Entra ID managed identity. When using ProjectManagedIdentity you must also specify the audience property and set it to https://cognitiveservices.azure.com if connecting to a backend Foundry resource hosting models. For a model gateway connection this will be either ApiKey or OAuth. Details on the OAuth setup can be found in the samples GitHub (I haven't mucked with it yet). If you're using the authType of ApiKey you additionally need to pass the credentials property, which includes a key property holding the API key, similar to what you see below.

authType: ApiKey
credentials = {
key = MYAPIKEY
}

I haven’t messed extensively with the isSharedToAll property as of yet. For my use case I set this to false so each project got its own connection object. You may be able to create this object at the account level and set the isSharedToAll property, but I haven’t tested that yet. If you have, def let me know if that works.

Ok, now on to the property that can bring the most pain. Here we have the metadata property. This property is the main guts that make this whole thing work. A few considerations: if doing this with Terraform or REST (can't speak to Bicep or ARM), each of the properties I'm going to cover is CASE SENSITIVE. If you use the wrong casing, your connection object will not work. When connecting to an APIM or model gateway you can have Foundry either enumerate the models available (called dynamic discovery) or you can provide the exact models you want to expose (called static models).

Let's first cover static models. Here is an example of me creating a connection to an APIM instance with static models using the authType of ProjectManagedIdentity. One thing to note is that in my APIM backend object I'm appending /v1 to the backend path rather than doing it in this connection object.

{
  "id": "/subscriptions/X/resourceGroups/X/providers/Microsoft.CognitiveServices/accounts/X/projects/sampleproject1/connections/conn1apimgwstaticopenai-v1",
  "name": "conn1apimgwstaticopenai-v1",
  "properties": {
    "audience": "https://cognitiveservices.azure.com",
    "authType": "ProjectManagedIdentity",
    "category": "ApiManagement",
    "isSharedToAll": false,
    "metadata": {
      "deploymentInPath": "false",
      "inferenceAPIVersion": null,
      "models": "[{\"name\":\"gpt-4o\",\"properties\":{\"model\":{\"format\":\"OpenAI\",\"name\":\"gpt-4o\",\"version\":\"2024-08-06\"}}}]"
    },
    "target": "https://X.azure-api.net/openai-v1"
  }
}

Since I'm using the v1 Azure OpenAI API, I don't need to specify an inferenceAPIVersion. If I was using the classic API I'd need to specify the version (such as 2025-04-01-preview). Notice also I have set deploymentInPath to false. When set to true the connection will add the /deployments/<deployment name> segment to the path. For the v1 API this isn't required. Finally, you have the models property. With a static model setup I list out the models I'm exposing to the connection. If you're using Terraform, you MUST wrap the models in the jsonencode function. If you don't, it will not work. The static model option is pretty helpful if you want to strictly control exactly what models the project is getting access to.
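To make that concrete, here's a rough AzApi sketch of a static connection like the one above (the API version, parent reference, and target hostname are placeholders from my lab; note the jsonencode wrapping the models value and the casing of the metadata keys):

## Project-scoped APIM connection with static models. The API version, parent reference,
## and target hostname are placeholders - swap in your own.
##
resource "azapi_resource" "conn_apim_static_openai_v1" {
  type      = "Microsoft.CognitiveServices/accounts/projects/connections@2025-06-01"
  name      = "conn1apimgwstaticopenai-v1"
  parent_id = azapi_resource.ai_foundry_project.id

  ## Preview properties may not be in the published schema yet
  schema_validation_enabled = false

  body = {
    properties = {
      category      = "ApiManagement"
      target        = "https://myapim.azure-api.net/openai-v1"
      authType      = "ProjectManagedIdentity"
      audience      = "https://cognitiveservices.azure.com"
      isSharedToAll = false

      ## Remember the metadata keys are case sensitive
      metadata = {
        deploymentInPath = "false"
        ## The models value must be a JSON string, hence the jsonencode
        models = jsonencode([
          {
            name = "gpt-4o"
            properties = {
              model = {
                format  = "OpenAI"
                name    = "gpt-4o"
                version = "2024-08-06"
              }
            }
          }
        ])
      }
    }
  }
}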

Let's now switch over to dynamic discovery. Dynamic discovery requires you to define a few additional operations inside of your API. The details can be found in this GitHub repo, but the basics of it are that you define an operation for a GET on a specific model and a LIST to find all the models available. These operations are management plane operations against the ARM API to retrieve deployment information. Here is an example of a setup with dynamic discovery using an APIM connection.

{
  "id": "/subscriptions/X/resourceGroups/X/providers/Microsoft.CognitiveServices/accounts/X/projects/sampleproject1/connections/conn1apimgwdynamicopenai-v1",
  "location": null,
  "name": "conn1apimgwdynamicopenai-v1",
  "properties": {
    "audience": "https://cognitiveservices.azure.com",
    "authType": "ProjectManagedIdentity",
    "category": "ApiManagement",
    "group": "AzureAI",
    "isSharedToAll": false,
    "metadata": {
      "deploymentAPIVersion": "2024-10-01",
      "deploymentInPath": "false",
      "inferenceAPIVersion": null
    },
    "target": "https://X.azure-api.net/openai-v1"
  },
  "type": "Microsoft.CognitiveServices/accounts/projects/connections"
}

When doing dynamic discovery, you'll see the deploymentAPIVersion property set to the API version for the GET and LIST deployment operations of the ARM REST API. I added these operations into the API after I imported the v1 OpenAI spec. You can see an example in Terraform I put together in my lab repo. Dynamic discovery is a great solution when you want the developer to have access to any new deployments you may push to the Foundry resources.
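The Terraform for a dynamic connection looks nearly identical to the static sketch earlier; the metadata block is the only real difference (same placeholder caveats apply):

## Project-scoped APIM connection using dynamic discovery. No models property - the GET and
## LIST operations defined on the APIM API are used to enumerate deployments instead.
## API version, parent reference, and target hostname are placeholders.
##
resource "azapi_resource" "conn_apim_dynamic_openai_v1" {
  type      = "Microsoft.CognitiveServices/accounts/projects/connections@2025-06-01"
  name      = "conn1apimgwdynamicopenai-v1"
  parent_id = azapi_resource.ai_foundry_project.id

  schema_validation_enabled = false

  body = {
    properties = {
      category      = "ApiManagement"
      target        = "https://myapim.azure-api.net/openai-v1"
      authType      = "ProjectManagedIdentity"
      audience      = "https://cognitiveservices.azure.com"
      isSharedToAll = false

      metadata = {
        deploymentAPIVersion = "2024-10-01"
        deploymentInPath     = "false"
      }
    }
  }
}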

I’m not going to run through the ModelGateway connection categories because they will largely emulate what you see above with some minor differences. The official Foundry samples GitHub repo has the gory details. I also have examples in Terraform available in my own repo (if you dare subject yourself to reading my code).

Ok, so now you understand the basics of setting up the connection and what you need to do on the APIM side. For more details on setting up APIM you can reference this official repo.

Summing It Up

Ok, so now you understand the basic connection object, how to set it up, and how it works. I'm going to cut it here and continue in another post where I'll dig into the dirty details of how it looks to use this because I don't want to overload your brain (and mine) with a super long post.

Before I jet I want to provide some critical resources:

  1. My AMAZING peer Piotr Karpala has put together a repository with examples of this pattern (and some 3rd-party integrations) with Bicep. The stuff in there is gold. He was also my late night buddy helping me work through the quirks of this integration. Couldn't have gotten it done without him (or at least I would have broken many keyboards).
  2. The Product Group's official samples and explanations of the setup are located here. I'd highly recommend referencing them because they will always have more up to date instructions than my blog.
  3. I've put together some Terraform samples for my own purposes which you are welcome to reference, loot for your own means, and laugh at my pathetic coding ability. Check out this one for the Foundry portion and this one for the APIM portion.

And here are your tips for this post:

  1. RTFM. Seriously, read the official documentation. Today, this integration is challenging to put in place. If you try to lone wolf it, let me know how many keyboards end up being thrown through your window.
  2. If you're coding in Terraform or making REST calls to create these connections, remember CASE SENSITIVITY matters. If you use the wrong casing, the resource will still create but it won't work. You'll get very frustrated trying to troubleshoot it.
  3. If you’re coding in Terraform don’t forget to use the jsonencode function on the models property. If you skip that, the resource will create but shit will not work.
  4. This is only supported for prompt agents today.
  5. Don’t forget this is public preview. So test it, but expect things to change and don’t throw this into production.

In the next post I’ll walk through how you can test the integration, some of the quirks and considerations for identity and authentication, and some of the neat APIM policy you can craft given some of the new information that is sent in the request.

See you next post!

Microsoft Foundry – BYO AI Gateway – Part 1

Hello again fellow geeks!

Today I'm going to continue my series on Microsoft Foundry by covering a really cool new feature that dropped into public preview recently. This new feature allows you to connect Foundry to a first or third-party AI Gateway (BYO AI Gateway is a more appropriate description of this feature). This AI Gateway could be API Management or it could be a third-party solution. Yes folks, this means agents built in the Foundry Agent Service (which I will be referring to as Foundry-native agents) can have their requests to the models forced through an AI Gateway where you can incorporate additional governance and visibility vs hitting models deployed directly in the same Foundry resource. Before I dive into the details, let me clear up some confusion that has been popping up in my customer base.

Microsoft Foundry Resources vs Microsoft Foundry Hubs (FKA AI Foundry Hubs FKA AI Studio Hubs)

In my last post, I walked through some of the history of Foundry and how it got to where it is today. If you want the full gory details, read that post. For this post, I'm going to provide a very abridged version. When I refer to a Foundry resource (which I will also refer to as a Foundry account, and which you'll sometimes see the docs refer to as Foundry Projects) I'm referring to the new top-level Azure resource that sits under the Cognitive Services resource provider. This resource inherits the basic framework you'd see in a Cognitive Services account with additional capabilities to support child logical containers called projects, which are largely used to support the Foundry Agent Service. This is what I refer to as Stage 3 for Foundry and should be the resource you create today for any use case where you would have historically built an Azure OpenAI Service resource or Foundry Hub resource.

Foundry Hubs, which I refer to as Stage 2 for Foundry, are top-level resources under the Machine Learning resource provider. The service was essentially a light overlay on top of AML (Azure Machine Learning) workspaces and came with all the complexity that AML came with. While Foundry Hubs support the Agent Service in preview, the product isn't going to see any further development to my understanding. You shouldn't be creating new Foundry Hubs right now, and you should be preparing for migration by understanding what you won't get with the new Foundry resource (basically no Prompt Flow (going bye bye) and no Hugging Face models (yet)). You should instead be focusing on Foundry resources.

Foundry Evolutionary Stages

Awesome, we should be level set now that all the functionality I’m talking about is Stage 3 Foundry resource capabilities.

What’s an AI Gateway again?

Next you might be asking, “WTF is an AI Gateway”. Every vendor has an explanation, and since I work for Microsoft today, I’m gonna direct you to their overview. With my corporate duty fulfilled, I’ll give you the generic explanation. An AI Gateway is an architectural component that you place between an application or agent runtime and the LLM to establish governance of the models, create visibility into the use of the LLMs, and improve security posture and optimize operations. Now you’re likely saying, “WTF Matt, that sounds like ivory tower shit.” To break it down even further, it’s simply a rebranded API Gateway with additional functionality and features catered around the challenges that get introduced when consuming LLMs.

An AI Gateway will provide features like those pictured below:

In the above image we see the AI Gateway sitting between the applications and agent runtimes and the LLMs. By mediating and controlling this connection we can do some cool stuff. This includes swapping authentication contexts, doing fine grained authorization, load balancing across multiple instances of an LLM to maximize token capacity, caching responses to reduce costs and improve speeds, controlling how many tokens a specific app/agent can consume, routing requests to specific models based upon cost or speed, using the gateway as an MCP Server to front tools both internal and external to your environment, or getting more visibility into who is consuming what and how much they're consuming for chargebacks to specific business units in an enterprise. You'll likely want and need to start offering models at an enterprise level akin to other centralized services you may be providing like authentication services, DNS, and the like (yeah, it will be that core to your BUs moving forward).

The problem this feature solves

Historically, native agents built within the Foundry Agent Service were built such that the agents could only consume models in the same Foundry resource (yeah yeah, I'm aware of the external OpenAI Service connection, but that workaround wasn't built to solve this problem). This presented an issue where if an enterprise wanted to insert an AI Gateway between these agents and the models, they were blocked from using the Foundry Agent Service. For agents running on customer-managed compute on-premises, in AWS, or in Azure (I'll refer to these as external agents) this wasn't a problem, but that scenario forces you to manage the compute and (today at least) you don't get access to some of the Foundry agent tools such as Grounding with Bing Search. The managed compute and access to these tools has made the Foundry Agent Service appealing, but the lack of support for inserting an AI Gateway into the flow was one of the limitations that pushed customers in the external agent direction vs Foundry-native agents.

AI Gateway today

What's a bit confusing is Microsoft introduced a feature called AI Gateway in Microsoft Foundry back at Ignite in November 2025. I like to refer to this as a (kinda) "managed AI Gateway". I don't have a ton of data points on it, because I've only played with it a small bit and none of my customer base is using it. While the pitches may read the same, the architecture differs. The managed AI Gateway has a tighter coupling with the Foundry resource and provides a limited set of features (such as token throttling) vs the BYO AI Gateway feature, which has a ton more flexibility. A good example is that the Foundry resource and the APIM it provisions (which it uses as the AI Gateway) need to be in the same subscription. I'm sure the managed AI Gateway offering has its use cases, but I like the more decoupled and feature rich approach of the BYO AI Gateway. The managed AI Gateway feature is something to watch though; when it becomes more "managed" (aka APIM doesn't get deployed to the customer sub), something you can swap on with a toggle to get some of the basic controls (like token limits via agent or project quotas), it will become very appealing to customers with basic requirements that don't need a more complex AI Gateway solution.

I’m a fan of the more decoupled approach the BYO AI Gateway takes because I can completely separate the Foundry resources which hold the agents from the Foundry resources that have model deployments. These Foundry resources can be placed in separate subscriptions to create a separate security and compliance blast radius and be managed by completely separate teams. For me, this makes a lot more sense because I’m a huge believer in generative AI models becoming a core service Central IT governs and provides to the enterprise. With this pattern you can establish that level of separation and centralized control.

BYO AI Gateway

In the image above you can see I now have Foundry resources deployed for my BUs which don't have any model deployments. All models are deployed to my Central IT-managed Foundry resources which sit behind an AI Gateway. This gives me a TON of power to insert governance, improve visibility, strengthen my security posture, and optimize operations.

Another added benefit of the BYO AI Gateway is I’m not limited to API Management like I am with the managed AI Gateway offering. I can use whatever product I want to use as an AI Gateway like a Kong, LiteLLM, Apigee, AWS API Gateway, or even a custom built gateway.

Wrapping It Up

I was originally going to make one super mega post which had this overview and all the in-the-weeds stuff. I figured that would simply be too much (for you and me) so instead I’ll be breaking this into three posts. This post got you familiar with why this feature is so important to complex enterprises. You should be planning your larger strategy to consider this feature if you are POCing and designing for eventual support of the Foundry Agent Service.

In my next posts I’m going to dive deep in the weeds walking through how this thing works behind the scenes and how to set it up. Many painful nights were spent getting this thing spun up when it was in Private Preview and the documentation for the feature is still evolving, so I’m hoping that deep dive will get you mucking with this feature sooner rather than later.

Thanks folks!

Microsoft Foundry – The Evolution (Revisited)

Hi folks! In the past I did a series on the Azure OpenAI Service and Microsoft Foundry Hubs (FKA AI Foundry Hubs FKA AI Studio). Instead of going through and updating all those posts and losing the historical content and context (I don't know about you, but I love having the historical context of a service) I'm instead going to preserve it as is and spin up a new series on the latest iteration of Microsoft Foundry. I'll likely keep much of the general framework of the older series because it seemed to work. One additional piece I'll be including in this series is some of the quirks of the service I've run into, to potentially save you the pain of having to troubleshoot them. For this first post, I'm going to start things off by explaining how the service has evolved. As always, my persona focus here is my fellow folks in the central IT and infrastructure space.

The history

Way back in 2023 the hype behind generative AI really started to go insane. Microsoft managed to negotiate rights to host OpenAI's models in Azure and introduced the Azure OpenAI Service. The demand across customers was insane, with every business unit (BU) wanting it yesterday. Microsoft initially offered the service within the Cognitive Services framework under the Cognitive Services resource provider. This meant it inherited many of the controls native to Cognitive Services which included Private Endpoints, a limited set of outbound controls, support for API key and Entra ID authentication, and support for Azure RBAC for authorization. Getting the service deployed was pretty straightforward, with the hold-ups to deployment being more about concerns around LLM security in general. Deployment typically looked like the architecture below.

Azure OpenAI Service

As folks started to build their AI applications, they tapped into other services under the Cognitive Services umbrella like Content Safety, Speech-to-Text, and the like. These services fit in nicely as they also fell under the Cognitive Services umbrella and had a similar architecture as the above, requiring deployment of the resource and the typical private endpoint and authentication/authorization (authN/authZ) configuration.

I like to think of this as stage 1 of the Microsoft’s AI offerings.

Microsoft then wanted to offer more models, including models they had built such as Phi and third-party models such as Mistral. This drove them to create a new resource called an AI Services resource. This resource fell under the Cognitive Services resource provider, and again inherited similar architectures as above. Beyond hosting third-party models, it also included an endpoint to consume OpenAI models and some of the pool of Cognitive Services. This is where we begin to see the collapse of Microsoft's AI services under a single top-level resource.

What about building AI apps though? This is where Foundry Hubs (FKA AI Studio) were introduced. The intent of Foundry Hubs was to be the one stop shop for developers to create their AI apps. Here developers could experiment with LLMs using the playgrounds, build AI apps with Prompt Flow, build agents, or deploy 3rd party LLMs from Hugging Face. Foundry Hubs were a light overlay on top of the Azure Machine Learning (AML) service utilizing a new feature of AML built specifically for Foundry called AML Hubs. Foundry Hubs inherited a number of capabilities of AML such as its managed compute (to host 3rd party models and run prompt flows) and its managed virtual network (to host the managed compute).

Microsoft Foundry Hubs

While this worked, anyone who has built a secure AML deployment knows that shit ain't easy. Getting the service working requires extensive knowledge of its identity and networking configuration. This was a pain point for many customers in my experience. Many struggled to get it up and running due to the complexity.

Example of complexity of Microsoft Foundry IAM model

I think of the combination of AI Services and Microsoft Foundry Hubs as stage 2 of Microsoft’s journey.

Ok, shit was complicated, I ain’t gonna lie. Given this complexity and feedback from the customers, Microsoft got ambitious and decided to further consolidate and simplify. This introduced the concept of a new top-level resource called Microsoft Foundry Accounts. In public documentation and conversation this may be referred to as Foundry Projects or Foundry Resources. Since this is my blog I’m going to use my term which is Microsoft Foundry Accounts. With Microsoft Foundry accounts, Microsoft collapsed the AI Services and Foundry Hubs into a single top level resource. Not only did they consolidate these two resources, they also shifted Foundry Hubs from the Azure Machine Learning resource provider into the Cognitive Services resource provider. This move consolidated the Cognitive Services resource provider as the “AI” resource provider in my brain. It resulted in a new architecture which often looks something like the below.

Microsoft Foundry Accounts common architecture

This is what I like to refer to as stage 3, which is the current stage we are in with Microsoft's AI offerings. We will continue to see this stage evolve with more features built and integrated into the Microsoft Foundry Account. I wouldn't be surprised at all to see other services collapse into it as just another endpoint on the singular resource.

Why do you care?

You might be asking, "Matt, why the hell do I care about this?" The reason you should care is because there are many customers who jumped into these products at different stages. I run across a ton of customers still playing in Foundry Hubs with only a vague understanding that Foundry Hubs are an earlier stage and that they should begin transitioning to stage 3. This evolution is also helpful to understand because it gives an idea of the direction Microsoft is taking its generative AI services, which is key to how you should be planning your future use of these services within Azure.

I'll dive into far more detail in future posts about stage 3. I'll share some of my learnings (and my many pains), some reference architectures that I've seen work, and how I've seen customers successfully secure and scale usage of Foundry Accounts.

For now, I leave you with this evolution diagram I like to share with customers. For me, it really helps land the stages and the evolution, what is old and what is new, and what services I need to think about focusing on and which I should think about migrating off of.

Foundry evolution

Well folks, that wraps it up. Your takeaways today are:

  1. Assess which stage your implementation of generative AI is right now in Azure.
  2. Begin plans to migrate to stage 3 if you haven't already. Know that there will be gaps in functionality between Foundry Hubs and Foundry Accounts. A good example is no more Prompt Flow. There are others, but many will eventually land in Foundry Accounts.

See you next post!