diff --git a/modules/ROOT/nav.adoc b/modules/ROOT/nav.adoc index 422c6f540..ffbee5297 100644 --- a/modules/ROOT/nav.adoc +++ b/modules/ROOT/nav.adoc @@ -38,10 +38,42 @@ ***** xref:ai-agents:mcp/remote/manage-servers.adoc[Manage Servers] ***** xref:ai-agents:mcp/remote/scale-resources.adoc[Scale Resources] ***** xref:ai-agents:mcp/remote/monitor-activity.adoc[Monitor Activity] +**** xref:ai-agents:mcp/remote/pipeline-patterns.adoc[MCP Server Patterns] *** xref:ai-agents:mcp/local/index.adoc[Redpanda Cloud Management MCP Server] **** xref:ai-agents:mcp/local/overview.adoc[Overview] **** xref:ai-agents:mcp/local/quickstart.adoc[Quickstart] **** xref:ai-agents:mcp/local/configuration.adoc[Configure] +** xref:ai-agents:ai-gateway/index.adoc[AI Gateway] +*** xref:ai-agents:ai-gateway/what-is-ai-gateway.adoc[Overview] +*** xref:ai-agents:ai-gateway/gateway-quickstart.adoc[Quickstart] +*** xref:ai-agents:ai-gateway/gateway-architecture.adoc[Architecture] +*** For Administrators +**** xref:ai-agents:ai-gateway/admin/setup-guide.adoc[Setup Guide] +*** For Builders +**** xref:ai-agents:ai-gateway/builders/discover-gateways.adoc[Discover Gateways] +**** xref:ai-agents:ai-gateway/builders/connect-your-agent.adoc[Connect Your Agent] +**** xref:ai-agents:ai-gateway/cel-routing-cookbook.adoc[CEL Routing Patterns] +**** xref:ai-agents:ai-gateway/mcp-aggregation-guide.adoc[MCP Gateway] +//*** Observability +//**** xref:ai-agents:ai-gateway/observability-logs.adoc[Request Logs] +//**** xref:ai-agents:ai-gateway/observability-metrics.adoc[Metrics and Analytics] +//*** xref:ai-agents:ai-gateway/migration-guide.adoc[Migrate] +//*** xref:ai-agents:ai-gateway/integrations/index.adoc[Integrations] +//**** Claude Code +//***** xref:ai-agents:ai-gateway/integrations/claude-code-admin.adoc[Admin Guide] +//***** xref:ai-agents:ai-gateway/integrations/claude-code-user.adoc[User Guide] +//**** Cline +//***** xref:ai-agents:ai-gateway/integrations/cline-admin.adoc[Admin Guide] +//***** xref:ai-agents:ai-gateway/integrations/cline-user.adoc[User Guide] +//**** Continue.dev +//***** xref:ai-agents:ai-gateway/integrations/continue-admin.adoc[Admin Guide] +//***** xref:ai-agents:ai-gateway/integrations/continue-user.adoc[User Guide] +//**** Cursor IDE +//***** xref:ai-agents:ai-gateway/integrations/cursor-admin.adoc[Admin Guide] +//***** xref:ai-agents:ai-gateway/integrations/cursor-user.adoc[User Guide] +//**** GitHub Copilot +//***** xref:ai-agents:ai-gateway/integrations/github-copilot-admin.adoc[Admin Guide] +//***** xref:ai-agents:ai-gateway/integrations/github-copilot-user.adoc[User Guide] * xref:develop:connect/about.adoc[Redpanda Connect] ** xref:develop:connect/connect-quickstart.adoc[Quickstart] diff --git a/modules/ai-agents/pages/ai-gateway/admin/setup-guide.adoc b/modules/ai-agents/pages/ai-gateway/admin/setup-guide.adoc new file mode 100644 index 000000000..155b80e01 --- /dev/null +++ b/modules/ai-agents/pages/ai-gateway/admin/setup-guide.adoc @@ -0,0 +1,396 @@ += AI Gateway Setup Guide +:description: Set up AI Gateway for your organization. Enable providers, configure failover for high availability, set budget controls, and create gateways with team-level isolation. 
+:page-topic-type: how-to +:personas: platform_admin +:learning-objective-1: Enable LLM providers and models in the catalog +:learning-objective-2: Create and configure gateways with routing policies, rate limits, and spend limits +:learning-objective-3: Set up MCP tool aggregation for AI agents + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] + +This guide walks administrators through the setup process for AI Gateway, from enabling LLM providers to configuring routing policies and MCP tool aggregation. + +After completing this guide, you will be able to: + +* [ ] Enable LLM providers and models in the catalog +* [ ] Create and configure gateways with routing policies, rate limits, and spend limits +* [ ] Set up MCP tool aggregation for AI agents + +== Prerequisites + +* Access to the Redpanda Cloud Console with administrator privileges +* API keys for at least one LLM provider (OpenAI, Anthropic, Google AI) +* (Optional) MCP server endpoints if you plan to use tool aggregation + +== Enable a provider + +Providers represent upstream services (Anthropic, OpenAI, Google AI) and associated credentials. Providers are disabled by default and must be enabled explicitly by an administrator. + +. In the Redpanda Cloud Console, navigate to *Agentic AI* → *Providers*. +. Select a provider (for example, Anthropic). +. On the Configuration tab for the provider, click *Add configuration*. +. Enter your API Key for the provider. ++ +TIP: Store provider API keys securely. Each provider configuration can have multiple API keys for rotation and redundancy. + +. Click *Save* to enable the provider. + +Repeat this process for each LLM provider you want to make available through AI Gateway. + +== Enable models + +The model catalog is the set of models made available through the gateway. Models are disabled by default. After enabling a provider, you can enable its models. + +The infrastructure that serves the model differs based on the provider you select. For example, OpenAI has different reliability and availability metrics than Anthropic. When you consider all metrics, you can design your gateway to use different providers for different use cases. + +. Navigate to *Agentic AI* → *Models*. +. Review the list of available models from enabled providers. +. For each model you want to expose through gateways, toggle it to *Enabled*. For example: ++ +-- +* `openai/gpt-5.2` +* `openai/gpt-5.2-mini` +* `anthropic/claude-sonnet-4.5` +* `anthropic/claude-opus-4.6` +-- + +. Click *Save changes*. + +Only enabled models will be accessible through gateways. You can enable or disable models at any time without affecting existing gateways. + +=== Model naming convention + +Model requests must use the `vendor/model_id` format in the model property of the request body. This format allows AI Gateway to route requests to the appropriate provider. For example: + +* `openai/gpt-5.2` +* `anthropic/claude-sonnet-4.5` +* `openai/gpt-5.2-mini` + +ifdef::ai-hub-available[] +== Choose a gateway mode + +Before creating a gateway, decide which mode fits your needs. 
+ +*AI Hub Mode* is ideal when you: + +* Want to minimize configuration complexity +* Need to quickly enable LLM access for multiple teams +* Want pre-configured intelligent routing with automatic provider failover +* Are satisfied with managed routing rules and backend pools (17 pre-configured rules) +* Need only basic customization (provider credentials, 6 preference toggles) +* Use OpenAI and/or Anthropic providers + +*Custom Mode* is ideal when you: + +* Need custom routing rules based on specific business logic +* Require full control over backend pool configuration +* Want to implement custom failover strategies +* Need to integrate with custom infrastructure (Azure OpenAI, AWS Bedrock, other providers) +* Have specialized requirements not covered by AI Hub's pre-configured rules + +[TIP] +==== +You can start with AI Hub mode and later eject to Custom mode if you need more control. Ejection is a one-way transition. See xref:ai-gateway/admin/eject-to-custom-mode.adoc[]. +==== + +For detailed comparison, see xref:ai-gateway/gateway-modes.adoc[]. + +*Next sections:* + +* *AI Hub Mode*: See xref:ai-gateway/admin/configure-ai-hub.adoc[] for setup instructions +* *Custom Mode*: Continue with "Create a gateway" below for manual configuration +endif::[] + +== Create a gateway + +A gateway is a logical configuration boundary (policies + routing + observability) on top of a single deployment. It's a "virtual gateway" that you can create per team, environment (staging/production), product, or customer. + +. Navigate to *Agentic AI* → *Gateways*. +. Click *Create Gateway*. +. Configure the gateway: ++ +-- +* *Name*: Choose a descriptive name (for example, `production-gateway`, `team-ml-gateway`, `staging-gateway`) +* *Workspace*: Select the workspace this gateway belongs to ++ +TIP: A workspace is conceptually similar to a resource group in Redpanda streaming. ++ +* *Description* (optional): Add context about this gateway's purpose +* *Tags* (optional): Add metadata for organization and filtering +-- + +. Click *Create*. + +. After creation, note the following information: ++ +-- +* *Gateway endpoint*: URL for API requests (for example, `https://example/gateways/d633lffcc16s73ct95mg/v1`) ++ +The gateway ID is embedded in the URL. +-- + +You'll share the gateway endpoint with users who need to access this gateway. + +== Configure LLM routing + +On the gateway details page, select the *LLM* tab to configure rate limits, spend limits, routing, and provider pools with fallback options. + +The LLM routing pipeline visually represents the request lifecycle: + +. *Rate Limit*: Global rate limit (for example, 100 requests/second) +. *Spend Limit / Monthly Budget*: Monthly budget with blocking enforcement (for example, $15K/month) +. *Routing*: Primary provider pool with optional fallback provider pools + +=== Configure rate limits + +Rate limits control how many requests can be processed within a time window. + +. In the *LLM* tab, locate the *Rate Limit* section. +. Click *Add rate limit*. +. Configure the limit: ++ +-- +* *Requests per second*: Maximum requests per second (for example, `100`) +* *Burst allowance* (optional): Allow temporary bursts above the limit +-- + +. Click *Save*. + +Rate limits apply to all requests through this gateway, regardless of model or provider. + +=== Configure spend limits and budgets + +Spend limits prevent runaway costs by blocking requests after a monthly budget is exceeded. + +. In the *LLM* tab, locate the *Spend Limit* section. +. Click *Configure budget*. +. 
Set the budget: ++ +-- +* *Monthly budget*: Maximum spend per month (for example, `$15000`) +* *Enforcement*: Choose *Block* to reject requests after the budget is exceeded, or *Alert* to notify but allow requests +* *Notification threshold* (optional): Alert when X% of budget is consumed (for example, `80%`) +-- + +. Click *Save*. + +Budget tracking uses estimated costs based on token usage and public provider pricing. + +=== Configure routing and provider pools + +Provider pools define which LLM providers handle requests, with support for primary and fallback configurations. + +. In the *LLM* tab, locate the *Routing* section. +. Click *Add provider pool*. +. Configure the primary pool: ++ +-- +* *Name*: For example, `primary-anthropic` +* *Providers*: Select one or more providers (for example, Anthropic) +* *Models*: Choose which models to include (for example, `anthropic/claude-sonnet-4.5`) +* *Load balancing*: If multiple providers are selected, choose distribution strategy (round-robin, weighted, etc.) +-- + +. (Optional) Click *Add fallback pool* to configure automatic failover: ++ +-- +* *Name*: For example, `fallback-openai` +* *Providers*: Select fallback provider (for example, OpenAI) +* *Models*: Choose fallback models (for example, `openai/gpt-5.2`) +* *Trigger conditions*: When to activate fallback: + ** Rate limit exceeded (429 from primary) + ** Timeout (primary provider slow) + ** Server errors (5xx from primary) +-- + +. Configure routing rules using CEL expressions (optional): ++ +For simple routing, select *Route all requests to primary pool*. ++ +For advanced routing based on request properties, use CEL expressions. See xref:ai-gateway/cel-routing-cookbook.adoc[] for examples. ++ +Example CEL expression for tier-based routing: ++ +[source,cel] +---- +request.headers["x-user-tier"] == "premium" + ? "anthropic/claude-opus-4.6" + : "anthropic/claude-sonnet-4.5" +---- + +. Click *Save routing configuration*. + +TIP: Provider pool (UI) = Backend pool (API) + +=== Load balancing and multi-provider distribution + +If a provider pool contains multiple providers, you can distribute traffic to balance load or optimize for cost/performance: + +* Round-robin: Distribute evenly across all providers +* Weighted: Assign weights (for example, 80% to Anthropic, 20% to OpenAI) +* Least latency: Route to fastest provider based on recent performance +* Cost-optimized: Route to cheapest provider for each model + +== Configure MCP tools (optional) + +If your users will build glossterm:AI agent[,AI agents] that need access to glossterm:MCP tool[,tools] via glossterm:MCP[,Model Context Protocol (MCP)], configure MCP tool aggregation. + +On the gateway details page, select the *MCP* tab to configure tool discovery and execution. The MCP proxy aggregates multiple glossterm:MCP server[,MCP servers], allowing agents to find and call tools through a single endpoint. + +=== Configure MCP rate limits + +Rate limits for MCP work the same way as LLM rate limits. + +. In the *MCP* tab, locate the *Rate Limit* section. +. Click *Add rate limit*. +. Configure the maximum requests per second and optional burst allowance. +. Click *Save*. + +=== Add MCP servers + +. In the *MCP* tab, click *Create MCP Server*. +. Configure the server: ++ +-- +* *Server ID*: Unique identifier for this server +* *Display Name*: Human-readable name (for example, `database-server`, `slack-server`) +* *Server Address*: Endpoint URL for the MCP server (for example, `https://mcp-database.example.com`) +-- + +. 
Configure server settings: ++ +-- +* *Timeout (seconds)*: Maximum time to wait for a response from this server +* *Enabled*: Whether this server is active and accepting requests +* *Defer Loading Override*: Controls whether tools from this server are loaded upfront or on demand ++ +[cols="1,2"] +|=== +|Option |Description + +|Inherit from gateway +|Use the gateway-level deferred loading setting (default) + +|Enabled +|Always defer loading from this server. Agents receive only a search tool initially and query for specific tools when needed. This can reduce token usage by 80-90%. + +|Disabled +|Always load all tools from this server upfront. +|=== + +* *Forward OIDC Token Override*: Controls whether the client's OIDC token is forwarded to this MCP server ++ +[cols="1,2"] +|=== +|Option |Description + +|Inherit from gateway +|Use the gateway-level OIDC forwarding setting (default) + +|Enabled +|Always forward the OIDC token to this server + +|Disabled +|Never forward the OIDC token to this server +|=== +-- + +. Click *Save* to add the server to this gateway. + +Repeat for each MCP server you want to aggregate. + +See xref:ai-gateway/mcp-aggregation-guide.adoc[] for detailed information about MCP aggregation. + +=== Configure the MCP orchestrator + +The MCP orchestrator is a built-in MCP server that enables programmatic tool calling. Agents can generate JavaScript code to call multiple tools in a single orchestrated step, reducing the number of round trips. + +Example: A workflow requiring 47 file reads can be reduced from 49 round trips to just 1 round trip using the orchestrator. + +The orchestrator is pre-configured when you initialize the MCP gateway. Its server configuration (Server ID, Display Name, Transport, Command, and Timeout) is system-managed and cannot be modified. + +You can configure blocked tool patterns to prevent specific tools from being called through the orchestrator: + +. In the *MCP* tab, select the orchestrator server to edit it. +. Under *Blocked Tools*, click *Add Pattern* to add glob patterns for tools that should be blocked from execution. ++ +Example patterns: ++ +-- +* `server_id:*` - Block all tools from a specific server +* `*:dangerous_tool` - Block a specific tool across all servers +* `specific:tool` - Block a single tool on a specific server +-- ++ +NOTE: The orchestrator's own tools are blocked by default to prevent recursive execution. + +. Click *Save*. + +== Verify your setup + +After completing the setup, verify that the gateway is working correctly: + +=== Test the gateway endpoint + +[source,bash] +---- +curl ${GATEWAY_ENDPOINT}/models \ + -H "Authorization: Bearer ${REDPANDA_CLOUD_TOKEN}" +---- + +Expected result: List of enabled models. + +=== Send a test request + +[source,bash] +---- +curl ${GATEWAY_ENDPOINT}/chat/completions \ + -H "Authorization: Bearer ${REDPANDA_CLOUD_TOKEN}" \ + -H "Content-Type: application/json" \ + -d '{ + "model": "openai/gpt-5.2-mini", + "messages": [{"role": "user", "content": "Hello, AI Gateway!"}], + "max_tokens": 50 + }' +---- + +Expected result: Successful completion response. + +=== Check the gateway overview + +. Navigate to *Gateways* → Select your gateway → *Overview*. +. Check the aggregate metrics to verify your test request was processed: ++ +-- +* Total Requests: Should have incremented +* Total Tokens: Should show tokens consumed +* Total Cost: Should show estimated cost +-- + +== Share access with users + +Now that your gateway is configured, share access with users (builders): + +. 
Provide the *Gateway Endpoint* (for example, `https://example/gateways/gw_abc123/v1`) +. Share API credentials (Redpanda Cloud tokens with appropriate permissions) +. (Optional) Document available models and any routing policies +. (Optional) Share rate limits and budget information + +Users can then discover and connect to the gateway using the information provided. See xref:ai-gateway/builders/discover-gateways.adoc[] for user documentation. + +== Next steps + +*Configure and optimize:* + +// * xref:ai-gateway/admin/manage-gateways.adoc[Manage Gateways] - List, edit, and delete gateways +* xref:ai-gateway/cel-routing-cookbook.adoc[CEL Routing Cookbook] - Advanced routing patterns +// * xref:ai-gateway/admin/networking-configuration.adoc[Networking Configuration] - Configure private endpoints and connectivity + +//*Monitor and observe:* +// + +ifdef::integrations-available[] +*Integrate tools:* + +* xref:ai-gateway/integrations/index.adoc[Integrations] - Admin guides for Claude Code, Cursor, and other tools +endif::[] diff --git a/modules/ai-agents/pages/ai-gateway/builders/connect-your-agent.adoc b/modules/ai-agents/pages/ai-gateway/builders/connect-your-agent.adoc new file mode 100644 index 000000000..b485d4f32 --- /dev/null +++ b/modules/ai-agents/pages/ai-gateway/builders/connect-your-agent.adoc @@ -0,0 +1,546 @@ += Connect Your Agent +:description: Integrate your AI agent or application with Redpanda Agentic Data Plan for unified LLM access. +:page-topic-type: how-to +:personas: app_developer +:learning-objective-1: Configure your application to use AI Gateway with OpenAI-compatible SDKs +:learning-objective-2: Make LLM requests through the gateway and handle responses appropriately +:learning-objective-3: Validate your integration end-to-end + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] + +This guide shows you how to connect your glossterm:AI agent[] or application to Redpanda Agentic Data Plan. This is also called "Bring Your Own Agent" (BYOA). You'll configure your client SDK, make your first request, and validate the integration. + +After completing this guide, you will be able to: + +* [ ] Configure your application to use AI Gateway with OpenAI-compatible SDKs +* [ ] Make LLM requests through the gateway and handle responses appropriately +* [ ] Validate your integration end-to-end + +== Prerequisites + +* You have discovered an available gateway and noted its Gateway ID and endpoint. ++ +If not, see xref:ai-gateway/builders/discover-gateways.adoc[]. + +* You have a Redpanda Cloud API token with access to the gateway. +* You have a development environment with your chosen programming language. + +== Integration overview + +Connecting to AI Gateway requires two configuration changes: + +. *Change the base URL*: Point to the gateway endpoint instead of the provider's API. The gateway ID is embedded in the endpoint URL. +. *Add authentication*: Use your Redpanda Cloud token instead of provider API keys + +== Quickstart + +=== Environment variables + +Set these environment variables for consistent configuration: + +[source,bash] +---- +export REDPANDA_GATEWAY_URL="" +export REDPANDA_API_KEY="your-redpanda-cloud-token" +---- + +Replace with your actual gateway endpoint and API token. 
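For example, with a hypothetical gateway ID of `gw_abc123` (the same placeholder format shown on the gateway details page), the filled-in values might look like the following. Your actual endpoint and token will differ.

[source,bash]
----
# Example values only: copy the real endpoint from the gateway details page
export REDPANDA_GATEWAY_URL="https://example/gateways/gw_abc123/v1"
export REDPANDA_API_KEY="your-redpanda-cloud-token"
----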
+ +[tabs] +==== +Python (OpenAI SDK):: ++ +[source,python] +---- +import os +from openai import OpenAI + +# Configure client to use AI Gateway +client = OpenAI( + base_url=os.getenv("REDPANDA_GATEWAY_URL"), + api_key=os.getenv("REDPANDA_API_KEY"), +) + +# Make a request (same as before) +response = client.chat.completions.create( + model="openai/gpt-5.2-mini", # Note: vendor/model_id format + messages=[{"role": "user", "content": "Hello, AI Gateway!"}], + max_tokens=100 +) + +print(response.choices[0].message.content) +---- + +Python (Anthropic SDK):: ++ +The Anthropic SDK can also route through AI Gateway using the OpenAI-compatible endpoint: ++ +[source,python] +---- +import os +from anthropic import Anthropic + +client = Anthropic( + base_url=os.getenv("REDPANDA_GATEWAY_URL"), + api_key=os.getenv("REDPANDA_API_KEY"), +) + +# Make a request +message = client.messages.create( + model="anthropic/claude-sonnet-4.5", + max_tokens=100, + messages=[{"role": "user", "content": "Hello, AI Gateway!"}] +) + +print(message.content[0].text) +---- + +Node.js (OpenAI SDK):: ++ +[source,javascript] +---- +import OpenAI from 'openai'; + +const openai = new OpenAI({ + baseURL: process.env.REDPANDA_GATEWAY_URL, + apiKey: process.env.REDPANDA_API_KEY, +}); + +// Make a request +const response = await openai.chat.completions.create({ + model: 'openai/gpt-5.2-mini', + messages: [{ role: 'user', content: 'Hello, AI Gateway!' }], + max_tokens: 100 +}); + +console.log(response.choices[0].message.content); +---- + +cURL:: ++ +For testing or shell scripts: ++ +[source,bash] +---- +curl ${REDPANDA_GATEWAY_URL}/chat/completions \ + -H "Authorization: Bearer ${REDPANDA_API_KEY}" \ + -H "Content-Type: application/json" \ + -d '{ + "model": "openai/gpt-5.2-mini", + "messages": [{"role": "user", "content": "Hello, AI Gateway!"}], + "max_tokens": 100 + }' +---- +==== + +== Model naming convention + +When making requests through AI Gateway, use the `vendor/model_id` format for the model parameter: + +* `openai/gpt-5.2` +* `openai/gpt-5.2-mini` +* `anthropic/claude-sonnet-4.5` +* `anthropic/claude-opus-4.6` + +This format tells AI Gateway which provider to route the request to. For example: + +[source,python] +---- +# Route to OpenAI +response = client.chat.completions.create( + model="openai/gpt-5.2", + messages=[...] +) + +# Route to Anthropic (same client, different model) +response = client.chat.completions.create( + model="anthropic/claude-sonnet-4.5", + messages=[...] +) +---- + +// To see which models are available in your gateway, see xref:ai-gateway/builders/available-models.adoc[]. + +== Handle responses + +Responses from AI Gateway follow the OpenAI API format: + +[source,python] +---- +response = client.chat.completions.create( + model="openai/gpt-5.2-mini", + messages=[{"role": "user", "content": "Explain AI Gateway"}], + max_tokens=200 +) + +# Access the response +message_content = response.choices[0].message.content +finish_reason = response.choices[0].finish_reason # 'stop', 'length', etc. 
+ +# Token usage +prompt_tokens = response.usage.prompt_tokens +completion_tokens = response.usage.completion_tokens +total_tokens = response.usage.total_tokens + +print(f"Response: {message_content}") +print(f"Tokens: {prompt_tokens} prompt + {completion_tokens} completion = {total_tokens} total") +---- + +== Handle errors + +AI Gateway returns standard HTTP status codes: + +[source,python] +---- +from openai import OpenAI, OpenAIError + +client = OpenAI( + base_url=os.getenv("REDPANDA_GATEWAY_URL"), + api_key=os.getenv("REDPANDA_API_KEY"), +) + +try: + response = client.chat.completions.create( + model="openai/gpt-5.2-mini", + messages=[{"role": "user", "content": "Hello"}] + ) + print(response.choices[0].message.content) + +except OpenAIError as e: + if e.status_code == 400: + print("Bad request - check model name and parameters") + elif e.status_code == 401: + print("Authentication failed - check API token") + elif e.status_code == 404: + print("Model not found - check available models") + elif e.status_code == 429: + print("Rate limit exceeded - slow down requests") + elif e.status_code >= 500: + print("Gateway or provider error - retry with exponential backoff") + else: + print(f"Error: {e}") +---- + +Common error codes: + +* *400*: Bad request (invalid parameters, malformed JSON) +* *401*: Authentication failed (invalid or missing API token) +* *403*: Forbidden (no access to this gateway) +* *404*: Model not found (model not enabled in gateway) +* *429*: Rate limit exceeded (too many requests) +* *500/502/503*: Server error (gateway or provider issue) + +== Streaming responses + +AI Gateway supports streaming for real-time token generation: + +[source,python] +---- +response = client.chat.completions.create( + model="openai/gpt-5.2-mini", + messages=[{"role": "user", "content": "Write a short poem"}], + stream=True # Enable streaming +) + +# Process chunks as they arrive +for chunk in response: + if chunk.choices[0].delta.content: + print(chunk.choices[0].delta.content, end='', flush=True) + +print() # New line after streaming completes +---- + +== Switch between providers + +One of AI Gateway's key benefits is easy provider switching without code changes: + +[source,python] +---- +# Try OpenAI +response = client.chat.completions.create( + model="openai/gpt-5.2", + messages=[{"role": "user", "content": "Explain quantum computing"}] +) + +# Try Anthropic (same code, different model) +response = client.chat.completions.create( + model="anthropic/claude-sonnet-4.5", + messages=[{"role": "user", "content": "Explain quantum computing"}] +) +---- + +Compare responses, latency, and cost to determine the best model for your use case. 
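The following sketch shows one way to run such a comparison: it sends the same prompt to two of the models listed above through the same gateway client and reports latency and token usage. The model names and prompt are examples only; actual cost depends on your provider pricing and is tracked in the gateway's usage metrics.

[source,python]
----
import os
import time
from openai import OpenAI

client = OpenAI(
    base_url=os.getenv("REDPANDA_GATEWAY_URL"),
    api_key=os.getenv("REDPANDA_API_KEY"),
)

def compare_models(prompt, models):
    """Send the same prompt to each model and report latency and token usage."""
    for model in models:
        start = time.monotonic()
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            max_tokens=200,
        )
        elapsed = time.monotonic() - start
        usage = response.usage
        print(f"{model}: {elapsed:.2f}s, "
              f"{usage.prompt_tokens} prompt + {usage.completion_tokens} completion tokens")

compare_models(
    "Explain quantum computing",
    ["openai/gpt-5.2", "anthropic/claude-sonnet-4.5"],
)
----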
+ +== Validate your integration + +=== Test connectivity + +[source,python] +---- +import os +from openai import OpenAI + +def test_gateway_connection(): + """Test basic connectivity to AI Gateway""" + client = OpenAI( + base_url=os.getenv("REDPANDA_GATEWAY_URL"), + api_key=os.getenv("REDPANDA_API_KEY"), + ) + + try: + # Simple test request + response = client.chat.completions.create( + model="openai/gpt-5.2-mini", + messages=[{"role": "user", "content": "test"}], + max_tokens=10 + ) + print("✓ Gateway connection successful") + return True + except Exception as e: + print(f"✗ Gateway connection failed: {e}") + return False + +if __name__ == "__main__": + test_gateway_connection() +---- + +=== Test multiple models + +[source,python] +---- +def test_models(): + """Test multiple models through the gateway""" + models = [ + "openai/gpt-5.2-mini", + "anthropic/claude-sonnet-4.5" + ] + + for model in models: + try: + response = client.chat.completions.create( + model=model, + messages=[{"role": "user", "content": "Say hello"}], + max_tokens=10 + ) + print(f"✓ {model}: {response.choices[0].message.content}") + except Exception as e: + print(f"✗ {model}: {e}") +---- + +// === Check request logs +// +// After making requests, verify they appear in observability: +// +// . Navigate to *AI Gateway* → *Gateways* → Select your gateway → *Logs* +// . Filter by your request timestamp +// . Verify your requests are logged with correct model, tokens, and cost + +// See xref:ai-gateway/builders/monitor-your-usage.adoc[] for details. + +== Integrate with AI development tools + +[tabs] +==== +Claude Code:: ++ +Configure Claude Code to use AI Gateway: ++ +[source,bash] +---- +claude mcp add --transport http redpanda-aigateway ${REDPANDA_GATEWAY_URL}/mcp \ + --header "Authorization: Bearer ${REDPANDA_API_KEY}" +---- ++ +Or edit `~/.claude/config.json`: ++ +[source,json] +---- +{ + "mcpServers": { + "redpanda-ai-gateway": { + "transport": "http", + "url": "/mcp", + "headers": { + "Authorization": "Bearer your-api-key" + } + } + } +} +---- ++ +ifdef::integrations-available[] +See xref:ai-gateway/integrations/claude-code-user.adoc[] for complete setup. +endif::[] + +VS Code Continue Extension:: ++ +Edit `~/.continue/config.json`: ++ +[source,json] +---- +{ + "models": [ + { + "title": "AI Gateway - GPT-5.2", + "provider": "openai", + "model": "openai/gpt-5.2", + "apiBase": "", + "apiKey": "your-redpanda-api-key" + } + ] +} +---- ++ +ifdef::integrations-available[] +See xref:ai-gateway/integrations/continue-user.adoc[] for complete setup. +endif::[] + +Cursor IDE:: ++ +. Open Cursor Settings (*Cursor* → *Settings* or `Cmd+,`) +. Navigate to *AI* settings +. Add custom OpenAI-compatible provider: ++ +[source,json] +---- +{ + "cursor.ai.providers.openai.apiBase": "" +} +---- ++ +ifdef::integrations-available[] +See xref:ai-gateway/integrations/cursor-user.adoc[] for complete setup. 
+endif::[] +==== + +== Best practices + +=== Use environment variables + +Store configuration in environment variables, not hardcoded in code: + +[source,python] +---- +# Good +base_url = os.getenv("REDPANDA_GATEWAY_URL") + +# Bad +base_url = "https://gw.ai.panda.com" # Don't hardcode +---- + +=== Implement retry logic + +Implement exponential backoff for transient errors: + +[source,python] +---- +import time +from openai import OpenAI, OpenAIError + +def make_request_with_retry(client, max_retries=3): + for attempt in range(max_retries): + try: + return client.chat.completions.create( + model="openai/gpt-5.2-mini", + messages=[{"role": "user", "content": "Hello"}] + ) + except OpenAIError as e: + if e.status_code >= 500 and attempt < max_retries - 1: + wait_time = 2 ** attempt # Exponential backoff + print(f"Retrying in {wait_time}s...") + time.sleep(wait_time) + else: + raise +---- + +=== Monitor your usage + +Regularly check your usage to avoid unexpected costs: + +[source,python] +---- +# Track tokens in your application +total_tokens = 0 +request_count = 0 + +for request in requests: + response = client.chat.completions.create(...) + total_tokens += response.usage.total_tokens + request_count += 1 + +print(f"Total tokens: {total_tokens} across {request_count} requests") +---- + +// See xref:ai-gateway/builders/monitor-your-usage.adoc[] for detailed monitoring. + +=== Handle rate limits gracefully + +Respect rate limits and implement backoff: + +[source,python] +---- +try: + response = client.chat.completions.create(...) +except OpenAIError as e: + if e.status_code == 429: + # Rate limited - wait and retry + retry_after = int(e.response.headers.get('Retry-After', 60)) + print(f"Rate limited. Waiting {retry_after}s...") + time.sleep(retry_after) + # Retry request +---- + +== Troubleshooting + +=== "Authentication failed" + +Problem: 401 Unauthorized + +Solutions: + +* Verify your API token is correct and not expired +* Check that the token has access to the specified gateway +* Ensure the `Authorization` header is formatted correctly: `Bearer ` + +=== "Model not found" + +Problem: 404 Model not found + +Solutions: + +* Verify the model name uses `vendor/model_id` format +// * Check available models: See xref:ai-gateway/builders/available-models.adoc[] +* Confirm the model is enabled in your gateway (contact administrator) + +=== "Rate limit exceeded" + +Problem: 429 Too Many Requests + +Solutions: + +* Reduce request rate +* Implement exponential backoff +* Contact administrator to review rate limits +* Consider using a different gateway if available + +=== "Connection timeout" + +Problem: Request times out + +Solutions: + +* Check network connectivity to the gateway endpoint +* Verify the gateway endpoint URL is correct +* Check if the gateway is operational (contact administrator) +* Increase client timeout if processing complex requests + +//== Next steps + +//Now that your agent is connected: + +// * xref:ai-gateway/builders/available-models.adoc[Available Models] - Learn about model selection and routing +// * xref:ai-gateway/builders/use-mcp-tools.adoc[Use MCP Tools] - Access tools from MCP servers (if enabled) +// * xref:ai-gateway/builders/monitor-your-usage.adoc[Monitor Your Usage] - Track requests and costs +ifdef::integrations-available[] +* xref:ai-gateway/integrations/index.adoc[Integrations] - Configure specific tools and IDEs +endif::[] diff --git a/modules/ai-agents/pages/ai-gateway/builders/discover-gateways.adoc 
b/modules/ai-agents/pages/ai-gateway/builders/discover-gateways.adoc new file mode 100644 index 000000000..17c9058a4 --- /dev/null +++ b/modules/ai-agents/pages/ai-gateway/builders/discover-gateways.adoc @@ -0,0 +1,310 @@ += Discover Available Gateways +:description: Find which AI Gateways you can access and their configurations. +:page-topic-type: how-to +:personas: app_developer +:learning-objective-1: List all AI Gateways you have access to and retrieve their endpoints and IDs +:learning-objective-2: View which models and MCP tools are available through each gateway +:learning-objective-3: Test gateway connectivity before integration + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] + +As a builder, you need to know which gateways are available to you before integrating your agent or application. This page shows you how to discover accessible gateways, understand their configurations, and verify connectivity. + +After reading this page, you will be able to: + +* [ ] List all AI Gateways you have access to and retrieve their endpoints and IDs +* [ ] View which models and MCP tools are available through each gateway +* [ ] Test gateway connectivity before integration + +== Before you begin + +* You have a Redpanda Cloud account with access to at least one AI Gateway +* You have access to the Redpanda Cloud Console or API credentials + +== List your accessible gateways + +[tabs] +==== +Using the Console:: ++ +. Navigate to *Gateways* in the Redpanda Cloud Console. +. Review the list of gateways you can access. For each gateway, you'll see the gateway name, ID, endpoint URL, status, available models, and provider performance. ++ +Click the Configuration, API, MCP Tools, and Changelog tabs for additional information. + +Using the API:: ++ +You can also list gateways programmatically: ++ +[source,bash] +---- +curl https://api.redpanda.com/v1/gateways \ + -H "Authorization: Bearer ${REDPANDA_CLOUD_TOKEN}" +---- ++ +Response: ++ +[source,json] +---- +{ + "gateways": [ + { + "id": "gw_abc123", + "name": "production-gateway", + "mode": "ai_hub", + "endpoint": "https://gw.ai.panda.com", + "status": "active", + "workspace_id": "ws_xyz789", + "created_at": "2025-01-15T10:30:00Z" + }, + { + "id": "gw_def456", + "name": "staging-gateway", + "mode": "custom", + "endpoint": "https://gw-staging.ai.panda.com", + "status": "active", + "workspace_id": "ws_xyz789", + "created_at": "2025-01-10T08:15:00Z" + } + ] +} +---- +==== + +== Understand gateway information + +Each gateway provides specific information you'll need for integration: + +=== Gateway endpoint + +The gateway endpoint is the URL where you send all API requests. It replaces direct provider URLs (like `api.openai.com` or `api.anthropic.com`). The gateway ID is embedded directly in the endpoint URL. + +Example: +[source,bash] +---- +https://example/gateways/gw_abc123/v1 +---- + +Your application configures this as the `base_url` in your SDK client. + +=== Available models + +Each gateway exposes specific models based on administrator configuration. 
Models use the `vendor/model_id` format: + +* `openai/gpt-5.2` +* `anthropic/claude-sonnet-4.5` +* `openai/gpt-5.2-mini` + +To see which models are available through a specific gateway: + +[source,bash] +---- +curl ${GATEWAY_ENDPOINT}/models \ + -H "Authorization: Bearer ${REDPANDA_CLOUD_TOKEN}" +---- + +Response: + +[source,json] +---- +{ + "object": "list", + "data": [ + { + "id": "openai/gpt-5.2", + "object": "model", + "owned_by": "openai" + }, + { + "id": "anthropic/claude-sonnet-4.5", + "object": "model", + "owned_by": "anthropic" + }, + { + "id": "openai/gpt-5.2-mini", + "object": "model", + "owned_by": "openai" + } + ] +} +---- + +=== Rate limits and quotas + +Each gateway may have configured rate limits and monthly budgets. Check the console or contact your administrator to understand: + +* Requests per minute/hour/day +* Monthly spend limits +* Token usage quotas + +These limits help control costs and ensure fair resource allocation across teams. + +=== MCP Tools + +If glossterm:MCP[,Model Context Protocol (MCP)] aggregation is enabled for your gateway, you can access glossterm:MCP tool[,tools] from multiple glossterm:MCP server[,MCP servers] through a single endpoint. + +To discover available MCP tools: + +[source,bash] +---- +curl ${GATEWAY_ENDPOINT}/mcp/tools \ + -H "Authorization: Bearer ${REDPANDA_CLOUD_TOKEN}" \ + -H "rp-aigw-mcp-deferred: true" +---- + +With deferred loading enabled, you'll receive search and orchestrator tools initially. You can then query for specific tools as needed. + +// See xref:ai-gateway/builders/use-mcp-tools.adoc[] for more details. + +ifdef::ai-hub-available[] +== Identify gateway mode + +Gateways can operate in two modes: AI Hub mode or Custom mode. Understanding which mode your gateway uses helps you know what to expect. + +include::ai-agents:partial$ai-hub-mode-indicator.adoc[] + +=== What it means for builders + +*AI Hub Mode:* + +* Routing is pre-configured and intelligent +* Models are automatically routed based on system-managed rules +* You cannot see or modify routing rules (they're managed by Redpanda) +* Limited customization via administrator-configured preference toggles +* See xref:ai-gateway/builders/use-ai-hub-gateway.adoc[] for AI Hub-specific guidance + +*Custom Mode:* + +* Routing is configured by your administrator +* You can view configured routing rules in the console +* Administrator has full control over backend pools and policies +* Standard discovery and usage patterns apply (rest of this page) + +[TIP] +==== +If you need specific routing behavior or custom configuration that AI Hub doesn't support, ask your administrator about ejecting to Custom mode or creating a Custom mode gateway. +==== +endif::[] + +== Check gateway availability + +Before integrating your application, verify that you can successfully connect to the gateway: + +=== Test connectivity + +[source,bash] +---- +curl ${GATEWAY_ENDPOINT}/models \ + -H "Authorization: Bearer ${REDPANDA_CLOUD_TOKEN}" \ + -v +---- + +Expected result: HTTP 200 response with a list of available models. + +=== Test a simple request + +Send a minimal chat completion request to verify end-to-end functionality: + +[source,bash] +---- +curl ${GATEWAY_ENDPOINT}/chat/completions \ + -H "Authorization: Bearer ${REDPANDA_CLOUD_TOKEN}" \ + -H "Content-Type: application/json" \ + -d '{ + "model": "openai/gpt-5.2-mini", + "messages": [{"role": "user", "content": "Hello"}], + "max_tokens": 10 + }' +---- + +Expected result: HTTP 200 response with a completion. 
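To run this check from a script (for example, in CI), one approach is to compare the HTTP status code directly. This is a sketch using the same placeholder endpoint and token variables as the commands above:

[source,bash]
----
# Returns only the HTTP status code for a minimal chat completion request
STATUS=$(curl -s -o /dev/null -w "%{http_code}" \
  ${GATEWAY_ENDPOINT}/chat/completions \
  -H "Authorization: Bearer ${REDPANDA_CLOUD_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"model": "openai/gpt-5.2-mini", "messages": [{"role": "user", "content": "Hello"}], "max_tokens": 10}')

if [ "${STATUS}" = "200" ]; then
  echo "Gateway reachable and serving completions"
else
  echo "Gateway check failed with HTTP ${STATUS}" >&2
fi
----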
+ +=== Troubleshoot connectivity issues + +If you cannot connect to a gateway: + +. *Verify authentication*: Ensure your API token is valid and has not expired +. *Check gateway endpoint*: Confirm the endpoint URL includes the correct gateway ID +. *Verify endpoint URL*: Check for typos in the gateway endpoint +. *Check permissions*: Confirm with your administrator that you have access to this gateway +. *Review network connectivity*: Ensure your network allows outbound HTTPS connections + +== Choose the right gateway + +If you have access to multiple gateways, consider which one to use based on your needs: + +=== By environment + +Organizations often create separate gateways for different environments: + +* Production gateway: Higher rate limits, access to all models, monitoring enabled +* Staging gateway: Lower rate limits, restricted models, aggressive cost controls +* Development gateway: Minimal limits, all models for experimentation + +Choose the gateway that matches your deployment environment. + +=== By team or project + +Gateways may be organized by team or project for cost tracking and isolation: + +* team-ml-gateway: For machine learning team +* team-product-gateway: For product team +* customer-facing-gateway: For production customer workloads + +Use the gateway designated for your team to ensure proper cost attribution. + +=== By capability + +Different gateways may have different features enabled: + +* Gateway with MCP tools: Use if your agent needs to call tools +* Gateway without MCP: Use for simple LLM completions +* Gateway with specific models: Use if you need access to particular models + +== Example: Complete discovery workflow + +Here's a complete workflow to discover and validate gateway access: + +[source,bash] +---- +#!/bin/bash + +# Set your API token +export REDPANDA_CLOUD_TOKEN="your-token-here" + +# Step 1: List all accessible gateways +echo "=== Discovering gateways ===" +curl -s https://api.redpanda.com/v1/gateways \ + -H "Authorization: Bearer ${REDPANDA_CLOUD_TOKEN}" \ + | jq '.gateways[] | {name: .name, id: .id, endpoint: .endpoint}' + +# Step 2: Select a gateway (example) +export GATEWAY_ENDPOINT="https://example/gateways/gw_abc123/v1" + +# Step 3: List available models +echo -e "\n=== Available models ===" +curl -s ${GATEWAY_ENDPOINT}/models \ + -H "Authorization: Bearer ${REDPANDA_CLOUD_TOKEN}" \ + | jq '.data[] | .id' + +# Step 4: Test with a simple request +echo -e "\n=== Testing request ===" +curl -s ${GATEWAY_ENDPOINT}/chat/completions \ + -H "Authorization: Bearer ${REDPANDA_CLOUD_TOKEN}" \ + -H "Content-Type: application/json" \ + -d '{ + "model": "openai/gpt-5.2-mini", + "messages": [{"role": "user", "content": "Say hello"}], + "max_tokens": 10 + }' \ + | jq '.choices[0].message.content' + +echo -e "\n=== Gateway validated successfully ===" +---- + +== Next steps + +* xref:ai-gateway/builders/connect-your-agent.adoc[Connect Your Agent] - Integrate your application +// * xref:ai-gateway/builders/available-models.adoc[Available Models] - Learn about model selection and routing +// * xref:ai-gateway/builders/use-mcp-tools.adoc[Use MCP Tools] - Access tools from MCP servers +// * xref:ai-gateway/builders/monitor-your-usage.adoc[Monitor Your Usage] - Track requests and costs diff --git a/modules/ai-agents/pages/ai-gateway/cel-routing-cookbook.adoc b/modules/ai-agents/pages/ai-gateway/cel-routing-cookbook.adoc new file mode 100644 index 000000000..a23e4ab14 --- /dev/null +++ b/modules/ai-agents/pages/ai-gateway/cel-routing-cookbook.adoc @@ -0,0 +1,953 
@@ += CEL Routing Cookbook +:description: CEL routing cookbook for Redpanda AI Gateway with common patterns, examples, and best practices. +:page-topic-type: cookbook +:personas: app_developer, platform_admin +:learning-objective-1: Write CEL expressions to route requests based on user tier or custom headers +:learning-objective-2: Test CEL routing logic using the UI editor or test requests +:learning-objective-3: Troubleshoot common CEL errors using safe patterns + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] + +Redpanda AI Gateway uses CEL (Common Expression Language) for dynamic request routing. CEL expressions evaluate request properties (headers, body, context) and determine which model or provider should handle each request. + +CEL enables: + +* User-based routing (free vs premium tiers) +* Content-based routing (by prompt topic, length, complexity) +* Environment-based routing (staging vs production models) +* Cost controls (reject expensive requests in test environments) +* A/B testing (route percentage of traffic to new models) +* Geographic routing (by region header) +* Custom business logic (any condition you can express) + +== CEL basics + +=== What is CEL? + +CEL (Common Expression Language) is a non-Turing-complete expression language designed for fast, safe evaluation. It's used by Google (Firebase, Cloud IAM), Kubernetes, Envoy, and other systems. + +Key properties: + +* Safe: Cannot loop infinitely or access system resources +* Fast: Evaluates in microseconds +* Readable: Similar to Python/JavaScript expressions +* Type-safe: Errors caught at configuration time, not runtime + +=== CEL syntax primer + +Comparison operators: + +[source,cel] +---- +== // equal +!= // Not equal +< // Less than +> // Greater than +<= // Less than or equal +>= // Greater than or equal +---- + + +Logical operators: + +[source,cel] +---- +&& // AND +|| // OR +! // NOT +---- + + +Ternary operator (most common pattern): + +[source,cel] +---- +condition ? value_if_true : value_if_false +---- + + +Functions: + +[source,cel] +---- +.size() // Length of string or array +.contains("text") // String contains substring +.startsWith("x") // String starts with +.endsWith("x") // String ends with +.matches("regex") // Regex match +has(field) // Check if field exists +---- + + +Examples: + +[source,cel] +---- +// Simple comparison +request.headers["tier"] == "premium" + +// Ternary (if-then-else) +request.headers["tier"] == "premium" ? "openai/gpt-5.2" : "openai/gpt-5.2-mini" + +// Logical AND +request.headers["tier"] == "premium" && request.headers["region"] == "us" + +// String contains +request.body.messages[0].content.contains("urgent") + +// Size check +request.body.messages.size() > 10 +---- + + +== Request object schema + +CEL expressions evaluate against the `request` object, which contains: + +=== `request.headers` (map) + +All HTTP headers (lowercase keys). + +[source,cel] +---- +request.headers["x-user-tier"] // Custom header +request.headers["x-customer-id"] // Custom header +request.headers["user-agent"] // Standard header +request.headers["x-request-id"] // Standard header +---- + + +NOTE: Header names are case-insensitive in HTTP, but CEL requires lowercase keys. + +=== `request.body` (object) + +The JSON request body (for `/chat/completions`). 
+ +[source,cel] +---- +request.body.model // String: Requested model +request.body.messages // Array: Conversation messages +request.body.messages[0].role // String: "system", "user", "assistant" +request.body.messages[0].content // String: Message content +request.body.messages.size() // Int: Number of messages +request.body.max_tokens // Int: Max completion tokens (if set) +request.body.temperature // Float: Temperature (if set) +request.body.stream // Bool: Streaming enabled (if set) +---- + + +NOTE: Fields are optional. Use `has()` to check existence: + +[source,cel] +---- +has(request.body.max_tokens) ? request.body.max_tokens : 1000 +---- + + +=== `request.path` (string) + +The request path. + +[source,cel] +---- +request.path == "/v1/chat/completions" +request.path.startsWith("/v1/") +---- + + +=== `request.method` (string) + +The HTTP method. + +[source,cel] +---- +request.method == "POST" +---- + + +== CEL routing patterns + +Each pattern follows this structure: + +* When to use: Scenario description +* Expression: CEL code +* What happens: Routing behavior +* Verify: How to test +* Cost/performance impact: Implications + +=== Tier-based routing + +When to use: Different user tiers (free, pro, enterprise) should get different model quality + +Expression: + +[source,cel] +---- +request.headers["x-user-tier"] == "enterprise" ? "openai/gpt-5.2" : +request.headers["x-user-tier"] == "pro" ? "anthropic/claude-sonnet-4.5" : +"openai/gpt-5.2-mini" +---- + + +What happens: + +* Enterprise users → GPT-5.2 (best quality) +* Pro users → Claude Sonnet 4.5 (balanced) +* Free users → GPT-5.2-mini (cost-effective) + +Verify: + +[source,python] +---- +# Test enterprise +response = client.chat.completions.create( + model="openai/gpt-5.2", # CEL routing rules override model selection + messages=[{"role": "user", "content": "Test"}], + extra_headers={"x-user-tier": "enterprise"} +) +# Check logs: Should route to openai/gpt-5.2 + +# Test free +response = client.chat.completions.create( + model="openai/gpt-5.2", # CEL routing rules override model selection + messages=[{"role": "user", "content": "Test"}], + extra_headers={"x-user-tier": "free"} +) +# Check logs: Should route to openai/gpt-5.2-mini +---- + + +Cost impact: + +* Enterprise: ~$5.00 per 1K requests +* Pro: ~$3.50 per 1K requests +* Free: ~$0.50 per 1K requests + +Use case: SaaS product with tiered pricing where model quality is a differentiator + +=== Environment-based routing + +When to use: Prevent staging from using expensive models + +Expression: + +[source,cel] +---- +request.headers["x-environment"] == "production" + ? 
"openai/gpt-5.2" + : "openai/gpt-5.2-mini" +---- + + +What happens: + +* Production → GPT-5.2 (best quality) +* Staging/dev → GPT-5.2-mini (10x cheaper) + +Verify: + +[source,python] +---- +# Set environment header +response = client.chat.completions.create( + model="openai/gpt-5.2", # CEL routing rules override model selection + messages=[{"role": "user", "content": "Test"}], + extra_headers={"x-environment": "staging"} +) +# Check logs: Should route to gpt-5.2-mini +---- + + +Cost impact: + +* Prevents staging from inflating costs +* Example: Staging with 100K test requests/day + * GPT-5.2: $500/day ($15K/month) + * GPT-5.2-mini: $50/day ($1.5K/month) + * *Savings: $13.5K/month* + +Use case: Protect against runaway staging costs + + +=== Content-length guard rails + +When to use: Block or downgrade long prompts to prevent cost spikes + +//// +Expression (Block): + +[source,cel] +---- +request.body.messages.size() > 10 || request.body.max_tokens > 4000 + ? "reject" + : "openai/gpt-5.2" +---- + +What happens: +* Requests with >10 messages or >4000 max_tokens -> Rejected with 400 error +* Normal requests -> GPT-5.2 +//// + +Expression (Downgrade): + +[source,cel] +---- +request.body.messages.size() > 10 || request.body.max_tokens > 4000 + ? "openai/gpt-5.2-mini" // Cheaper model + : "openai/gpt-5.2" // Normal model +---- + + +What happens: + +* Long conversations → Downgraded to cheaper model +* Short conversations → Premium model + +Verify: + +[source,python] +---- +# Test rejection +response = client.chat.completions.create( + model="openai/gpt-5.2", # CEL routing rules override model selection + messages=[{"role": "user", "content": f"Message {i}"} for i in range(15)], + max_tokens=5000 +) +# Should return 400 error (rejected) + +# Test normal +response = client.chat.completions.create( + model="openai/gpt-5.2", # CEL routing rules override model selection + messages=[{"role": "user", "content": "Short message"}], + max_tokens=100 +) +# Should route to gpt-5.2 +---- + + +Cost impact: + +* Prevents unexpected bills from verbose prompts +* Example: Block requests >10K tokens (would cost $0.15 each) + +Use case: Staging cost controls, prevent prompt injection attacks that inflate token usage + +=== Topic-based routing + +When to use: Route different question types to specialized models + +Expression: + +[source,cel] +---- +request.body.messages[0].content.contains("code") || +request.body.messages[0].content.contains("debug") || +request.body.messages[0].content.contains("programming") + ? 
"openai/gpt-5.2" // Better at code + : "anthropic/claude-sonnet-4.5" // Better at general writing +---- + + +What happens: + +* Coding questions → GPT-5.2 (optimized for code) +* General questions → Claude Sonnet (better prose) + +Verify: + +[source,python] +---- +# Test code question +response = client.chat.completions.create( + model="openai/gpt-5.2", # CEL routing rules override model selection + messages=[{"role": "user", "content": "Debug this Python code: ..."}] +) +# Check logs: Should route to gpt-5.2 + +# Test general question +response = client.chat.completions.create( + model="openai/gpt-5.2", # CEL routing rules override model selection + messages=[{"role": "user", "content": "Write a blog post about AI"}] +) +# Check logs: Should route to claude-sonnet-4.5 +---- + + +Cost impact: + +* Optimize model selection for task type +* Could improve quality without increasing costs + +Use case: Multi-purpose chatbot with both coding and general queries + + +=== Geographic/regional routing + +When to use: Route by user region to different providers or gateways for compliance or latency optimization + +Expression: + +[source,cel] +---- +request.headers["x-user-region"] == "eu" + ? "anthropic/claude-sonnet-4.5" // EU traffic to Anthropic + : "openai/gpt-5.2" // Other traffic to OpenAI +---- + + +What happens: + +* EU users -> Anthropic (for EU data processing requirements) +* Other users -> OpenAI (default provider) + +NOTE: To achieve true data residency, configure separate gateways per region with provider pools that meet your compliance requirements. + +Verify: + +[source,python] +---- +response = client.chat.completions.create( + model="openai/gpt-5.2", # CEL routing rules override model selection + messages=[{"role": "user", "content": "Test"}], + extra_headers={"x-user-region": "eu"} +) +# Check logs: Should route to anthropic/claude-sonnet-4.5 +---- + + +Cost impact: Varies by provider pricing + +Use case: GDPR compliance, data residency requirements + + +=== Customer-specific routing + +When to use: Different customers have different model access (enterprise features) + +Expression: + +[source,cel] +---- +request.headers["x-customer-id"] == "customer_vip_123" + ? "anthropic/claude-opus-4.6" // Most expensive, best quality + : "anthropic/claude-sonnet-4.5" // Standard +---- + + +What happens: + +* VIP customer → Best model +* Standard customers → Normal model + +Verify: + +[source,python] +---- +response = client.chat.completions.create( + model="openai/gpt-5.2", # CEL routing rules override model selection + messages=[{"role": "user", "content": "Test"}], + extra_headers={"x-customer-id": "customer_vip_123"} +) +# Check logs: Should route to claude-opus-4 +---- + + +Cost impact: + +* VIP: ~$7.50 per 1K requests +* Standard: ~$3.50 per 1K requests + +Use case: Enterprise contracts with premium model access + + +//// +=== A/B testing (percentage-based routing) + +When to use: Test new models with a percentage of traffic + +PLACEHOLDER: Confirm if CEL can access random functions or if A/B testing requires different mechanism + +Expression (if random is available): + +[source,cel] +---- +PLACEHOLDER: Verify CEL random function availability +random() < 0.10 + ? "anthropic/claude-opus-4.6" // 10% traffic to new model + : "openai/gpt-5.2" // 90% traffic to existing model +---- + + +Alternative (hash-based): + +[source,cel] +---- +// Use customer ID hash for stable routing +hash(request.headers["x-customer-id"]) % 100 < 10 + ? 
"anthropic/claude-opus-4.6" + : "openai/gpt-5.2" +---- + + +What happens: + +* 10% of requests -> New model (Opus 4) +* 90% of requests -> Existing model (GPT-5.2) + +Verify: + +[source,python] +---- +# Send 100 requests, count which model was used +for i in range(100): + response = client.chat.completions.create( + model="openai/gpt-5.2", + messages=[{"role": "user", "content": f"Test {i}"}], + extra_headers={"x-customer-id": f"customer_{i}"} + ) +# Check logs: ~10 should use opus-4.6, ~90 should use gpt-5.2 +---- + + +Cost impact: + +* Allows safe, incremental rollout of new models +* Monitor quality/cost for new model before full adoption + +Use case: Evaluate new models in production with real traffic +//// + +=== Complexity-based routing + +When to use: Route simple queries to cheap models, complex queries to expensive models + +Expression: + +[source,cel] +---- +request.body.messages.size() == 1 && +request.body.messages[0].content.size() < 100 + ? "openai/gpt-5.2-mini" // Simple, short question + : "openai/gpt-5.2" // Complex or long conversation +---- + + +What happens: + +* Single short message (<100 chars) → Cheap model +* Multi-turn or long messages → Premium model + +Verify: + +[source,python] +---- +# Test simple +response = client.chat.completions.create( + model="openai/gpt-5.2", # CEL routing rules override model selection + messages=[{"role": "user", "content": "Hi"}] # 2 chars +) +# Check logs: Should route to gpt-5.2-mini + +# Test complex +response = client.chat.completions.create( + model="openai/gpt-5.2", # CEL routing rules override model selection + messages=[ + {"role": "user", "content": "Long question here..." * 10}, + {"role": "assistant", "content": "Response"}, + {"role": "user", "content": "Follow-up"} + ] +) +# Check logs: Should route to gpt-5.2 +---- + + +Cost impact: + +* Can reduce costs significantly if simple queries are common +* Example: 50% of queries are simple, save 90% on those = 45% total savings + +Use case: FAQ chatbot with mix of simple lookups and complex questions + +//// +=== Time-based routing + +When to use: Use cheaper models during off-peak hours + +PLACEHOLDER: Confirm if CEL has access to current timestamp + +Expression (if time functions available): + +[source,cel] +---- +PLACEHOLDER: Verify CEL time function availability +now().hour >= 22 || now().hour < 6 // 10pm - 6am + ? "openai/gpt-5.2-mini" // Off-peak: cheaper model + : "openai/gpt-5.2" // Peak hours: best model +---- + + +What happens: + +* Off-peak hours (10pm-6am) -> Cheap model +* Peak hours (6am-10pm) -> Premium model + +Cost impact: + +* Optimize for user experience during peak usage +* Save costs during low-traffic hours + +Use case: Consumer apps with time-zone-specific usage patterns +//// + + +=== Fallback chain (multi-level) + +When to use: Complex fallback logic beyond simple primary/secondary + +Expression: + +[source,cel] +---- +request.headers["x-priority"] == "critical" + ? "openai/gpt-5.2" // First choice for critical + : request.headers["x-user-tier"] == "premium" + ? 
"anthropic/claude-sonnet-4.5" // Second choice for premium + : "openai/gpt-5.2-mini" // Default for everyone else +---- + + +What happens: + +* Critical requests → Always GPT-5.2 +* Premium non-critical → Claude Sonnet +* Everyone else → GPT-5.2-mini + +Verify: Test with different header combinations + +Cost impact: Ensures SLA for critical requests while optimizing costs elsewhere + +Use case: Production systems with SLA requirements + + +== Advanced CEL patterns + +=== Default values with `has()` + +Problem: Field might not exist in request + +Expression: + +[source,cel] +---- +has(request.body.max_tokens) && request.body.max_tokens > 2000 + ? "openai/gpt-5.2" // Long response expected + : "openai/gpt-5.2-mini" // Short response +---- + + +What happens: Safely checks if `max_tokens` exists before comparing + +=== Multiple conditions with parentheses + +Expression: + +[source,cel] +---- +(request.headers["x-user-tier"] == "premium" || + request.headers["x-customer-id"] == "vip_123") && +request.headers["x-environment"] == "production" + ? "openai/gpt-5.2" + : "openai/gpt-5.2-mini" +---- + + +What happens: Premium users OR VIP customer, AND production → GPT-5.2 + +=== Regex matching + +Expression: + +[source,cel] +---- +request.body.messages[0].content.matches("(?i)(urgent|asap|emergency)") + ? "openai/gpt-5.2" // Route urgent requests to best model + : "openai/gpt-5.2-mini" +---- + + +What happens: Messages containing "urgent", "ASAP", or "emergency" (case-insensitive) → GPT-5.2 + +=== String array contains + +Expression: + +[source,cel] +---- +["customer_1", "customer_2", "customer_3"].exists(c, c == request.headers["x-customer-id"]) + ? "openai/gpt-5.2" // Whitelist of customers + : "openai/gpt-5.2-mini" +---- + + +What happens: Only specific customers get premium model + +//// +=== Reject invalid requests + +Expression: + +[source,cel] +---- +!has(request.body.messages) || request.body.messages.size() == 0 + ? "reject" // PLACEHOLDER: Confirm "reject" is supported + : "openai/gpt-5.2" +---- + +What happens: Requests without messages are rejected (400 error) +//// + +== Test CEL expressions + +=== Option 1: CEL editor in UI (if available) + +1. Navigate to Gateways → Routing Rules +2. Enter CEL expression +3. Click "Test" +4. Input test headers/body +5. View evaluated result + +=== Option 2: Send test requests + +[source,python] +---- +def test_cel_routing(headers, messages): + """Test CEL routing with specific headers and messages""" + response = client.chat.completions.create( + model="openai/gpt-5.2", # CEL routing rules override model selection + messages=messages, + extra_headers=headers, + max_tokens=10 # Keep it cheap + ) + + # Check logs to see which model was used + print(f"Headers: {headers}") + print(f"Routed to: {response.model}") + +# Test tier-based routing +test_cel_routing( + {"x-user-tier": "premium"}, + [{"role": "user", "content": "Test"}] +) +test_cel_routing( + {"x-user-tier": "free"}, + [{"role": "user", "content": "Test"}] +) +---- + + +//// +=== Option 3: CLI test (if available) + +[source,bash] +---- +# PLACEHOLDER: If CLI tool exists for testing CEL +rpk cloud ai-gateway test-cel \ + --gateway-id gw_abc123 \ + --expression 'request.headers["tier"] == "premium" ? 
"openai/gpt-5.2" : "openai/gpt-5.2-mini"' \ + --header 'tier: premium' \ + --body '{"messages": [{"role": "user", "content": "Test"}]}' + +# Expected output: openai/gpt-5.2 +---- +//// + + +== Common CEL errors + +=== Error: "unknown field" + +Symptom: + +[source,text] +---- +Error: Unknown field 'request.headers.x-user-tier' +---- + + +Cause: Wrong syntax (dot notation instead of bracket notation for headers) + +Fix: + +[source,cel] +---- +// Wrong +request.headers.x-user-tier + +// Correct +request.headers["x-user-tier"] +---- + + +=== Error: "type mismatch" + +Symptom: + +[source,text] +---- +Error: Type mismatch: expected bool, got string +---- + + +Cause: Forgot comparison operator + +Fix: + +[source,cel] +---- +// Wrong (returns string) +request.headers["tier"] + +// Correct (returns bool) +request.headers["tier"] == "premium" +---- + + +=== Error: "field does not exist" + +Symptom: + +[source,text] +---- +Error: No such key: max_tokens +---- + + +Cause: Accessing field that doesn't exist in request + +Fix: +[source,cel] +---- +// Wrong (crashes if max_tokens not in request) +request.body.max_tokens > 1000 + +// Correct (checks existence first) +has(request.body.max_tokens) && request.body.max_tokens > 1000 +---- + + +=== Error: "index out of bounds" + +Symptom: + +[source,text] +---- +Error: Index 0 out of bounds for array of size 0 +---- + + +Cause: Accessing array element that doesn't exist + +Fix: + +[source,cel] +---- +// Wrong (crashes if messages empty) +request.body.messages[0].content.contains("test") + +// Correct (checks size first) +request.body.messages.size() > 0 && request.body.messages[0].content.contains("test") +---- + + +== CEL performance considerations + +=== Expression complexity + +Fast (<1ms evaluation): + +[source,cel] +---- +request.headers["tier"] == "premium" ? "openai/gpt-5.2" : "openai/gpt-5.2-mini" +---- + + +Slower (~5-10ms evaluation): + +[source,cel] +---- +request.body.messages[0].content.matches("complex.*regex.*pattern") +---- + + +Recommendation: Keep expressions simple. Complex regex can add latency. + +=== Number of evaluations + +Each request evaluates CEL expression once. Total latency impact: +* Simple expression: <1ms +* Complex expression: ~5-10ms + +*Acceptable for most use cases.* + +== CEL function reference + +=== String functions + +[cols="2,3,3"] +|=== +| Function | Description | Example + +| `size()` +| String length +| `"hello".size() == 5` + +| `contains(s)` +| String contains +| `"hello".contains("ell")` + +| `startsWith(s)` +| String starts with +| `"hello".startsWith("he")` + +| `endsWith(s)` +| String ends with +| `"hello".endsWith("lo")` + +| `matches(regex)` +| Regex match +| `"hello".matches("h.*o")` +|=== + +=== Array functions + +[cols="2,3,3"] +|=== +| Function | Description | Example + +| `size()` +| Array length +| `[1,2,3].size() == 3` + +| `exists(x, cond)` +| Any element matches +| `[1,2,3].exists(x, x > 2)` + +| `all(x, cond)` +| All elements match +| `[1,2,3].all(x, x > 0)` +|=== + +=== Utility functions + +[cols="2,3,3"] +|=== +| Function | Description | Example + +| `has(field)` +| Field exists +| `has(request.body.max_tokens)` +|=== + +== Next steps + +* *Apply CEL routing*: See the gateway configuration options available in the Redpanda Cloud console. 
diff --git a/modules/ai-agents/pages/ai-gateway/gateway-architecture.adoc b/modules/ai-agents/pages/ai-gateway/gateway-architecture.adoc new file mode 100644 index 000000000..c3b59fb7b --- /dev/null +++ b/modules/ai-agents/pages/ai-gateway/gateway-architecture.adoc @@ -0,0 +1,221 @@ += AI Gateway Architecture +:description: Technical architecture of Redpanda AI Gateway, including how the control plane, data plane, and observability plane deliver high availability, cost governance, and multi-tenant isolation. +:page-topic-type: concept +:personas: app_developer, platform_admin +:learning-objective-1: Describe the three architectural planes of AI Gateway +:learning-objective-2: Explain the request lifecycle through policy evaluation stages +:learning-objective-3: Identify supported providers, features, and current limitations + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] + +This page provides technical details about AI Gateway's architecture, request processing, and capabilities. For an introduction to AI Gateway and the problems it solves, see xref:ai-agents:ai-gateway/what-is-ai-gateway.adoc[] + +== Architecture overview + +AI Gateway consists of three planes: a glossterm:control plane[] for configuration and management, a glossterm:data plane[] for request processing and routing, and an observability plane for monitoring and analytics. + +// PLACEHOLDER: Add architecture diagram showing: +// 1. Control Plane: +// - Workspace management +// - Provider/model configuration +// - Gateway creation and policy definition +// - Admin console +// +// 2. Data Plane: +// - Request ingestion +// - Policy evaluation (rate limits → spend limits → routing → execution) +// - Provider pool selection and failover +// - MCP aggregation layer +// - Response logging and metrics +// +// 3. Observability Plane: +// - Request logs storage +// - Metrics aggregation +// - Dashboard UI + +=== Control plane + +The control plane manages gateway configuration and policy definition: + +* **Workspace management**: Multi-tenant isolation with separate namespaces for different teams or environments +* **Provider configuration**: Enable and configure LLM providers (OpenAI, Anthropic, etc.) 
+* **Gateway creation**: Define gateways with specific routing rules, budgets, and rate limits +* **Policy definition**: Create CEL-based routing policies, spend limits, and rate limits +* **MCP server registration**: Configure which MCP servers are available to agents + +=== Data plane + +The data plane handles all runtime request processing: + +* **Request ingestion**: Accept requests via OpenAI-compatible API endpoints +* **Authentication**: Validate API keys and gateway access +* **Policy evaluation**: Apply rate limits, spend limits, and routing policies +* **Provider pool management**: Select primary or fallback providers based on availability +* **MCP aggregation**: Aggregate tools from multiple MCP servers with deferred loading +* **Response transformation**: Normalize provider-specific responses to OpenAI format +* **Metrics collection**: Record token usage, latency, and cost for every request + +=== Observability plane + +The observability plane provides monitoring and analytics: + +* **Request logs**: Store full request/response history with prompt and completion content +* **Metrics aggregation**: Calculate token usage, costs, latency percentiles, and error rates +* **Dashboard UI**: Display real-time and historical analytics per gateway, model, or provider +* **Cost tracking**: Estimate spend based on provider pricing and token consumption + +== Request lifecycle + +When a request flows through AI Gateway, it passes through several policy and routing stages before reaching the LLM provider. Understanding this lifecycle helps you configure policies effectively and troubleshoot issues: + +. Application sends request to gateway endpoint +. Gateway authenticates request +. Rate limit policy evaluates (allow/deny) +. Spend limit policy evaluates (allow/deny) +. Routing policy evaluates (which model/provider to use) +. Provider pool selects backend (primary/fallback) +. Request forwarded to LLM provider +. Response returned to application +. Request logged with tokens, cost, latency, status + +Each policy evaluation happens synchronously in the request path. If rate limits or spend limits reject the request, the gateway returns an error immediately without calling the LLM provider, which helps you control costs. + +=== MCP tool request lifecycle + +For MCP tool requests, the lifecycle differs slightly to support deferred tool loading: + +. Application discovers tools via `/mcp` endpoint +. Gateway aggregates tools from approved MCP servers +. Application receives search + orchestrator tools (deferred loading) +. Application invokes specific tool +. Gateway routes to appropriate MCP server +. Tool execution result returned +. Request logged with execution time, status + +The gateway only loads and exposes specific tools when requested, which dramatically reduces the token overhead compared to loading all tools upfront. + +ifdef::ai-hub-available[] +== AI Hub mode architecture + +AI Gateway supports two modes. In Custom mode, administrators configure all routing rules and backend pools manually. In AI Hub mode, the gateway provides pre-configured intelligent routing. 
+ +=== Intelligent router + +AI Hub mode implements an intelligent router with immutable system rules and user-configurable preferences: + +*6 Pre-configured Backend Pools:* + +* OpenAI (standard requests) +* OpenAI Streaming +* Anthropic with OpenAI-compatible transform (standard requests) +* Anthropic with OpenAI-compatible transform (streaming) +* Anthropic Native (direct passthrough for `/v1/messages`) +* Anthropic Native Streaming + +*17 System Routing Rules:* + +Immutable rules that route requests based on: + +* Model prefix: `openai/*`, `anthropic/*` +* Model name patterns: `gpt-*`, `claude-*`, `o1-*` +* Special routing: embeddings, images, audio, content moderation, legacy completions → OpenAI only +* Native SDK detection: `/v1/messages` → Anthropic passthrough +* Streaming detection → Extended timeout backends + +*Automatic Failover:* + +Built-in fallback behavior when primary providers are unavailable (configurable via preference toggles). + +*6 User Preference Toggles:* + +Configurable preferences that influence routing without modifying rules (see xref:ai-gateway/admin/configure-ai-hub.adoc[] for details). + +Configurable preferences that influence routing without modifying rules. + +=== System-managed vs user-configurable resources + +In AI Hub mode, resources are divided into two categories: + +*System-Managed Resources* (immutable): + +* Backend pool definitions +* Core routing rules +* Failover logic +* Provider selection algorithms + +*User-Configurable Resources:* + +* Provider credentials (OpenAI, Anthropic, Google Gemini) +* 6 preference toggles +* Rate limits (within bounds) +* Spend limits + +This separation ensures consistent, reliable behavior while allowing customization of common preferences. + +=== Ejecting to Custom mode + +Gateways can be ejected from AI Hub mode to Custom mode in a one-way transition. After ejection: + +* `gateway.mode` changes from `ai_hub` to `custom` +* All previously system-managed resources become user-configurable +* No more automatic AI Hub version updates +* Full control over routing rules, backend pools, and policies + +This allows organizations to start with zero-configuration simplicity and graduate to full control when needed. + +See xref:ai-gateway/admin/eject-to-custom-mode.adoc[] for the ejection process. +enddef::[] + +// == Supported features + +// === LLM providers + +// * OpenAI +// * Anthropic +// * // PLACEHOLDER: Google, AWS Bedrock, Azure OpenAI, others? + +// === API compatibility + +// * OpenAI-compatible `/v1/chat/completions` endpoint +// * // PLACEHOLDER: Streaming support? +// * // PLACEHOLDER: Embeddings support? +// * // PLACEHOLDER: Other endpoints? + +// === Policy features + +// * CEL-based routing expressions +// * Rate limiting (// PLACEHOLDER: per-gateway, per-header, per-tenant?) +// * Monthly spend limits (// PLACEHOLDER: per-gateway, per-workspace?) +// * Provider pools with automatic failover +// * // PLACEHOLDER: Caching support? + +// === MCP support + +// * MCP server aggregation +// * Deferred tool loading (often 80-90% token reduction depending on configuration) +// * JavaScript orchestrator for multi-step workflows +// * PLACEHOLDER: Tool execution sandboxing? + +// === Observability + +// * Request logs with full prompt/response history +// * Token usage tracking +// * Estimated cost per request +// * Latency metrics +// * PLACEHOLDER: Metrics export? OpenTelemetry support? 
+ +// == Current limitations + +// * // PLACEHOLDER: List current limitations, for example: +// ** // - Custom model deployments (Azure OpenAI BYOK, AWS Bedrock custom models) +// ** // - Response caching +// ** // - Prompt templates/versioning +// ** // - Guardrails (PII detection, content moderation) +// ** // - Multi-region active-active deployment +// ** // - Metrics export to external systems +// ** // - Budget alerts/notifications + +== Next steps + +* xref:ai-agents:ai-gateway/gateway-quickstart.adoc[]: Route your first request through AI Gateway +* xref:ai-agents:ai-gateway/mcp-aggregation-guide.adoc[]: Configure MCP server aggregation for AI agents diff --git a/modules/ai-agents/pages/ai-gateway/gateway-quickstart.adoc b/modules/ai-agents/pages/ai-gateway/gateway-quickstart.adoc new file mode 100644 index 000000000..5f51a689a --- /dev/null +++ b/modules/ai-agents/pages/ai-gateway/gateway-quickstart.adoc @@ -0,0 +1,546 @@ += AI Gateway Quickstart +:description: Get started with AI Gateway. Configure providers, create your first gateway with failover and budget controls, and route your first request. +:page-topic-type: quickstart +:personas: app_developer, platform_admin +:learning-objective-1: Enable an LLM provider and create your first gateway +:learning-objective-2: Route your first request through AI Gateway and verify it works +:learning-objective-3: Verify request routing and token usage in the gateway overview + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] + +Redpanda AI Gateway keeps your AI-powered applications running and your costs under control by routing all LLM and MCP traffic through a single managed layer with automatic failover and budget enforcement. This quickstart walks you through configuring your first gateway and routing requests through it. + +== Prerequisites + +Before starting, ensure you have: + +* Access to the AI Gateway UI (provided by your administrator) +* Admin permissions to configure providers and models +* API key for at least one LLM provider (OpenAI, Anthropic, or Google AI) +* Python 3.8+, Node.js 18+, or cURL (for testing) + +== Configure a provider + +Providers represent upstream LLM services and their associated credentials. Providers are disabled by default and must be enabled explicitly. + +. Navigate to *Providers*. +. Select a provider (for example, OpenAI, Anthropic, Google AI). +. On the Configuration tab, click *Add configuration* and enter your API key. +. Verify the provider status shows "Active". + +== Enable models + +After enabling a provider, enable the specific models you want to make available through your gateways. + +. Navigate to *Models*. +. Enable the models you want to use (for example, `gpt-5.2-mini`, `claude-sonnet-4.5`, `claude-opus-4.6`). +. Verify the models appear as "Enabled" in the model catalog. + +TIP: Different providers have different reliability and cost characteristics. When choosing models, consider your use case requirements for quality, speed, and cost. + +=== Model naming convention + +Requests through AI Gateway must use the `vendor/model_id` format. For example: + +* OpenAI models: `openai/gpt-5.2`, `openai/gpt-5.2-mini` +* Anthropic models: `anthropic/claude-sonnet-4.5`, `anthropic/claude-opus-4.6` +* Google Gemini models: `google/gemini-2.0-flash`, `google/gemini-2.0-pro` + +This format allows the gateway to route requests to the correct provider. 
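For example, the following minimal request (a sketch, using placeholder values for the gateway endpoint and API key) sets the `model` property to a `vendor/model_id` string. Switching providers means changing only that string:

[source,python]
----
from openai import OpenAI

# Placeholders: copy the endpoint from your gateway overview page
client = OpenAI(base_url="<gateway-endpoint>", api_key="<api-key>")

# The vendor prefix determines which provider the gateway routes to
response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4.5",  # or "openai/gpt-5.2-mini"
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
----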
+ +== Create a gateway + +A gateway is a logical configuration boundary that defines routing policies, rate limits, spend limits, and observability scope. Common gateway patterns include the following: + +* Environment separation: Create separate gateways for staging and production +* Team isolation: One gateway per team for budget tracking +* Customer multi-tenancy: One gateway per customer for isolated policies + +ifdef::ai-hub-available[] +[IMPORTANT] +==== +When creating a gateway, you choose between two modes: + +* *AI Hub Mode*: Zero-configuration with pre-configured routing and backend pools. Just add provider credentials and start routing requests. Ideal for quick starts and standard use cases. +* *Custom Mode*: Full control over all routing rules, backend pools, and policies. Requires manual configuration. Ideal for custom routing logic and specialized requirements. + +See xref:ai-gateway/gateway-modes.adoc[] to understand which mode fits your needs. This quickstart focuses on Custom mode configuration. +==== +endif::[] + +. Navigate to *Gateways*. +. Click *Create Gateway*. ++ +ifdef::ai-hub-available[] +. Select the gateway mode: +* *AI Hub*: Choose this for pre-configured intelligent routing (see xref:ai-gateway/admin/configure-ai-hub.adoc[] for setup) +* *Custom*: Choose this for full configuration control +endif::[] +. Configure the gateway: ++ +** Display name: Choose a descriptive name (for example, `my-first-gateway`) +** Workspace: Select a workspace (conceptually similar to a resource group) +** Description: Add context about this gateway's purpose +** Optional metadata for documentation + +After creation, copy the gateway endpoint from the overview page. You'll need this for sending requests. The gateway ID is embedded in the endpoint URL. For example: + +[source,bash] +---- +Endpoint: https://example/gateways/d633lffcc16s73ct95mg/v1 +Gateway ID: d633lffcc16s73ct95mg +---- + +== Send your first request + +Now that you've configured a provider and created a gateway, send a test request to verify everything works. + +[tabs] +==== +Python:: ++ +-- +[source,python] +---- +from openai import OpenAI + +client = OpenAI( + base_url="", + api_key="", # Or use gateway's auth +) + +response = client.chat.completions.create( + model="openai/gpt-5.2", # Use vendor/model format + messages=[ + {"role": "user", "content": "Hello!"} + ], +) + +print(response.choices[0].message.content) +---- + +Expected output: + +[source,text] +---- +Hello! How can I help you today? +---- +-- + +Node.js:: ++ +-- +[source,javascript] +---- +import OpenAI from 'openai'; + +const client = new OpenAI({ + baseURL: '', + apiKey: '', // Or use gateway's auth +}); + +const response = await client.chat.completions.create({ + model: 'anthropic/claude-sonnet-4-5-20250929', // Use vendor/model format + messages: [ + { role: 'user', content: 'Hello!' } + ], +}); + +console.log(response.choices[0].message.content); +---- + +Expected output: + +[source,text] +---- +Hello! How can I help you today? +---- +-- + +cURL:: ++ +-- +[source,bash] +---- +curl /chat/completions \ + -H "Content-Type: application/json" \ + -H "Authorization: Bearer " \ + -d '{ + "model": "openai/gpt-5.2", + "messages": [ + {"role": "user", "content": "Hello!"} + ] + }' +---- + +Expected output: + +[source,json] +---- +{ + "id": "chatcmpl-abc123", + "object": "chat.completion", + "model": "openai/gpt-5.2", + "choices": [ + { + "index": 0, + "message": { + "role": "assistant", + "content": "Hello! How can I help you today?" 
+ }, + "finish_reason": "stop" + } + ], + "usage": { + "prompt_tokens": 9, + "completion_tokens": 9, + "total_tokens": 18 + } +} +---- +-- +==== + +=== Troubleshooting + +If your request fails, check these common issues: + +* 401 Unauthorized: Verify your API key is valid +* 404 Not Found: Confirm the base URL matches your gateway endpoint +* Model not found: Ensure the model is enabled in the model catalog and that you're using the correct `vendor/model` format. + +== Verify in the gateway overview + +Confirm your request was routed through AI Gateway. + +. On the *Overview* tab, check the aggregate metrics: ++ +* *Total Requests*: Should have incremented +* *Total Tokens*: Shows combined input and output tokens +* *Total Cost*: Estimated spend across all requests +* *Avg Latency*: Average response time in milliseconds + +. Scroll to the *Models* table to see per-model statistics: ++ +The model you used in your request should appear with its request count, token usage (input/output), estimated cost, latency, and error rate. + +== Configure LLM routing (optional) + +Configure rate limits, spend limits, and provider pools with failover. + +On the Gateways page, select the *LLM* tab to configure routing policies. The LLM routing pipeline represents the request lifecycle: + +. *Rate Limit*: Control request throughput (for example, 100 requests/second) +. *Spend Limit*: Set monthly budget caps (for example, $15K/month with blocking enforcement) +. *Provider Pools*: Define primary and fallback providers + +=== Configure provider pool with fallback + +For high availability, configure a fallback provider that activates when the primary fails: + +. Add a second provider (for example, Anthropic). +. In your gateway's *LLM* routing configuration: ++ +* *Primary pool*: OpenAI (preferred for quality) +* *Fallback pool*: Anthropic (activates on rate limits, timeouts, or errors) + +. Save the configuration. + +The gateway automatically routes to the fallback when it detects: + +* Rate limit exceeded +* Request timeout +* 5xx server errors from primary provider + +// Monitor the fallback rate in observability to detect primary provider issues early. + +== Configure MCP tools (optional) + +If you're using glossterm:AI agent[,AI agents], configure glossterm:MCP[,Model Context Protocol (MCP)] tool aggregation. + +On the Gateways page, select the *MCP* tab to configure tool discovery and execution. The MCP proxy aggregates multiple glossterm:MCP server[,MCP servers] behind a single endpoint, allowing agents to discover and call glossterm:MCP tool[,tools] through the gateway. + +Configure the MCP settings: + +* *Display name*: Descriptive name for the provider pool +* *Model*: Choose which model handles tool execution +* *Load balancing*: If multiple providers are available, select a strategy (for example, round robin) + +=== Available MCP tools + +The gateway provides these built-in MCP tools: + +* *Data catalog API*: Query your data catalog +* *Memory store*: Persistent storage for agent state +* *Vector search*: Semantic search over embeddings +* *MCP orchestrator*: Built-in tool for programmatic multi-tool workflows + +The *MCP orchestrator* enables agents to generate JavaScript code that calls multiple tools in a single orchestrated step, reducing round trips. For example, a workflow requiring 47 file reads can be reduced from 49 round trips to just 1. + +To add external tools (for example, Slack, GitHub), add their MCP server endpoints to your gateway configuration. 
+ +=== Deferred tool loading + +When many tools are aggregated, listing all tools upfront can consume significant tokens. With deferred tool loading, the MCP gateway initially returns only: + +* A tool search capability +* The MCP orchestrator + +Agents then search for specific tools they need, retrieving only that subset. This can reduce token usage by 80-90% when you have many tools configured. + +// REVIEWERS: When/how exactly do you use the orchestrator? Also what happens after they create a gateway? Please provide an example of how to validate end-to-end routing against the gateway endpoint! + +// REVIEWERS: How do users connect to the ADP catalog + MCP servers exposed through RPCN? + +== Configure CEL routing rule (optional) + +Use CEL (Common Expression Language) expressions to route requests dynamically based on headers, content, or other request properties. + +The AI Gateway uses CEL for flexible routing without code changes. Use CEL to: + +* Route premium users to better models +* Apply different rate limits based on user tiers +* Enforce policies based on request content + +=== Add a routing rule + +In your gateway's routing configuration: + +. Add a CEL expression to route based on user tier: ++ +[source,cel] +---- +# Route based on user tier header +request.headers["x-user-tier"] == "premium" + ? "openai/gpt-5.2" + : "openai/gpt-5.2-mini" +---- + +. Save the rule. + +The gateway editor helps you discover available request fields (headers, path, body, and so on). + +=== Test the routing rule + +Send requests with different headers to verify routing: + +*Premium user request*: + +[source,python] +---- +response = client.chat.completions.create( + model="openai/gpt-5.2", # Will be routed based on CEL rule + messages=[{"role": "user", "content": "Hello"}], + extra_headers={"x-user-tier": "premium"} +) +# Should route to gpt-5.2 (premium model) +---- + +*Free user request*: + +[source,python] +---- +response = client.chat.completions.create( + model="openai/gpt-5.2-mini", + messages=[{"role": "user", "content": "Hello"}], + extra_headers={"x-user-tier": "free"} +) +# Should route to gpt-5.2-mini (cost-effective model) +---- + +// Check the observability dashboard to verify: +// +// * The correct model was selected based on the header value +// * The routing decision explanation shows which CEL rule matched + +=== Common CEL patterns + +Route based on model family: + +[source,cel] +---- +request.body.model.startsWith("anthropic/") +---- + +Apply a rule to all requests: + +[source,cel] +---- +true +---- + +Guard for field existence: + +[source,cel] +---- +has(request.body.max_tokens) && request.body.max_tokens > 1000 +---- + +For more CEL examples, see xref:ai-agents:ai-gateway/cel-routing-cookbook.adoc[]. + +== Connect AI tools to your gateway + +The AI Gateway provides standardized endpoints that work with various AI development tools. This section shows how to configure popular tools. + +=== MCP endpoint + +If you've configured MCP tools in your gateway, AI agents can connect to the aggregated MCP endpoint: + +* *MCP endpoint URL*: `/mcp` +* *Required headers*: +** `Authorization: Bearer ` + +This endpoint aggregates all MCP servers configured in your gateway. 
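As a quick connectivity check, the following sketch lists the aggregated tools. It assumes the tool-listing path (`/mcp/tools`) and the optional deferred-loading header described in xref:ai-agents:ai-gateway/mcp-aggregation-guide.adoc[], with placeholder values for the endpoint and token:

[source,python]
----
import requests

GATEWAY_ENDPOINT = "<gateway-endpoint>"  # placeholder: copy from the gateway overview page
API_KEY = "<api-key>"                    # placeholder: your Redpanda Cloud token

# List tools aggregated from every MCP server configured in this gateway.
# The optional header enables deferred tool loading.
response = requests.get(
    f"{GATEWAY_ENDPOINT}/mcp/tools",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "rp-aigw-mcp-deferred": "true",
    },
)
response.raise_for_status()
for tool in response.json()["tools"]:
    print(tool["name"])
----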
+ +=== Environment variables + +For consistent configuration, set these environment variables: + +[source,bash] +---- +export REDPANDA_GATEWAY_URL="" +export REDPANDA_API_KEY="" +---- + +=== Claude Code + +Configure Claude Code using HTTP transport for the MCP connection: + +[source,bash] +---- +claude mcp add --transport http redpanda-aigateway /mcp \ + --header "Authorization: Bearer " +---- + +Alternatively, edit `~/.claude/config.json`: + +[source,json] +---- +{ + "mcpServers": { + "redpanda-ai-gateway": { + "transport": "http", + "url": "/mcp", + "headers": { + "Authorization": "Bearer " + } + } + }, + "apiProviders": { + "redpanda": { + "baseURL": "" + } + } +} +---- + +ifdef::integrations-available[] +For detailed Claude Code setup, see xref:ai-agents:ai-gateway/integrations/claude-code-user.adoc[]. +endif::[] + +=== Continue.dev + +Edit your Continue config file (`~/.continue/config.json`): + +[source,json] +---- +{ + "models": [ + { + "title": "Redpanda AI Gateway - GPT-5.2", + "provider": "openai", + "model": "openai/gpt-5.2", + "apiBase": "", + "apiKey": "" + }, + { + "title": "Redpanda AI Gateway - Claude", + "provider": "anthropic", + "model": "anthropic/claude-sonnet-4.5", + "apiBase": "", + "apiKey": "" + }, + { + "title": "Redpanda AI Gateway - Gemini", + "provider": "google", + "model": "google/gemini-2.0-flash", + "apiBase": "", + "apiKey": "" + } + ] +} +---- + +ifdef::integrations-available[] +For detailed Continue setup, see xref:ai-agents:ai-gateway/integrations/continue-user.adoc[]. +endif::[] + +=== Cursor IDE + +Configure Cursor in Settings (*Cursor* → *Settings* or `Cmd+,`): + +[source,json] +---- +{ + "cursor.ai.providers.openai.apiBase": "" +} +---- + +ifdef::integrations-available[] +For detailed Cursor setup, see xref:ai-agents:ai-gateway/integrations/cursor-user.adoc[]. 
+endif::[] + +=== Custom applications + +For custom applications using OpenAI, Anthropic, or Google Gemini SDKs: + +*Python with OpenAI SDK*: + +[source,python] +---- +from openai import OpenAI + +client = OpenAI( + base_url="", + api_key="", +) +---- + +*Python with Anthropic SDK*: + +[source,python] +---- +from anthropic import Anthropic + +client = Anthropic( + base_url="", + api_key="", +) +---- + +*Node.js with OpenAI SDK*: + +[source,javascript] +---- +import OpenAI from 'openai'; + +const openai = new OpenAI({ + baseURL: '', + apiKey: process.env.REDPANDA_API_KEY, +}); +---- + +== Next steps + +Explore advanced AI Gateway features: + +* xref:ai-agents:ai-gateway/cel-routing-cookbook.adoc[]: Advanced CEL routing patterns for traffic distribution and cost optimization +* xref:ai-agents:ai-gateway/mcp-aggregation-guide.adoc[]: Configure MCP server aggregation and deferred tool loading +ifdef::integrations-available[] +* xref:ai-agents:ai-gateway/integrations/index.adoc[]: Connect more AI development tools +endif::[] + +Learn about the architecture: + +* xref:ai-agents:ai-gateway/gateway-architecture.adoc[]: Technical architecture, request lifecycle, and deployment models +* xref:ai-agents:ai-gateway/what-is-ai-gateway.adoc[]: Problems AI Gateway solves and common use cases diff --git a/modules/ai-agents/pages/ai-gateway/index.adoc b/modules/ai-agents/pages/ai-gateway/index.adoc new file mode 100644 index 000000000..5c306c3d2 --- /dev/null +++ b/modules/ai-agents/pages/ai-gateway/index.adoc @@ -0,0 +1,6 @@ += AI Gateway +:description: Keep AI-powered apps running with automatic provider failover, prevent runaway spend with centralized budget controls, and govern access across teams, apps, and service accounts. +:page-layout: index +:personas: platform_admin, app_developer, evaluator + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] \ No newline at end of file diff --git a/modules/ai-agents/pages/ai-gateway/mcp-aggregation-guide.adoc b/modules/ai-agents/pages/ai-gateway/mcp-aggregation-guide.adoc new file mode 100644 index 000000000..279c0e974 --- /dev/null +++ b/modules/ai-agents/pages/ai-gateway/mcp-aggregation-guide.adoc @@ -0,0 +1,1006 @@ += MCP Gateway +:description: Learn how to use the MCP Gateway to aggregate MCP servers, configure deferred tool loading, create orchestrator workflows, and manage security. +:page-topic-type: guide +:personas: app_developer, platform_admin +:learning-objective-1: Configure MCP aggregation with deferred tool loading to reduce token costs +:learning-objective-2: Write orchestrator workflows to reduce multi-step interactions +:learning-objective-3: Manage approved MCP servers with security controls and audit trails + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] + +The MCP Gateway provides glossterm:MCP[,Model Context Protocol (MCP)] aggregation, allowing glossterm:AI agent[,AI agents] to access glossterm:MCP tool[,tools] from multiple MCP servers through a single unified endpoint. This eliminates the need for agents to manage multiple MCP connections and significantly reduces token costs through deferred tool loading. + +MCP Gateway benefits: + +* Single endpoint: One MCP endpoint aggregates all approved MCP servers +* Token reduction: Often 80-90% fewer tokens through deferred tool loading (depending on configuration) +* Centralized governance: Admin-approved MCP servers only +* Orchestration: JavaScript-based orchestrator reduces multi-step round trips +* Security: Controlled tool execution environment + +== What is MCP? 
+ +glossterm:MCP[,Model Context Protocol (MCP)] is a standard for exposing tools (functions) that AI agents can discover and invoke. MCP servers provide tools like: + +* Database queries +* File system operations +* API integrations (CRM, payment, analytics) +* Search (web, vector, enterprise) +* Code execution +* Workflow automation + +[cols="1,1"] +|=== +| Without AI Gateway | With AI Gateway + +| Agent connects to each MCP server individually +| Agent connects to gateway's unified `/mcp` endpoint + +| Agent loads ALL tools from ALL servers upfront (high token cost) +| Gateway aggregates tools from approved MCP servers + +| No centralized governance or security +| Deferred loading: Only search + orchestrator tools sent initially + +| Complex configuration +| Agent queries for specific tools when needed (token savings) + +| +| Centralized governance and observability +|=== + +== Architecture + +[source,text] +---- +┌─────────────────┐ +│ AI Agent │ +│ (Claude, GPT) │ +└────────┬────────┘ + │ + │ 1. Discover tools via /mcp endpoint + │ 2. Invoke specific tool + │ +┌────────▼────────────────────────────────┐ +│ AI Gateway (MCP Aggregator) │ +│ │ +│ ┌─────────────────────────────────┐ │ +│ │ Deferred Tool Loading │ │ +│ │ (Send search + orchestrator │ │ +│ │ initially, defer others) │ │ +│ └─────────────────────────────────┘ │ +│ │ +│ ┌─────────────────────────────────┐ │ +│ │ Orchestrator (JavaScript) │ │ +│ │ (Reduce round trips for │ │ +│ │ multi-step workflows) │ │ +│ └─────────────────────────────────┘ │ +│ │ +│ ┌─────────────────────────────────┐ │ +│ │ Approved MCP Server Registry │ │ +│ │ (Admin-controlled) │ │ +│ └─────────────────────────────────┘ │ +└────────┬────────────────────────────────┘ + │ + │ Routes to appropriate MCP server + │ + ┌────▼─────┬──────────┬─────────┐ + │ │ │ │ +┌───▼────┐ ┌──▼─────┐ ┌──▼──────┐ ┌▼──────┐ +│ MCP │ │ MCP │ │ MCP │ │ MCP │ +│Database│ │Filesystem│ │ Slack │ │Search │ +│Server │ │ Server │ │ Server │ │Server │ +└────────┘ └────────┘ └─────────┘ └───────┘ +---- + + +== MCP request lifecycle + +=== Tool discovery (initial connection) + +Agent request: + +[source,http] +---- +GET /mcp/tools +Headers: + Authorization: Bearer {TOKEN} + rp-aigw-mcp-deferred: true # Enable deferred loading +---- + + +Gateway response (with deferred loading): + +[source,json] +---- +{ + "tools": [ + { + "name": "search_tools", + "description": "Query available tools by keyword or category", + "input_schema": { + "type": "object", + "properties": { + "query": {"type": "string"}, + "category": {"type": "string"} + } + } + }, + { + "name": "orchestrator", + "description": "Execute multi-step workflows with JavaScript logic", + "input_schema": { + "type": "object", + "properties": { + "workflow": {"type": "string"}, + "context": {"type": "object"} + } + } + } + ] +} +---- + + +Note: Only 2 tools returned initially (search + orchestrator), not all 50+ tools from all MCP servers. 
+ +Token savings: + +* Without deferred loading: ~5,000-10,000 tokens (all tool definitions) +* With deferred loading: ~500-1,000 tokens (2 tool definitions) +* Typically 80-90% reduction + +=== Tool query (when agent needs specific tool) + +Agent request: + +[source,http] +---- +POST /mcp/tools/search_tools +Headers: + Authorization: Bearer {TOKEN} +Body: +{ + "query": "database query" +} +---- + + +Gateway response: + +[source,json] +---- +{ + "tools": [ + { + "name": "execute_sql", + "description": "Execute SQL query against the database", + "mcp_server": "database-server", + "input_schema": { + "type": "object", + "properties": { + "query": {"type": "string"}, + "database": {"type": "string"} + }, + "required": ["query"] + } + }, + { + "name": "list_tables", + "description": "List all tables in the database", + "mcp_server": "database-server", + "input_schema": { + "type": "object", + "properties": { + "database": {"type": "string"} + } + } + } + ] +} +---- + + +Agent receives only relevant tools based on query. + +=== Tool execution + +Agent request: + +[source,http] +---- +POST /mcp/tools/execute_sql +Headers: + Authorization: Bearer {TOKEN} +Body: +{ + "query": "SELECT * FROM users WHERE tier = 'premium' LIMIT 10", + "database": "prod" +} +---- + + +Gateway: + +1. Routes to appropriate MCP server (database-server) +2. Executes tool +3. Returns result + +Gateway response: + +[source,json] +---- +{ + "result": [ + {"id": 1, "name": "Alice", "tier": "premium"}, + {"id": 2, "name": "Bob", "tier": "premium"}, + ... + ] +} +---- + + +Agent receives result and can continue reasoning. + +== Deferred tool loading + +=== How it works + +Traditional MCP (No deferred loading): + +1. Agent connects to MCP endpoint +2. Gateway sends ALL tools from ALL MCP servers (50+ tools) +3. Agent includes ALL tool definitions in EVERY LLM request +4. High token cost: ~5,000-10,000 tokens per request + +Deferred loading (AI Gateway): + +1. Agent connects to MCP endpoint with `rp-aigw-mcp-deferred: true` header +2. Gateway sends only 2 tools: `search_tools` + `orchestrator` +3. Agent includes only 2 tool definitions in LLM request (~500-1,000 tokens) +4. When agent needs specific tool: + * Agent calls `search_tools` with query (e.g., "database") + * Gateway returns matching tools + * Agent calls specific tool (e.g., `execute_sql`) +5. Total token cost: Initial 500-1,000 + per-query ~200-500 + * Often 80-90% lower than loading all tools + +=== When to use deferred loading + +Use deferred loading when: + +* You have 10+ tools across multiple MCP servers +* Agents don't need all tools for every request +* Token costs are a concern +* Agents can handle multi-step workflows (search → execute) + +Don't use deferred loading when: + +* You have <5 tools total (overhead not worth it) +* Agents need all tools for every request (rare) +* Latency is more important than token costs (deferred adds 1 round trip) + +=== Configure deferred loading + +Deferred loading is configured per MCP server through the *Defer Loading Override* setting in the Create MCP Server dialog. + +. Navigate to your gateway's *MCP* tab. +. Create or edit an MCP server. +. Under *Server Settings*, set *Defer Loading Override*: ++ +[cols="1,2"] +|=== +|Option |Description + +|Inherit from gateway +|Use the gateway-level deferred loading setting (default) + +|Enabled +|Always defer loading from this server. Agents receive only a search tool initially and query for specific tools when needed. 
+ +|Disabled +|Always load all tools from this server upfront. +|=== + +. Click *Save*. + + +=== Measure token savings + +Compare token usage before/after deferred loading: + +1. Check logs without deferred loading: + + * Filter: Gateway = your-gateway, Model = your-model, Date = before enabling + * Note the average tokens per request + +2. Enable deferred loading + +3. Check logs after deferred loading: + + * Filter: Same gateway/model, Date = after enabling + * Note the average tokens per request + +4. Calculate savings: ++ +[source,text] +---- +Savings % = ((Before - After) / Before) × 100 +---- + +Expected Results: Typically 80-90% reduction in average tokens per request + +== Orchestrator: multi-step workflows + +=== What is the orchestrator? + +The *orchestrator* is a special tool that executes JavaScript workflows, reducing multi-step interactions from multiple round trips to a single request. + +Without Orchestrator: + +1. Agent: "Search vector database for relevant docs" → Round trip 1 +2. Agent receives results, evaluates: "Results insufficient" +3. Agent: "Fallback to web search" → Round trip 2 +4. Agent receives results, processes → Round trip 3 +5. *Total: 3 round trips* (high latency, 3× token cost) + +With Orchestrator: + +1. Agent: "Execute workflow: Search vector DB → if insufficient, fallback to web search" +2. Gateway executes entire workflow in JavaScript +3. Agent receives final result → *1 round trip* + +Benefits: + +* *Latency Reduction*: 1 round trip vs 3+ +* *Token Reduction*: No intermediate LLM calls needed +* *Reliability*: Workflow logic executes deterministically +* *Cost*: Single LLM call instead of multiple + +=== When to use orchestrator + +Use orchestrator when: + +* Multi-step workflows with conditional logic (if/else) +* Fallback patterns (try A, if fails, try B) +* Sequential tool calls with dependencies +* Loop-based operations (iterate, aggregate) + +Don't use orchestrator when: + +* Single tool call (no benefit) +* Agent needs to reason between steps (orchestrator is deterministic) +* Workflow requires LLM judgment at each step + +=== Orchestrator example: search with fallback + +Scenario: Search vector database; if results insufficient, fallback to web search. 
+ +Without Orchestrator (3 round trips): + +[source,python] +---- +# Agent's internal reasoning (3 separate LLM calls) + +# Round trip 1: Search vector DB +vector_results = call_tool("vector_search", {"query": "Redpanda pricing"}) + +# Round trip 2: Agent evaluates results +if len(vector_results) < 3: + # Round trip 3: Fallback to web search + web_results = call_tool("web_search", {"query": "Redpanda pricing"}) + results = web_results +else: + results = vector_results + +# Agent processes final results +---- + + +With Orchestrator (1 round trip): + +[source,python] +---- +# Agent invokes orchestrator once +results = call_tool("orchestrator", { + "workflow": """ + // JavaScript workflow + const vectorResults = await tools.vector_search({ + query: context.query + }); + + if (vectorResults.length < 3) { + // Fallback to web search + const webResults = await tools.web_search({ + query: context.query + }); + return webResults; + } + + return vectorResults; + """, + "context": { + "query": "Redpanda pricing" + } +}) + +# Agent receives final results directly +---- + + +Savings: + +* Latency: ~3-5 seconds (3 round trips) → ~1-2 seconds (1 round trip) +* Tokens: ~1,500 tokens (3 LLM calls) → ~500 tokens (1 LLM call) +* Cost: ~$0.0075 → ~$0.0025 (67% reduction) + +=== Orchestrator API + +// PLACEHOLDER: Confirm orchestrator API details + +Tool name: `orchestrator` + +Input schema: + +[source,json] +---- +{ + "workflow": "string (JavaScript code)", + "context": "object (variables available to workflow)" +} +---- + + +Available in workflow: + +* `tools.{tool_name}(params)`: Call any tool from approved MCP servers +* `context.{variable}`: Access context variables +* Standard JavaScript: `if`, `for`, `while`, `try/catch`, `async/await` + +Security: + +* Sandboxed execution (no file system, network, or system access) +* Timeout and memory limits are system-managed and cannot be modified + +Limitations: + +* Cannot call external APIs directly (must use MCP tools) +* Cannot import npm packages (built-in JS only) + +=== Orchestrator example: data aggregation + +Scenario: Fetch user data from database, calculate summary statistics. 
+ +[source,python] +---- +results = call_tool("orchestrator", { + "workflow": """ + // Fetch all premium users + const users = await tools.execute_sql({ + query: "SELECT * FROM users WHERE tier = 'premium'", + database: "prod" + }); + + // Calculate statistics + const stats = { + total: users.length, + by_region: {}, + avg_spend: 0 + }; + + let totalSpend = 0; + for (const user of users) { + // Count by region + if (!stats.by_region[user.region]) { + stats.by_region[user.region] = 0; + } + stats.by_region[user.region]++; + + // Sum spend + totalSpend += user.monthly_spend; + } + + stats.avg_spend = totalSpend / users.length; + + return stats; + """, + "context": {} +}) +---- + + +Output: + +[source,json] +---- +{ + "total": 1250, + "by_region": { + "us-east": 600, + "us-west": 400, + "eu": 250 + }, + "avg_spend": 149.50 +} +---- + + +vs Without Orchestrator: + +* Would require fetching all users to agent → agent processes → 2 round trips +* Orchestrator: All processing in gateway → 1 round trip + +=== Orchestrator best practices + +DO: + +* Use for deterministic workflows (same input → same output) +* Use for sequential operations with dependencies +* Use for fallback patterns +* Handle errors with `try/catch` +* Keep workflows readable (add comments) + +DON'T: + +* Use for workflows requiring LLM reasoning at each step (let agent handle that) +* Execute long-running operations (timeout will hit) +* Access external resources (use MCP tools instead) +* Execute untrusted user input (security risk) + +== MCP server administration + +=== Add MCP servers + +Prerequisites: + +* MCP server URL +* Authentication method (if required) +* List of tools to enable + +Steps: + +1. Navigate to MCP servers: + + * In the sidebar, navigate to *Agentic AI > Gateways*, select your gateway, then select the *MCP* tab. + +2. Configure server: ++ +[source,yaml] +---- +# PLACEHOLDER: Actual configuration format +name: database-server +url: https://mcp-database.example.com +authentication: + type: bearer_token + token: ${SECRET_REF} # Reference to secret +enabled_tools: + * execute_sql + * list_tables + * describe_table +---- + +3. Test connection: + + * Gateway attempts connection to MCP server + * Verifies authentication + * Retrieves tool list + +4. Enable server: + + * Server status: Active + * Tools available to agents + +Common MCP servers: + +* Database: PostgreSQL, MySQL, MongoDB query tools +* Filesystem: Read/write/search files +* API Integrations: Slack, GitHub, Salesforce, Stripe +* Search: Web search, vector search, enterprise search +* Code Execution: Python, JavaScript sandboxes +* Workflow: Zapier, n8n integrations + +=== MCP server approval workflow + +Why approval is required: + +* Security: Prevent agents from accessing unauthorized systems +* Governance: Control which tools are available +* Cost: Some tools are expensive (API calls, compute) +* Compliance: Audit trail of approved tools + +Typical approval process: + +1. Request: User/team requests MCP server +2. Review: Admin reviews security, cost, necessity +3. Approval/Rejection: Admin decision +4. Configuration: If approved, admin adds server to gateway + +NOTE: The exact approval workflow may vary by organization. In some cases, admins may directly enable servers without a formal workflow. 
+ +Rejected server behavior: + +* Server not listed in tool discovery +* Agent cannot query or invoke tools from this server +* Requests return `403 Forbidden` + +=== Restrict MCP server access + +Per-gateway restrictions: + +[source,yaml] +---- +# PLACEHOLDER: Actual configuration format +gateways: + - name: production-gateway + mcp_servers: + allowed: + - database-server # Only this server allowed + denied: + - filesystem-server # Explicitly denied + + - name: staging-gateway + mcp_servers: + allowed: + - "*" # All approved servers allowed +---- + + +Use cases: + +* Production gateway: Only production-safe tools +* Staging gateway: All tools for testing +* Customer-specific gateway: Only tools relevant to customer + +=== MCP server versioning + +Challenge: MCP server updates may change tool schemas. + +Best practices for version management: + +1. Pin versions (if supported): ++ +[source,yaml] +---- +mcp_servers: + * name: database-server + version: "1.2.3" # Pin to specific version +---- + +2. Test in staging first: + + * Update MCP server in staging gateway + * Test agent workflows + * Promote to production when validated + +3. Monitor breaking changes: + + * Subscribe to MCP server changelogs + * Set up alerts for schema changes + +== MCP observability + +=== Logs + +MCP tool invocations appear in request logs with: + +* Tool name +* MCP server +* Input parameters +* Output result +* Execution time +* Errors (if any) + +Filter logs by MCP: + +[source,text] +---- +Filter: request.path.startsWith("/mcp") +---- + + +Common log fields: + +[cols="1,2,2"] +|=== +| Field | Description | Example + +| Tool +| Tool invoked +| `execute_sql` + +| MCP Server +| Which server handled it +| `database-server` + +| Input +| Parameters sent +| `{"query": "SELECT ..."}` + +| Output +| Result returned +| `[{"id": 1, ...}]` + +| Latency +| Tool execution time +| `250ms` + +| Status +| Success/failure +| `200`, `500` +|=== + +=== Metrics + +The following MCP-specific metrics may be available depending on your gateway configuration: + +* MCP requests per second +* Tool invocation count (by tool, by MCP server) +* MCP latency (p50, p95, p99) +* MCP error rate (by server, by tool) +* Orchestrator execution count +* Orchestrator execution time + +Dashboard: MCP Analytics + +* Top tools by usage +* Top MCP servers by latency +* Error rate by MCP server +* Token savings from deferred loading + +=== Debug MCP issues + +Issue: "Tool not found" + +Possible causes: + +1. MCP server not added to gateway +2. Tool not enabled in MCP server configuration +3. Deferred loading enabled but agent didn't query for tool first + +Solution: + +1. Verify MCP server is active in the Cloud console +2. Verify tool is in enabled_tools list +3. If deferred loading: Agent must call `search_tools` first + +Issue: "MCP server timeout" + +Possible causes: + +1. MCP server is down/unreachable +2. Tool execution is slow (e.g., expensive database query) +3. Gateway timeout too short + +Solution: + +1. Check MCP server health +2. Optimize tool (e.g., add database index) +3. Contact support if you need to adjust timeout limits + +Issue: "Orchestrator workflow failed" + +Possible causes: + +1. JavaScript syntax error +2. Tool invocation failed inside workflow +3. Timeout exceeded +4. Memory limit exceeded + +Solution: + +1. Test workflow syntax in JavaScript playground +2. Check logs for tool error inside orchestrator +3. Simplify workflow or increase timeout +4. 
Reduce data processing in workflow + +== Security considerations + +//// +=== Tool execution sandboxing + +// PLACEHOLDER: Confirm sandboxing implementation + +Orchestrator sandbox: + +* No file system access +* No network access (except via MCP tools) +* No system calls +* Memory limit: // PLACEHOLDER: e.g., 128MB +* Execution timeout: // PLACEHOLDER: e.g., 30s + +MCP tool execution: + +* Tools execute in MCP server's environment (not gateway) +* Gateway does not execute tool code (only proxies requests) +* Security is MCP server's responsibility +//// + +=== Authentication + +Gateway → MCP server: + +* Bearer token (most common) +* API key +* mTLS (for high-security environments) + +Agent → Gateway: + +* Standard gateway authentication (Redpanda Cloud token) +* Gateway endpoint URL identifies the gateway (and its approved MCP servers) + +=== Audit trail + +All MCP operations logged: + +* Who (agent/user) invoked tool +* When (timestamp) +* What tool was invoked +* What parameters were sent +* What result was returned +* Whether it succeeded or failed + +Use case: Compliance, security investigation, debugging + +=== Restrict dangerous tools + +Recommendation: Don't enable destructive tools in production gateways + +Examples of dangerous tools*: + +* File deletion (`delete_file`) +* Database writes without safeguards (`execute_sql` with UPDATE/DELETE) +* Payment operations (`charge_customer`) +* System commands (`execute_bash`) + +Best practice: + +* Read-only tools in production gateway +* Write tools only in staging gateway (with approval workflows) +* Wrap dangerous operations in MCP server with safeguards (e.g., "require confirmation token") + +== MCP + LLM routing + +=== Combine MCP with CEL routing + +Use case: Route agents to different MCP servers based on customer tier + +CEL expression: + +[source,cel] +---- +request.headers["x-customer-tier"] == "enterprise" + ? 
"gateway-with-premium-mcp-servers" + : "gateway-with-basic-mcp-servers" +---- + + +Result: + +* Enterprise customers: Access to proprietary data, expensive APIs +* Basic customers: Access to public data, free APIs + +=== MCP with provider pools + +Scenario: Different agents use different models + different tools + +Configuration: + +* Gateway A: GPT-5.2 + database + CRM MCP servers +* Gateway B: Claude Sonnet + web search + analytics MCP servers + +Use case: Optimize model-tool pairing (some models better at certain tools) + +== Integration examples + +[tabs] +==== +Python (OpenAI SDK):: ++ +-- +[source,python] +---- +from openai import OpenAI + +# Initialize client with MCP endpoint +client = OpenAI( + base_url=os.getenv("GATEWAY_ENDPOINT"), + api_key=os.getenv("REDPANDA_CLOUD_TOKEN"), + default_headers={ + "rp-aigw-mcp-deferred": "true" # Enable deferred loading + } +) + +# Discover tools +tools_response = requests.get( + f"{os.getenv('GATEWAY_ENDPOINT')}/mcp/tools", + headers={ + "Authorization": f"Bearer {os.getenv('REDPANDA_CLOUD_TOKEN')}", + "rp-aigw-mcp-deferred": "true" + } +) +tools = tools_response.json()["tools"] + +# Agent uses tools +response = client.chat.completions.create( + model="anthropic/claude-sonnet-4.5", + messages=[ + {"role": "user", "content": "Query the database for premium users"} + ], + tools=tools, # Pass MCP tools to agent + tool_choice="auto" +) + +# Handle tool calls +if response.choices[0].message.tool_calls: + for tool_call in response.choices[0].message.tool_calls: + # Execute tool via gateway + tool_result = requests.post( + f"{os.getenv('GATEWAY_ENDPOINT')}/mcp/tools/{tool_call.function.name}", + headers={ + "Authorization": f"Bearer {os.getenv('REDPANDA_CLOUD_TOKEN')}", + }, + json=json.loads(tool_call.function.arguments) + ) + + # Continue conversation with tool result + response = client.chat.completions.create( + model="anthropic/claude-sonnet-4.5", + messages=[ + {"role": "user", "content": "Query the database for premium users"}, + response.choices[0].message, + { + "role": "tool", + "tool_call_id": tool_call.id, + "content": json.dumps(tool_result.json()) + } + ] + ) +---- +-- + +Claude Code CLI:: ++ +-- +[source,bash] +---- +# Configure gateway with MCP +export CLAUDE_API_BASE="https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/v1" +export ANTHROPIC_API_KEY="your-redpanda-token" + +# Claude Code automatically discovers MCP tools from gateway +claude code + +# Agent can now use aggregated MCP tools +---- +-- + +LangChain:: ++ +-- +[source,python] +---- +from langchain_openai import ChatOpenAI +from langchain.agents import initialize_agent, Tool + +# Initialize LLM with gateway +llm = ChatOpenAI( + base_url=os.getenv("GATEWAY_ENDPOINT"), + api_key=os.getenv("REDPANDA_CLOUD_TOKEN"), +) + +# Fetch MCP tools from gateway +# PLACEHOLDER: LangChain-specific integration code + +# Create agent with MCP tools +agent = initialize_agent( + tools=mcp_tools, + llm=llm, + agent="openai-tools", + verbose=True +) + +# Agent can now use MCP tools +response = agent.run("Find all premium users in the database") +---- +-- +==== diff --git a/modules/ai-agents/pages/ai-gateway/what-is-ai-gateway.adoc b/modules/ai-agents/pages/ai-gateway/what-is-ai-gateway.adoc new file mode 100644 index 000000000..5f001a435 --- /dev/null +++ b/modules/ai-agents/pages/ai-gateway/what-is-ai-gateway.adoc @@ -0,0 +1,204 @@ += What is an AI Gateway? 
+:description: Understand how AI Gateway keeps AI-powered apps highly available across providers and prevents runaway AI spend with centralized cost governance. +:page-topic-type: concept +:personas: app_developer, platform_admin +:learning-objective-1: Explain how AI Gateway keeps AI-powered apps highly available through governed provider failover +:learning-objective-2: Describe how AI Gateway prevents runaway AI spend with centralized budget controls and tenancy-based governance +:learning-objective-3: Identify when AI Gateway fits your use case based on availability requirements, cost governance needs, and multi-provider or MCP tool usage + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] + +Redpanda AI Gateway keeps your AI-powered applications highly available and your AI spend under control. It sits between your applications and the LLM providers and AI tools they depend on, providing automatic provider failover so your apps stay up even when a provider goes down, and centralized budget controls so costs never run away. For platform teams, it adds governance at the model-fallback level, tenancy modeling for teams, individuals, apps, and service accounts, and a single proxy layer for both LLM models and MCP tool servers. + +== The problem + +Modern AI applications face two business-critical challenges: staying up and staying on budget. + +First, applications typically hardcode provider-specific SDKs. An application using OpenAI's SDK cannot easily switch to Anthropic or Google without code changes and redeployment. When a provider hits rate limits, suffers an outage, or degrades in performance, your application goes down with it. Your end users don't care which provider you use; they care that the app works. + +Second, costs can spiral without centralized controls. Without a single view of token consumption across teams and applications, it's difficult to attribute costs to specific customers, features, or environments. Testing and debugging can generate unexpected bills, and there's no way to enforce budgets or rate limits per team, application, or service account. The result: runaway spend that finance discovers only after the fact. + +These two challenges are compounded by fragmented observability across provider dashboards, which makes it harder to detect availability issues or cost anomalies in time to act. And as organizations adopt glossterm:AI agent[,AI agents] that call glossterm:MCP tool[,MCP tools], the lack of centralized tool governance adds another dimension of uncontrolled cost and risk. + +== What AI Gateway solves + +Redpanda AI Gateway delivers two core business outcomes, high availability and cost governance, backed by platform-level controls that set it apart from simple proxy layers: + +=== High availability through governed failover + +Your end users don't care whether you use OpenAI, Anthropic, or Google; they care that your app stays up. AI Gateway lets you configure provider pools with automatic failover so that when your primary provider hits rate limits, times out, or returns errors, the gateway routes requests to a fallback provider with no code changes and no downtime for your users. + +Unlike simple retry logic, AI Gateway provides governance at the failover level: you define which providers fail over to which, under what conditions, and with what priority. This controlled failover can significantly improve uptime even during extended provider outages. 
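
To make the trade-off concrete, the following minimal sketch shows the kind of client-side failover you would otherwise write and maintain yourself against the provider SDKs directly. The error types and model names here are illustrative; with AI Gateway, this branching, plus the governance around it, moves out of application code entirely.

[source,python]
----
# Illustrative only: manual failover across two provider SDKs.
# With AI Gateway, this logic lives in the gateway's failover policy instead.
import anthropic
import openai


def ask(prompt: str) -> str:
    try:
        # Primary provider: OpenAI
        client = openai.OpenAI()
        response = client.chat.completions.create(
            model="gpt-5.2",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content
    except (openai.RateLimitError, openai.APIConnectionError, openai.APIStatusError):
        # Fallback provider: Anthropic (different SDK, different response shape)
        client = anthropic.Anthropic()
        message = client.messages.create(
            model="claude-sonnet-4.5",
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return message.content[0].text
----

Every application that talks to providers directly needs some variant of this. A gateway-level failover policy defines it once, with conditions and priorities that administrators control.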
+ +=== Cost governance and budget controls + +AI Gateway gives you centralized fiscal control over AI spend. Set monthly budget caps per gateway, enforce them automatically, and set rate limits per team, environment, or application. No more runaway costs discovered after the fact. + +You can route requests to different models based on user attributes. For example, to direct premium users to a more capable model while routing free tier users to a cost-effective option, use a CEL expression: + +[source,cel] +---- +// Route premium users to best model, free users to cost-effective model +request.headers["x-user-tier"] == "premium" + ? "anthropic/claude-opus-4.6" + : "anthropic/claude-sonnet-4.5" +---- + +You can also set different rate limits and spend limits per environment to prevent staging or development traffic from consuming production budgets. + +=== Tenancy and access governance + +AI Gateway provides multi-tenant isolation by design. Create separate gateways for teams, individual developers, applications, or service accounts, each with their own budgets, rate limits, routing policies, and observability scope. This tenancy model lets platform teams govern who uses what, how much they spend, and which models and tools they can access, without building custom authorization layers. + +=== Unified LLM access (single endpoint for all providers) + +AI Gateway provides a single OpenAI-compatible endpoint that routes requests to multiple LLM providers. Instead of integrating with each provider's SDK separately, you configure your application once and switch providers by changing only the model parameter. + +Without AI Gateway, you need different SDKs and patterns for each provider: + +[source,python] +---- +# OpenAI +from openai import OpenAI +client = OpenAI(api_key="sk-...") +response = client.chat.completions.create( + model="gpt-5.2", + messages=[{"role": "user", "content": "Hello"}] +) + +# Anthropic (different SDK, different patterns) +from anthropic import Anthropic +client = Anthropic(api_key="sk-ant-...") +response = client.messages.create( + model="claude-sonnet-4.5", + max_tokens=1024, + messages=[{"role": "user", "content": "Hello"}] +) +---- + +With AI Gateway, you use the OpenAI SDK for all providers: + +[source,python] +---- +from openai import OpenAI + +# Single configuration, multiple providers +client = OpenAI( + base_url="", + api_key="your-redpanda-token", +) + +# Route to OpenAI +response = client.chat.completions.create( + model="openai/gpt-5.2", + messages=[{"role": "user", "content": "Hello"}] +) + +# Route to Anthropic (same code, different model string) +response = client.chat.completions.create( + model="anthropic/claude-sonnet-4.5", + messages=[{"role": "user", "content": "Hello"}] +) + +# Route to Google Gemini (same code, different model string) +response = client.chat.completions.create( + model="google/gemini-2.0-flash", + messages=[{"role": "user", "content": "Hello"}] +) +---- + +To switch providers, you change only the `model` parameter from `openai/gpt-5.2` to `anthropic/claude-sonnet-4.5`. No code changes or redeployment needed. + +=== Proxy for LLM models and MCP tool servers + +AI Gateway acts as a single proxy layer for both LLM model requests and MCP tool servers. For LLM traffic, it provides the unified endpoint described above. For AI agents that use MCP tools, it aggregates multiple MCP servers and provides deferred tool loading, which dramatically reduces token costs. 
+
Without AI Gateway, agents typically load all available glossterm:MCP tool[,tools] from multiple MCP servers at startup. This approach sends 50+ tool definitions with every request, creating high token costs (thousands of tokens per request), slow agent startup times, and no centralized governance over which tools agents can access.

With AI Gateway, you configure approved MCP servers once, and the gateway loads only search and orchestrator tools initially. Agents query for specific tools only when needed, which often reduces token usage by 80-90% depending on your configuration and the number of tools aggregated. You also gain centralized approval and governance over which MCP servers your agents can access.

For complex workflows, AI Gateway provides a JavaScript-based orchestrator tool that reduces multi-step workflows from multiple round trips to a single call. For example, you can create a workflow that searches a vector database and, if the results are insufficient, falls back to web search, all in one orchestration step.

=== Unified observability and cost tracking

AI Gateway provides a single dashboard that tracks all LLM traffic across providers, eliminating the need to switch between multiple provider dashboards.

The dashboard tracks request volume per gateway, model, and provider, along with token usage for both prompt and completion tokens. You can view estimated spend per model with cross-provider comparisons, latency metrics (p50, p95, p99), and errors broken down by type, provider, and model.

This unified view helps you answer critical questions such as which model is the most cost-effective for your use case, why a specific user request failed, how much your staging environment costs per week, and what the latency difference is between providers for your workload.

ifdef::ai-hub-available[]
== Gateway modes

AI Gateway supports two modes to accommodate different organizational needs:

*AI Hub Mode* provides zero-configuration access with pre-configured backend pools and intelligent routing. Platform admins simply add provider credentials (OpenAI, Anthropic, Google Gemini), and all teams immediately benefit from 17 routing rules and 6 backend pools. Users can toggle preferences like vision routing or long-context routing, but the underlying architecture is managed by Redpanda. This mode eliminates the complexity of LLM gateway configuration: IT adds API keys once, and all teams benefit immediately.

*Custom Mode* provides full control over routing rules, backend pools, rate limits, and policies. Admins configure every aspect of the gateway to meet specific requirements. This mode is ideal when you need custom routing logic based on business rules, specific failover behavior, or integration with custom infrastructure like Azure OpenAI or AWS Bedrock.

To understand which mode fits your use case, see xref:ai-gateway/gateway-modes.adoc[].
endif::[]

== Common gateway patterns

=== Team isolation

When multiple teams share infrastructure but need separate budgets and policies, create one gateway per team. For example, you might configure Team A's gateway with a $5K/month budget for both staging and production environments, while Team B's gateway has a $10K/month budget with different rate limits. Each team sees only their own traffic in the observability dashboards, providing clear cost attribution and isolation.

=== Environment separation

To prevent staging traffic from affecting production metrics, create separate gateways for each environment.
Configure the staging gateway with lower rate limits, restricted model access, and aggressive cost controls to prevent runaway expenses. The production gateway can have higher rate limits, access to all models, and alerting configured to detect anomalies. + +=== Primary and fallback for reliability + +To ensure uptime during provider outages, configure provider pools with automatic failover. For example, you can set OpenAI as your primary provider (preferred for quality) and configure Anthropic as the fallback that activates when the gateway detects rate limits or timeouts from OpenAI. Monitor the fallback rate to detect primary provider issues early, before they impact your users. + +=== A/B testing models + +To compare model quality and cost without dual integration, route a percentage of traffic to different models. For example, you can send 80% of traffic to `claude-sonnet-4.5` and 20% to `claude-opus-4.6`, then compare quality metrics and costs in the observability dashboard before adjusting the split. + +=== Customer-based routing + +For SaaS products with tiered pricing (free, pro, enterprise), use CEL routing based on request headers to match users with appropriate models: + +[source,cel] +---- +request.headers["x-customer-tier"] == "enterprise" ? "anthropic/claude-opus-4.6" : +request.headers["x-customer-tier"] == "pro" ? "anthropic/claude-sonnet-4.5" : +"anthropic/claude-haiku" +---- + +== When to use AI Gateway + +AI Gateway is ideal for organizations that: + +* Use or plan to use multiple LLM providers +* Need centralized cost tracking and budgeting +* Want to experiment with different models without code changes +* Require high availability during provider outages +* Have multiple teams or customers using AI services +* Build AI agents that need MCP tool aggregation +* Need unified observability across all AI traffic + +AI Gateway may not be necessary if: + +* You only use a single provider with simple requirements +* You have minimal AI traffic (< 1000 requests/day) +* You don't need cost tracking or policy enforcement +* Your application doesn't require provider switching + +== Next steps + +Now that you understand what AI Gateway is and how it can benefit your organization: + +* xref:ai-gateway/gateway-quickstart.adoc[Gateway Quickstart] - Get started quickly with a basic gateway setup + +*For Administrators:* + +* xref:ai-gateway/admin/setup-guide.adoc[Setup Guide] - Enable providers, models, and create gateways +* xref:ai-gateway/gateway-architecture.adoc[Architecture Deep Dive] - Technical architecture details + +*For Builders:* + +* xref:ai-gateway/builders/discover-gateways.adoc[Discover Available Gateways] - Find which gateways you can access +* xref:ai-gateway/builders/connect-your-agent.adoc[Connect Your Agent] - Integrate your application diff --git a/modules/ai-agents/pages/index.adoc b/modules/ai-agents/pages/index.adoc index 9ac867a96..3ee31f5da 100644 --- a/modules/ai-agents/pages/index.adoc +++ b/modules/ai-agents/pages/index.adoc @@ -1,8 +1,4 @@ -= AI Agents in Redpanda Cloud -:description: Learn about AI agents and the tools Redpanda Cloud provides for building them. += Agentic AI +:description: Learn about the Redpanda Agentic Data Plane. Keep AI-powered apps highly available, control costs across providers, and govern access for teams, apps, and service accounts. 
:page-layout: index :page-aliases: develop:agents/about.adoc, develop:ai-agents/about.adoc - -AI agents are configurable assistants that autonomously perform specialist tasks by leveraging large language models (LLMs) and connecting to external data sources and tools. - -Redpanda Cloud provides two complementary Model Context Protocol (MCP) options to help you build AI agents. diff --git a/modules/ai-agents/pages/mcp/index.adoc b/modules/ai-agents/pages/mcp/index.adoc index ce382e290..b5903718e 100644 --- a/modules/ai-agents/pages/mcp/index.adoc +++ b/modules/ai-agents/pages/mcp/index.adoc @@ -1,8 +1,8 @@ = Model Context Protocol (MCP) -:description: Learn about the Model Context Protocol (MCP) in Redpanda Cloud. +:description: Give AI agents direct access to your databases, queues, CRMs, and other business systems without writing custom glue code. :page-layout: index -The Model Context Protocol (MCP) provides a standardized way for AI agents to connect with external data sources and tools in Redpanda Cloud. +AI agents need context from your business systems. The Model Context Protocol (MCP) translates agent intent into real connections to databases, queues, CRMs, HRIS, and other systems of record, without you writing custom integration code. Redpanda's MCP servers are built on the same proven connectors that power the world's largest e-commerce, electric vehicle, energy, and AI companies. Redpanda Cloud offers two complementary MCP options: diff --git a/modules/ai-agents/pages/mcp/local/index.adoc b/modules/ai-agents/pages/mcp/local/index.adoc index 7cb1cc55d..109411dbd 100644 --- a/modules/ai-agents/pages/mcp/local/index.adoc +++ b/modules/ai-agents/pages/mcp/local/index.adoc @@ -1,4 +1,4 @@ = Redpanda Cloud Management MCP Server :page-beta: true -:description: Find links to information about the Redpanda Cloud Management MCP Server and its features for building and managing AI agents that can interact with your Redpanda Cloud account and clusters. +:description: Manage your Redpanda Cloud clusters, topics, and users through AI agents using natural language commands. :page-layout: index diff --git a/modules/ai-agents/pages/mcp/local/overview.adoc b/modules/ai-agents/pages/mcp/local/overview.adoc index 01bfd6227..1ca5d69b6 100644 --- a/modules/ai-agents/pages/mcp/local/overview.adoc +++ b/modules/ai-agents/pages/mcp/local/overview.adoc @@ -1,6 +1,6 @@ = Redpanda Cloud Management MCP Server :page-beta: true -:description: Learn about the Redpanda Cloud Management MCP Server, which lets AI agents securely access and operate your Redpanda Cloud account and clusters. +:description: Let AI agents securely operate your Redpanda Cloud clusters, topics, and users through natural language commands. :page-topic-type: overview :personas: evaluator, ai_agent_developer, platform_admin // Reader journey: "I'm new" diff --git a/modules/ai-agents/pages/mcp/overview.adoc b/modules/ai-agents/pages/mcp/overview.adoc index 5b452c357..abb00c9b5 100644 --- a/modules/ai-agents/pages/mcp/overview.adoc +++ b/modules/ai-agents/pages/mcp/overview.adoc @@ -1,5 +1,5 @@ = MCP Servers for Redpanda Cloud Overview -:description: Learn about Model Context Protocol (MCP) in Redpanda Cloud, including the two complementary options: the Redpanda Cloud Management MCP Server and Remote MCP. +:description: Connect AI agents to your databases, queues, CRMs, and other business systems without writing glue code, using Redpanda's proven connectors. 
:page-topic-type: overview :personas: evaluator, ai_agent_developer // Reader journey: "I'm new" - understanding the landscape @@ -18,9 +18,9 @@ After reading this page, you will be able to: == What is MCP? -MCP (Model Context Protocol) is an open standard that lets AI agents use tools. Think of it like a universal adapter: instead of building custom integrations for every AI system, you define your tools once using MCP, and any MCP-compatible AI client can discover and use them. +MCP (Model Context Protocol) is an open standard that translates AI agent intent into real connections to databases, queues, CRMs, HRIS, accounting software, and other business systems. Instead of writing custom glue code for every integration, you define your tools once using MCP, and any MCP-compatible AI client can discover and use them. -Without MCP, connecting AI to your business systems requires custom API code, authentication handling, and response formatting for each AI platform. With MCP, you describe what a tool does and what inputs it needs, and the protocol handles the rest. +Without MCP, connecting AI to your business systems requires custom API code, authentication handling, and response formatting for each AI platform. With MCP, you describe what a tool does and what inputs it needs, and the protocol handles the rest. Redpanda's MCP servers are built on the same proven connectors that power the world's largest e-commerce, electric vehicle, energy, and AI companies today. == MCP options in Redpanda Cloud diff --git a/modules/ai-agents/pages/mcp/remote/concepts.adoc b/modules/ai-agents/pages/mcp/remote/concepts.adoc index 16e78912c..808db253c 100644 --- a/modules/ai-agents/pages/mcp/remote/concepts.adoc +++ b/modules/ai-agents/pages/mcp/remote/concepts.adoc @@ -1,5 +1,5 @@ = MCP Tool Execution and Components -:description: Understand the MCP execution model, choose the right component type, and use traces for observability. +:description: Understand how MCP tools execute requests, choose the right Redpanda Connect component type, and use traces for observability. :page-aliases: ai-agents:mcp/remote/understanding-mcp-tools.adoc :page-topic-type: concepts :personas: ai_agent_developer, streaming_developer @@ -64,7 +64,8 @@ The `redpanda.otel_traces` topic has a predefined retention policy. Configuratio The topic persists in your cluster even after all MCP servers are deleted, allowing you to retain historical trace data for analysis. -Trace data may contain sensitive information from your tool inputs and outputs. Consider implementing appropriate glossterm:ACL[,access control lists (ACLs)] for the `redpanda.otel_traces` topic, and review the data in traces before sharing or exporting to external systems. +Trace data may contain sensitive information from your +tool inputs and outputs. Consider implementing appropriate glossterm:ACL[,access control lists (ACLs)] for the `redpanda.otel_traces` topic, and review the data in traces before sharing or exporting to external systems. === Understand the trace structure diff --git a/modules/ai-agents/pages/mcp/remote/index.adoc b/modules/ai-agents/pages/mcp/remote/index.adoc index 1c77473f2..2233299c5 100644 --- a/modules/ai-agents/pages/mcp/remote/index.adoc +++ b/modules/ai-agents/pages/mcp/remote/index.adoc @@ -1,3 +1,3 @@ = Remote MCP Servers for Redpanda Cloud -:description: Enable AI agents to directly interact with your Redpanda Cloud clusters and streaming data. 
+:description: Build MCP tools that connect AI agents to databases, queues, CRMs, and other business systems using Redpanda's proven connectors. :page-layout: index diff --git a/modules/ai-agents/pages/mcp/remote/overview.adoc b/modules/ai-agents/pages/mcp/remote/overview.adoc index bc3d11845..8a1780dd8 100644 --- a/modules/ai-agents/pages/mcp/remote/overview.adoc +++ b/modules/ai-agents/pages/mcp/remote/overview.adoc @@ -1,5 +1,5 @@ = Remote MCP Server Overview -:description: Discover how AI agents can interact with your streaming data and how to connect them to Redpanda Cloud. +:description: Build and host MCP tools that connect AI agents to your business systems without writing glue code, using Redpanda's proven connectors. :page-topic-type: overview :personas: evaluator, ai_agent_developer // Reader journey: "I'm evaluating this" @@ -8,7 +8,7 @@ :learning-objective-2: Identify use cases where Remote MCP provides business value :learning-objective-3: Describe how MCP tools expose Redpanda Connect components to AI -This page introduces Remote MCP servers and helps you decide if they're right for your use case. +Remote MCP lets you give AI agents access to your databases, queues, CRMs, and other systems of record without writing custom integration code. This page introduces Remote MCP servers and helps you decide if they're right for your use case. After reading this page, you will be able to: diff --git a/modules/ai-agents/pages/mcp/remote/quickstart.adoc b/modules/ai-agents/pages/mcp/remote/quickstart.adoc index a778df103..4c4af410b 100644 --- a/modules/ai-agents/pages/mcp/remote/quickstart.adoc +++ b/modules/ai-agents/pages/mcp/remote/quickstart.adoc @@ -1,5 +1,5 @@ = Remote MCP Server Quickstart -:description: Learn how to extend AI agents with custom tools that interact with your Redpanda data using the Model Context Protocol (MCP). +:description: Build and deploy your first MCP tools to connect AI agents to your Redpanda data without writing custom integration code. :page-topic-type: tutorial :personas: ai_agent_developer, streaming_developer, evaluator // Reader journey: "I want to try it now" diff --git a/modules/ai-agents/pages/mcp/remote/tool-patterns.adoc b/modules/ai-agents/pages/mcp/remote/tool-patterns.adoc index 1348419f1..b3e071c45 100644 --- a/modules/ai-agents/pages/mcp/remote/tool-patterns.adoc +++ b/modules/ai-agents/pages/mcp/remote/tool-patterns.adoc @@ -137,7 +137,7 @@ See also: xref:develop:connect/components/processors/gcp_bigquery_select.adoc[`g ---- openai_chat_completion: api_key: "${secrets.OPENAI_API_KEY}" - model: "gpt-4" + model: "gpt-5.2" prompt: | Analyze this customer feedback and provide: 1. Sentiment (positive/negative/neutral) diff --git a/modules/ai-agents/partials/ai-gateway-byoc-note.adoc b/modules/ai-agents/partials/ai-gateway-byoc-note.adoc new file mode 100644 index 000000000..86fdf86a6 --- /dev/null +++ b/modules/ai-agents/partials/ai-gateway-byoc-note.adoc @@ -0,0 +1 @@ +NOTE: The Agentic Data Plane is supported on BYOC clusters running with AWS and Redpanda version 25.3 and later. diff --git a/modules/ai-agents/partials/ai-hub-mode-indicator.adoc b/modules/ai-agents/partials/ai-hub-mode-indicator.adoc new file mode 100644 index 000000000..b94581098 --- /dev/null +++ b/modules/ai-agents/partials/ai-hub-mode-indicator.adoc @@ -0,0 +1,43 @@ +[tabs] +==== +Redpanda Cloud Console:: ++ +In the Redpanda Cloud Console: ++ +. Navigate to *AI Gateway* → *Gateways*. +. Select a gateway from your list. +. 
Look for the *Mode* indicator in the gateway details: ++ +-- +* *AI Hub*: Pre-configured with intelligent routing +* *Custom*: Administrator-configured routing +-- ++ +The mode badge appears prominently in the gateway overview and helps you quickly identify which configuration approach is in use. + +Gateway API:: ++ +Query a gateway's configuration to see its mode: ++ +[,bash] +---- +curl https://api.redpanda.com/v1/gateways/${GATEWAY_ID} \ + -H "Authorization: Bearer ${REDPANDA_CLOUD_TOKEN}" +---- ++ +Response: ++ +[,json] +---- +{ + "id": "gw_abc123", + "name": "production-gateway", + "mode": "ai_hub", // or "custom" + "endpoint": "https://gw.ai.panda.com", + "workspace_id": "ws_xyz789", + "created_at": "2025-01-15T10:30:00Z" +} +---- ++ +The `mode` field indicates whether the gateway uses `ai_hub` (pre-configured) or `custom` (user-configured) mode. +==== diff --git a/modules/ai-agents/partials/ai-hub-preference-toggles.adoc b/modules/ai-agents/partials/ai-hub-preference-toggles.adoc new file mode 100644 index 000000000..fe41d7122 --- /dev/null +++ b/modules/ai-agents/partials/ai-hub-preference-toggles.adoc @@ -0,0 +1,66 @@ +AI Hub mode provides 6 user-configurable preference toggles that influence routing behavior without modifying the underlying rules. + +[cols="2,1,3,3",options="header"] +|=== +|Preference |Default |Purpose |When to Enable + +|*infer_provider_from_model_name* +|`true` +|Infer provider from model name patterns when no prefix specified +|Enabled by default. When true, `gpt-5.2` routes to OpenAI, `claude-sonnet-4.5` routes to Anthropic without requiring `openai/` or `anthropic/` prefixes. Disable to require explicit vendor prefixes for all requests. + +|*auto_route_vision* +|`false` +|Automatically route requests containing images or multimodal content to vision-capable models +|Enable when your applications use vision capabilities and you want AI Hub to automatically select vision-capable models. When disabled, you must explicitly specify a vision-capable model in your request. + +|*auto_route_long_context* +|`false` +|Automatically route requests with >100K tokens to Anthropic models +|Enable when your applications process large documents or long conversations. Anthropic models support longer context windows (up to 200K tokens). When disabled, requests use the model you specify regardless of context length. Configure `long_context_threshold_tokens` to adjust the token threshold (default: 100,000). + +|*rate_limit_resilience* +|`false` +|Automatically failover to alternate providers when receiving 429 (rate limit) errors +|Enable when you want higher availability during rate limit conditions. The gateway temporarily routes to an alternate provider when the primary provider returns 429 errors. Configure `rate_limit_cooldown_seconds` to adjust the cooldown period (default: 60 seconds). When disabled, 429 errors are returned to your application. + +|*cost_optimization* +|`"none"` +|Cost optimization strategy: `"none"`, `"prefer_cheaper"`, or `"prefer_quality"` +|Set to `"prefer_cheaper"` to route requests to cost-effective models when multiple providers can handle the request. Set to `"prefer_quality"` to prefer higher-quality models. AI Hub uses LLM-as-a-Judge to analyze prompt complexity when `"prefer_cheaper"` is enabled. Default `"none"` routes based on model specified without cost optimization. 
+ +|*fallback_provider* +|`"openai"` +|Fallback provider when no routing rule matches: `"openai"`, `"anthropic"`, or `"none"` +|Set the default provider when model inference fails and no explicit provider prefix is given. `"openai"` routes unmatched requests to OpenAI (default). `"anthropic"` routes to Anthropic. `"none"` returns 400 error instead of guessing. Most organizations use `"openai"` as the default fallback. +|=== + +=== How preferences interact with routing + +Preferences work in combination with AI Hub's immutable routing rules: + +. *Routing rules evaluate first* (model prefix, pattern matching, special routing for embeddings/images/audio) +. *Preferences influence decisions* when multiple valid options exist +. *Protected routing rules cannot be overridden* by preferences (for example, embeddings always go to OpenAI) + +For example, if you specify `model: "openai/gpt-5.2"`, the model prefix rule routes to OpenAI regardless of preference toggles. But if you specify `model: "gpt-5.2"` without a prefix and `infer_provider_from_model_name` is enabled, AI Hub routes to OpenAI based on the `gpt-*` pattern. + +=== Best practices + +* *Start with defaults*: Most organizations use default settings initially and enable preferences based on observed usage patterns +* *Test inference before disabling*: The `infer_provider_from_model_name` preference is enabled by default because it provides the best user experience. Only disable if you want to enforce explicit vendor prefixes. +* *Monitor before enabling auto-routing*: Review your request patterns in the observability dashboard before enabling `auto_route_vision` or `auto_route_long_context` +* *Test failover behavior*: Enable `rate_limit_resilience` in staging first to understand failover behavior before enabling in production +* *Cost vs quality trade-offs*: `cost_optimization` set to `"prefer_cheaper"` prioritizes cost savings, which may impact response quality for complex requests +* *Document your choices*: Record why each preference is enabled to help future administrators understand your configuration + +=== Configuring preferences + +ifdef::ai-hub-available[] +Platform administrators configure preferences when creating or updating an AI Hub gateway. For detailed configuration instructions, see xref:ai-gateway/admin/configure-ai-hub.adoc[]. +endif::[] +ifndef::ai-hub-available[] +Platform administrators configure preferences when creating or updating an AI Hub gateway. +endif::[] + +Builders cannot modify preferences directly. If you need different routing behavior, contact your administrator to adjust preference toggles or request ejection to Custom mode for full control. diff --git a/modules/ai-agents/partials/ai-hub/configure-ai-hub.adoc b/modules/ai-agents/partials/ai-hub/configure-ai-hub.adoc new file mode 100644 index 000000000..18a6cada1 --- /dev/null +++ b/modules/ai-agents/partials/ai-hub/configure-ai-hub.adoc @@ -0,0 +1,436 @@ += Configure AI Hub Gateway +:description: Create and configure zero-config AI Hub gateways with pre-configured routing and backend pools. 
+:page-topic-type: how-to +:personas: platform_admin +:learning-objective-1: Create an AI Hub gateway with one-click configuration +:learning-objective-2: Configure user preference toggles to customize routing behavior +:learning-objective-3: Manage provider credentials for OpenAI, Anthropic, and Google Gemini + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] + +AI Hub mode provides instant, pre-configured access to OpenAI, Anthropic, and Google Gemini with zero setup complexity. Platform admins add provider credentials, and all teams immediately benefit from intelligent routing. + +This guide walks administrators through creating and configuring AI Hub gateways, from initial setup to managing preferences and credentials. + +After reading this page, you will be able to: + +* [ ] Create an AI Hub gateway with one-click configuration +* [ ] Configure user preference toggles to customize routing behavior +* [ ] Manage provider credentials for OpenAI, Anthropic, and Google Gemini + +== Prerequisites + +* Access to the Redpanda Cloud Console with administrator privileges +* API keys for at least one LLM provider: +** OpenAI: API key from https://platform.openai.com/api-keys +** Anthropic: API key from https://console.anthropic.com/settings/keys +* A Redpanda Cloud workspace + +== Create an AI Hub gateway + +Creating an AI Hub gateway is significantly simpler than creating a Custom mode gateway because all routing rules and backend pools are pre-configured. + +. In the Redpanda Cloud Console, navigate to *AI Gateway* → *Gateways*. +. Click *Create Gateway*. +. Select *AI Hub* as the gateway mode. +. Configure basic settings: ++ +-- +* *Name*: Choose a descriptive name (for example, `ai-hub-production`, `team-ml-hub`) +* *Workspace*: Select the workspace this gateway belongs to +* *Description* (optional): Add context about this gateway's purpose +-- ++ +. Click *Create*. + +After creation, the gateway is immediately available with all pre-configured components: + +* 6 backend pools (OpenAI, Anthropic, and Google Gemini) +* 17 routing rules +* Intelligent automatic routing +* Default preference toggles + +Note the following information from the gateway detail page: + +* *Gateway Endpoint*: URL for API requests, with the gateway ID embedded in the path (for example, `https://example/gateways/gw_abc123/v1`) + +Share the gateway endpoint with teams who need to access this gateway. + +== Understanding pre-configured architecture + +When you create an AI Hub gateway, you get a complete, production-ready configuration without manual setup. + +=== Backend pools + +AI Hub mode automatically provisions 6 backend pools to handle different request patterns: + +*OpenAI Pools:* + +. *OpenAI Standard*: Handles standard (non-streaming) requests to OpenAI models ++ +-- +* Target: `https://api.openai.com` +* Authentication: Bearer token +* Timeout: Standard (60 seconds) +* Models: All `openai/*` models, embeddings, images, audio +-- + +. *OpenAI Streaming*: Handles streaming requests to OpenAI models ++ +-- +* Target: `https://api.openai.com` +* Authentication: Bearer token +* Timeout: Extended (300 seconds) +* Models: All `openai/*` models with streaming enabled +-- + +*Anthropic Pools:* + +. 
*Anthropic with Transform (Standard)*: Converts OpenAI format to Anthropic's native format for standard requests ++ +-- +* Target: `https://api.anthropic.com` +* Authentication: x-api-key header +* Transform: OpenAI → Anthropic Messages API +* Timeout: Standard (60 seconds) +* Models: All `anthropic/*` models via OpenAI-compatible endpoint +-- + +. *Anthropic with Transform (Streaming)*: Converts OpenAI format to Anthropic's native format for streaming requests ++ +-- +* Target: `https://api.anthropic.com` +* Authentication: x-api-key header +* Transform: OpenAI → Anthropic Messages API +* Timeout: Extended (300 seconds) +* Models: All `anthropic/*` models with streaming +-- + +. *Anthropic Native (Standard)*: Direct passthrough for native Anthropic SDK requests ++ +-- +* Target: `https://api.anthropic.com` +* Authentication: x-api-key header +* Transform: None (passthrough) +* Timeout: Standard (60 seconds) +* Endpoint: `/v1/messages` (Anthropic's native API) +-- + +. *Anthropic Native (Streaming)*: Direct passthrough for native Anthropic SDK streaming requests ++ +-- +* Target: `https://api.anthropic.com` +* Authentication: x-api-key header +* Transform: None (passthrough) +* Timeout: Extended (300 seconds) +* Endpoint: `/v1/messages` with streaming +-- + +These backend pools are immutable and cannot be modified or deleted in AI Hub mode. + +=== Routing rules + +AI Hub mode provides 17 routing rules organized across 5 priority tiers. These rules automatically direct requests to the appropriate backend pool based on request characteristics: + +*Tier 1: Model Prefix Routing* (Highest Priority) + +* `openai/*` → OpenAI backend pools +* `anthropic/*` → Anthropic backend pools (with transform) + +*Tier 2: Model Name Pattern Routing* + +* `gpt-*` → OpenAI backend pools +* `claude-*` → Anthropic backend pools +* `o1-*` → OpenAI backend pools + +*Tier 3: Special Purpose Routing* + +* Embeddings requests → OpenAI only (Anthropic doesn't support embeddings) +* Image generation → OpenAI only (DALL-E) +* Audio/speech requests → OpenAI only (Whisper, TTS) +* Content moderation → OpenAI only +* Legacy completions API → OpenAI only + +*Tier 4: Native SDK Detection* + +* Requests to `/v1/messages` → Anthropic Native backend pools (no transform) +* Requests to `/v1/chat/completions` → Transform backend pools + +*Tier 5: Streaming Detection* + +* Requests with `stream: true` → Streaming backend pools (extended timeout) +* Requests without streaming → Standard backend pools + +These routing rules are immutable and managed by Redpanda. They ensure consistent, tested behavior across all AI Hub gateways. + +=== Intelligent automatic routing + +The routing engine evaluates rules in priority order: + +. Check model prefix (`openai/*`, `anthropic/*`) +. If no prefix, check model name pattern (`gpt-*`, `claude-*`) +. Check for special request types (embeddings, images) +. Detect native SDK usage (`/v1/messages`) +. Detect streaming requirements +. Apply user preference toggles + +This multi-tier evaluation ensures requests always reach the correct provider and backend pool. + +== Configure user preferences + +While routing rules are immutable, you can customize routing behavior through user preference toggles. + +include::ai-agents:partial$ai-hub-preference-toggles.adoc[] + +=== Set preferences via Console + +. Navigate to your AI Hub gateway. +. Click *Settings* → *Preferences*. +. 
Toggle preferences as needed: ++ +-- +* Enable `auto_route_vision` if your teams use image analysis +* Enable `auto_route_long_context` if you process large documents +* Enable `rate_limit_resilience` for higher availability +-- ++ +. Click *Save Changes*. + +Changes take effect immediately for new requests. + +=== Set preferences via API + +[,bash] +---- +curl https://api.redpanda.com/v1/gateways/${GATEWAY_ID}/ai-hub/preferences \ + -H "Authorization: Bearer ${REDPANDA_CLOUD_TOKEN}" \ + -H "Content-Type: application/json" \ + -X PATCH \ + -d '{ + "preferences": { + "infer_provider_from_model_name": true, + "auto_route_vision": false, + "auto_route_long_context": false, + "long_context_threshold_tokens": 100000, + "rate_limit_resilience": false, + "rate_limit_cooldown_seconds": 60, + "cost_optimization": "none", + "fallback_provider": "openai" + } + }' +---- + +== Manage provider credentials + +AI Hub gateways require provider credentials to route requests. Credentials are stored encrypted and shared across all gateways in your workspace. + +=== Add OpenAI credentials + +. Navigate to *Settings* → *Providers*. +. Select *OpenAI*. +. Click *Configure* (or *Edit* if already configured). +. Enter your OpenAI API Key: ++ +-- +* Obtain from: https://platform.openai.com/api-keys +* Format: `sk-...` (starts with `sk-`) +-- ++ +. Click *Save*. + +All AI Hub gateways in the workspace can now route to OpenAI. + +=== Add Anthropic credentials + +. Navigate to *Settings* → *Providers*. +. Select *Anthropic*. +. Click *Configure* (or *Edit* if already configured). +. Enter your Anthropic API Key: ++ +-- +* Obtain from: https://console.anthropic.com/settings/keys +* Format: `sk-ant-...` (starts with `sk-ant-`) +-- ++ +. Click *Save*. + +All AI Hub gateways in the workspace can now route to Anthropic. + +=== Credential rotation + +To rotate credentials without downtime: + +. Add a new API key to the provider configuration (don't delete the old one yet). +. Wait for the new key to propagate (approximately 5 minutes). +. Test with a sample request to verify the new key works. +. Delete the old API key. + +AI Gateway automatically load-balances across multiple API keys if you configure more than one per provider. + +=== Verify credentials + +Test that credentials are working: + +[,bash] +---- +curl ${GATEWAY_ENDPOINT}/models \ + -H "Authorization: Bearer ${REDPANDA_CLOUD_TOKEN}" +---- + +Expected response: List of available models from configured providers. + +If you see authentication errors, verify that: + +* Provider credentials are correctly entered +* API keys have not expired +* Provider accounts have sufficient credits + +== Protected resources + +In AI Hub mode, certain resources are protected to ensure reliability and consistency. + +*Cannot be modified or deleted:* + +* Backend pool definitions (6 pools) +* Core routing rules (17 rules) +* Failover logic +* Provider selection algorithms + +*Can be configured:* + +* Provider credentials +* User preference toggles (6 available) +* Rate limits (per-gateway, per-user) +* Spend limits (monthly budgets) + +If you attempt to modify protected resources through the API, you will receive an error indicating the resource is managed by AI Hub and cannot be modified directly. 
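
If you manage gateways programmatically, you can check whether a resource is AI Hub-managed before attempting a change. The sketch below assumes the Gateway API endpoint shown in the ejection guide's export script and an `ai_hub_managed` flag in the response; the exact response shape is an assumption, so verify it against your environment.

[,python]
----
# Sketch: list backend pools and report which ones AI Hub manages (read-only).
# Assumes GET /v1/gateways/{id}/backend-pools returns a JSON list of pools,
# each carrying a `name` and an `ai_hub_managed` flag. Verify field names
# against the actual API response before relying on this.
import os
import requests

gateway_id = os.environ["GATEWAY_ID"]
token = os.environ["REDPANDA_CLOUD_TOKEN"]

resp = requests.get(
    f"https://api.redpanda.com/v1/gateways/{gateway_id}/backend-pools",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()

for pool in resp.json().get("backend_pools", []):
    status = "managed by AI Hub (read-only)" if pool.get("ai_hub_managed") else "editable"
    print(f"{pool.get('name', 'unnamed pool')}: {status}")
----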
+ +=== Why resources are protected + +Protected resources ensure that: + +* Routing behavior is consistent across all AI Hub gateways +* Security updates and improvements are automatically applied +* Provider integrations remain compatible with new model releases +* Support teams can diagnose issues without custom configurations + +=== How to gain control + +If you need to modify backend pools or routing rules, eject the gateway to Custom mode. See xref:ai-agents:ai-gateway/admin/eject-to-custom-mode.adoc[] for details. + +== Monitor usage + +AI Hub gateways provide the same observability features as Custom mode gateways. + +=== Observability dashboard + +. Navigate to *AI Gateway* → *Gateways* → Your Gateway. +. Click *Observability*. + +View metrics including: + +* Request volume per model and provider +* Token usage (prompt and completion tokens) +* Estimated spend per model +* Latency metrics (p50, p95, p99) +* Error rates and types +* Success rate per provider + +=== Cost tracking + +AI Hub gateways automatically track costs across both providers: + +* OpenAI costs: Based on official OpenAI pricing +* Anthropic costs: Based on official Anthropic pricing + +View cost estimates in: + +* Real-time dashboard (current day) +* Historical reports (daily, weekly, monthly) +* Cost breakdown by model, provider, team + +Set spend limits to control costs: + +. Navigate to *Settings* → *Spend Limits*. +. Configure monthly budget (for example, $5,000/month). +. Set alerting thresholds (for example, alert at 80% of budget). + +== Troubleshooting + +=== Provider authentication errors + +*Symptom*: Requests fail with 401 Unauthorized errors + +*Causes and solutions*: + +* Invalid API key: Verify key is correct in provider configuration +* Expired API key: Generate a new key from provider console +* Insufficient credits: Check provider account balance + +*Test authentication*: + +[,bash] +---- +curl ${GATEWAY_ENDPOINT}/chat/completions \ + -H "Authorization: Bearer ${REDPANDA_CLOUD_TOKEN}" \ + -H "Content-Type: application/json" \ + -d '{ + "model": "openai/gpt-5.2-mini", + "messages": [{"role": "user", "content": "test"}], + "max_tokens": 5 + }' +---- + +=== Requests routing to unexpected provider + +*Symptom*: Requests go to OpenAI when you expected Anthropic (or vice versa) + +*Common causes*: + +* Model prefix missing: Use `anthropic/claude-sonnet-4.5` not `claude-sonnet-4.5` +* Provider preference override: Check preference toggles +* Special routing rule: Embeddings always route to OpenAI + +*Debug routing*: + +. Check observability dashboard for actual routing +. Review model string in request +. 
Verify preference toggles are set as expected + +=== High latency + +*Symptom*: Requests take longer than expected + +*Common causes*: + +* Large responses: Requests generating many tokens take longer +* Provider latency: Check provider status pages +* Network issues: Test connectivity to gateway endpoint + +*Optimization strategies*: + +* Enable streaming for faster time-to-first-token +* Use smaller models for simple requests (enable `cost_optimization` when available) +* Set appropriate `max_tokens` limits + +== When to eject to Custom mode + +Consider ejecting to Custom mode when: + +* You need custom routing rules not covered by AI Hub's 17 rules +* You want to modify backend pool configuration (timeouts, retries) +* You need to integrate with providers not supported by AI Hub (Azure OpenAI, AWS Bedrock) +* You want provider-specific features not available through the unified API +* Your requirements have grown beyond AI Hub's pre-configured capabilities + +Most organizations start with AI Hub and eject only when they outgrow its capabilities. + +For ejection instructions, see xref:ai-agents:ai-gateway/admin/eject-to-custom-mode.adoc[]. + +== Next steps + +Now that you've configured your AI Hub gateway: + +* xref:ai-agents:ai-gateway/builders/use-ai-hub-gateway.adoc[Share this guide with builders] - Help your teams connect to the gateway +* xref:ai-agents:ai-gateway/admin/eject-to-custom-mode.adoc[Learn about ejecting to Custom mode] - Understand the transition path if you need more control +* xref:ai-agents:ai-gateway/gateway-architecture.adoc[Deep dive into architecture] - Understand how AI Hub routing works diff --git a/modules/ai-agents/partials/ai-hub/eject-to-custom-mode.adoc b/modules/ai-agents/partials/ai-hub/eject-to-custom-mode.adoc new file mode 100644 index 000000000..57b7c1457 --- /dev/null +++ b/modules/ai-agents/partials/ai-hub/eject-to-custom-mode.adoc @@ -0,0 +1,400 @@ += Eject to Custom Mode +:description: Transition an AI Hub gateway to Custom mode for full configuration control. +:page-topic-type: how-to +:personas: platform_admin +:learning-objective-1: Evaluate the implications of ejecting from AI Hub to Custom mode +:learning-objective-2: Prepare for ejection by documenting current configuration +:learning-objective-3: Execute the ejection process and configure the gateway post-ejection + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] + +Ejecting a gateway from AI Hub mode to Custom mode is a one-way transition that gives you full control over all routing rules, backend pools, and policies. After ejection, the gateway behaves exactly like a Custom mode gateway. + +This guide walks administrators through the ejection process, from preparation to post-ejection configuration. + +After reading this page, you will be able to: + +* [ ] Evaluate the implications of ejecting from AI Hub to Custom mode +* [ ] Prepare for ejection by documenting current configuration +* [ ] Execute the ejection process and configure the gateway post-ejection + +== What ejection means + +Ejecting an AI Hub gateway to Custom mode is a significant configuration change that fundamentally alters how the gateway operates. 
+ +=== Changes that occur + +When you eject an AI Hub gateway: + +* *Gateway mode changes*: Redpanda changes `gateway.mode` from `ai_hub` to `custom` +* *Resources become editable*: The gateway sets all `ai_hub_managed` metadata to `false`, making previously protected resources editable and deletable +* *Backend pools unlocked*: You can now modify, delete, or add backend pools +* *Routing rules unlocked*: You can now modify, delete, or add routing rules +* *Preferences removed*: The 6 preference toggles are deleted (you configure routing directly instead) +* *Version updates stop*: No more automatic AI Hub version updates +* *Full control*: Complete flexibility over configuration + +=== What stays the same + +Ejection preserves: + +* *Gateway ID*: Unchanged - applications continue using the same gateway endpoint +* *Gateway endpoint*: Unchanged - no URL changes required +* *Provider credentials*: Retained and continue working +* *Rate limits and spend limits*: Preserved as configured +* *Observability history*: All historical metrics and logs remain + +=== Cannot be undone + +[WARNING] +==== +Ejection is a one-way transition. You cannot revert a Custom mode gateway back to AI Hub mode. + +To get back to AI Hub mode, you must: + +. Create a new AI Hub gateway +. Migrate your applications to use the new gateway ID and endpoint +. Delete the ejected Custom mode gateway +==== + +== When to eject + +Consider ejecting when your requirements outgrow AI Hub's pre-configured capabilities. + +=== Use cases requiring Custom mode + +*Custom routing logic:* + +* Route based on customer tier, geography, or feature flags +* Implement complex fallback chains beyond simple provider failover +* Apply conditional routing based on request metadata + +*Custom backend pools:* + +* Modify timeouts or retry behavior +* Integrate with Azure OpenAI or AWS Bedrock +* Add custom headers or authentication patterns +* Configure custom health check logic + +*Provider-specific features:* + +* Use Anthropic's native `/v1/messages` API with custom parameters +* Access OpenAI features not exposed through the unified API +* Integrate with self-hosted or custom LLM providers + +*Specialized requirements:* + +* Multi-region routing with geography-based selection +* A/B testing with granular traffic splitting +* Custom cost allocation logic +* Compliance requirements for request/response handling + +=== Decision checklist + +Before ejecting, verify: + +* [ ] AI Hub's 17 routing rules cannot accommodate your requirements +* [ ] The 6 user preference toggles provide insufficient control +* [ ] You've reviewed xref:ai-agents:ai-gateway/gateway-modes.adoc[] to understand alternatives +* [ ] Your team can maintain custom routing rules and backend pools +* [ ] You've documented the business justification for ejection + +== Pre-ejection preparation + +Proper preparation ensures a smooth transition and helps you quickly configure the gateway post-ejection. + +=== Document current configuration + +Export your AI Hub configuration before ejecting: + +[,bash] +---- +#!/bin/bash + +export GATEWAY_ID="gw_abc123" +export REDPANDA_CLOUD_TOKEN="your-token" + +# 1. Export gateway details +curl https://api.redpanda.com/v1/gateways/${GATEWAY_ID} \ + -H "Authorization: Bearer ${REDPANDA_CLOUD_TOKEN}" \ + > ai-hub-config-$(date +%Y%m%d).json + +# 2. 
Export current preferences +curl https://api.redpanda.com/v1/gateways/${GATEWAY_ID}/ai-hub/preferences \ + -H "Authorization: Bearer ${REDPANDA_CLOUD_TOKEN}" \ + > ai-hub-preferences-$(date +%Y%m%d).json + +# 3. Export backend pools (read-only in AI Hub, but good to document) +curl https://api.redpanda.com/v1/gateways/${GATEWAY_ID}/backend-pools \ + -H "Authorization: Bearer ${REDPANDA_CLOUD_TOKEN}" \ + > ai-hub-backend-pools-$(date +%Y%m%d).json + +# 4. Export routing rules (read-only in AI Hub, but good to document) +curl https://api.redpanda.com/v1/gateways/${GATEWAY_ID}/routing-rules \ + -H "Authorization: Bearer ${REDPANDA_CLOUD_TOKEN}" \ + > ai-hub-routing-rules-$(date +%Y%m%d).json +---- + +Store these files securely. You'll reference them when configuring Custom mode routing rules. + +=== Plan custom configuration + +Define your post-ejection configuration: + +. *Routing rules*: Write CEL expressions that replicate AI Hub behavior, then add your custom rules +. *Backend pools*: Identify modifications needed (timeouts, custom providers, etc.) +. *Testing strategy*: Plan how you'll validate that existing functionality still works +. *Rollout approach*: Decide whether to eject immediately or test in staging first + +=== Notify users + +Communicate the upcoming change to teams using the gateway: + +[,text] +---- +Subject: [Action Required] AI Hub Gateway Ejection - [Gateway Name] + +Hi team, + +On [DATE], we will eject the AI Hub gateway "[GATEWAY_NAME]" (ID: gw_abc123) to Custom mode. This transition gives us greater flexibility for [BUSINESS JUSTIFICATION]. + +What you need to know: +- Gateway ID and endpoint will NOT change +- Your applications will continue working without code changes +- Routing behavior will remain the same initially +- After ejection, we will implement [PLANNED CHANGES] + +Timeline: +- [DATE]: Ejection scheduled +- [DATE]: Testing and validation +- [DATE]: Custom routing rules deployed + +No action required on your part. We will notify you if any changes affect your integration. + +Questions? Contact [ADMIN EMAIL] +---- + +=== Communication template for builders + +Provide builders with clear expectations: + +* Gateway ID and endpoint remain unchanged +* API behavior stays the same initially +* Custom routing rules will be implemented gradually +* Migration to a new gateway is not required + +== Ejection process + +The ejection process is irreversible. Follow these steps carefully. + +=== Step 1: Initiate ejection + +. Navigate to your gateway in the console. +. Click *Settings*. +. Click *Eject to Custom Mode* button. + +=== Step 2: Confirm understanding + +The console presents warnings about ejection: + +* [ ] I understand this is a one-way transition and cannot be undone +* [ ] I understand that backend pools and routing rules will become editable +* [ ] I understand that AI Hub version updates will stop +* [ ] I understand that preference toggles will be removed +* [ ] I have documented the current configuration +* [ ] I have notified users about this change + +Check all boxes to proceed. + +=== Step 3: Execute ejection + +. Enter the gateway name to confirm: `[Your Gateway Name]` +. Click *Eject to Custom Mode*. + +Ejection typically completes in seconds. The gateway remains available during the transition. 
+ +You can also eject via API: + +[,bash] +---- +curl -X POST https://api.redpanda.com/v1/gateways/${GATEWAY_ID}/eject \ + -H "Authorization: Bearer ${REDPANDA_CLOUD_TOKEN}" +---- + +Expected response: + +[,json] +---- +{ + "previous_mode": "ai_hub", + "new_mode": "custom", + "resources_unlocked": 23 +} +---- + +=== Step 4: Verify ejection + +After ejection completes: + +. Verify gateway mode shows as *Custom* +. Check that backend pools are now editable +. Check that routing rules are now editable +. Verify AI Hub preferences section no longer appears + +Test with a sample request: + +[,bash] +---- +curl ${GATEWAY_ENDPOINT}/chat/completions \ + -H "Authorization: Bearer ${REDPANDA_CLOUD_TOKEN}" \ + -H "Content-Type: application/json" \ + -d '{ + "model": "openai/gpt-5.2-mini", + "messages": [{"role": "user", "content": "test"}], + "max_tokens": 5 + }' +---- + +Expected: Request succeeds, proving ejection didn't break basic functionality. + +== Post-ejection configuration + +After ejection, configure the gateway to meet your custom requirements. + +=== Review unlocked resources + +The ejection preserves all backend pools and routing rules from AI Hub mode, but they're now editable: + +* *6 backend pools*: OpenAI (standard, streaming), Anthropic (transform standard/streaming, native standard/streaming) +* *17 routing rules*: All AI Hub rules are preserved but can now be modified + +=== Replicate preference toggle behavior + +If you relied on AI Hub preference toggles, replicate their behavior with custom routing rules. + +*Example: Replicate auto_route_vision* + +[,cel] +---- +// Route vision requests to GPT-5.2 (vision-capable) +request.headers["content-type"].contains("multipart") ? "openai/gpt-5.2" : request.model +---- + +*Example: Replicate auto_route_long_context* + +[,cel] +---- +// Route large prompts to Claude Opus (200K context) +request.prompt.size() > 100000 ? "anthropic/claude-opus-4.6" : request.model +---- + +*Example: Replicate fallback_provider (Anthropic)* + +[,cel] +---- +// Route to Anthropic when model cannot be inferred +!request.model.contains("/") && !request.model.startsWith("gpt-") && !request.model.startsWith("claude-") + ? "anthropic/claude-sonnet-4.5" + : request.model +---- + +For CEL routing patterns, see xref:ai-agents:ai-gateway/cel-routing-cookbook.adoc[]. + +=== Implement custom routing rules + +Now add your custom routing logic: + +. Navigate to *Routing Rules*. +. Click *Add Routing Rule*. +. Write CEL expressions for your custom routing. +. Set priority to control rule evaluation order. +. Test with sample requests. + +=== Optimize backend pools + +Modify backend pools to meet your requirements: + +* Adjust timeouts for long-running requests +* Add custom headers for authentication +* Configure health check intervals +* Add new backend pools for additional providers + +=== Test thoroughly + +Before deploying custom changes to production: + +. Test all existing request patterns +. Verify routing behavior matches expectations +. Check observability dashboard for errors +. Monitor latency and success rates + +== Impact on builders + +Ejection has minimal impact on builder applications. 
+ +=== What changes + +*For builders:* + +* Gateway mode visible as "Custom" in Console +* Can now see and understand routing rules +* May benefit from custom routing logic added by admins + +=== What stays the same + +*For builders:* + +* Gateway ID unchanged +* Gateway endpoint unchanged +* API contracts unchanged +* Authentication unchanged +* Observability unchanged + +=== Communication + +Use this template to inform builders: + +[,text] +---- +Subject: [Completed] AI Hub Gateway Ejected to Custom Mode - [Gateway Name] + +Hi team, + +We've successfully ejected the "[GATEWAY_NAME]" gateway from AI Hub to Custom mode. This change gives us greater flexibility for [BUSINESS JUSTIFICATION]. + +What this means for you: +- ✅ No code changes required +- ✅ Gateway ID (gw_abc123) remains the same +- ✅ Gateway endpoint remains the same +- ✅ Your applications continue working as before +- ℹ️ Gateway mode now shows as "Custom" in the Console +- ℹ️ We can now implement custom routing logic if needed + +Next steps: +- [DATE]: We will implement [PLANNED CUSTOM FEATURES] +- [DATE]: Performance optimization based on usage patterns + +Questions? Contact [ADMIN EMAIL] +---- + +== Cannot be undone + +If you later decide you want AI Hub mode again, you cannot revert this gateway. + +=== Alternative: Create new AI Hub gateway + +To get back to AI Hub mode: + +. Create a new AI Hub gateway +. Share new gateway ID and endpoint with teams +. Migrate applications to use the new gateway endpoint +. Monitor traffic shift in observability dashboard +. Delete old Custom mode gateway when traffic reaches zero + +== Next steps + +Now that you've ejected to Custom mode: + +* xref:ai-agents:ai-gateway/admin/setup-guide.adoc[Complete Custom mode configuration] - Configure routing rules and backend pools +* xref:ai-agents:ai-gateway/cel-routing-cookbook.adoc[Learn CEL routing patterns] - Write powerful routing expressions +* xref:ai-agents:ai-gateway/gateway-architecture.adoc[Understand architecture] - Deep dive into Custom mode architecture diff --git a/modules/ai-agents/partials/ai-hub/gateway-modes.adoc b/modules/ai-agents/partials/ai-hub/gateway-modes.adoc new file mode 100644 index 000000000..a24e3ebcd --- /dev/null +++ b/modules/ai-agents/partials/ai-hub/gateway-modes.adoc @@ -0,0 +1,274 @@ += AI Gateway Modes +:description: Understand AI Hub mode and Custom mode, and choose the right approach for your organization. +:page-topic-type: concept +:personas: evaluator, platform_admin, app_developer +:learning-objective-1: Differentiate between AI Hub and Custom gateway modes +:learning-objective-2: Determine which mode suits your use case based on configuration needs +:learning-objective-3: Identify which mode a gateway is running in using Console or API + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] + +AI Gateway supports two modes to accommodate different organizational needs: AI Hub mode for zero-configuration access and Custom mode for full control over routing and policies. 

After reading this page, you will be able to:

* [ ] Differentiate between AI Hub and Custom gateway modes
* [ ] Determine which mode suits your use case based on configuration needs
* [ ] Identify which mode a gateway is running in using Console or API

== Overview

When you create a gateway, you choose between two modes that differ in configuration complexity and control:

[cols="1,2,2",options="header"]
|===
|Aspect |AI Hub Mode |Custom Mode

|*Setup time*
|Minutes (just add API keys)
|Hours (configure everything)

|*Backend pools*
|Pre-configured (6 pools)
|User creates from scratch

|*Routing rules*
|Pre-configured (17 rules)
|User creates from scratch

|*Transforms*
|Pre-configured (OpenAI compat)
|User configures

|*API keys*
|Central (IT-managed)
|Central (IT-managed)

|*Routing preferences*
|6 configurable toggles
|N/A (full control via rules)

|*Modify backends*
|Cannot modify/delete
|Full control

|*Custom routing rules*
|Not allowed (eject first)
|Full control

|*Rate/spend limits*
|Can add custom rules
|Full control
|===

== AI Hub mode

AI Hub mode provides instant, pre-configured access to OpenAI, Anthropic, and Google Gemini with zero setup complexity.

=== What it is

AI Hub mode eliminates complex LLM gateway configuration by providing pre-built routing rules and backend pools. Platform admins add provider credentials (OpenAI, Anthropic, Google Gemini) once, and all teams immediately benefit from intelligent routing across these providers.

Teams adopting LLMs typically face significant friction: configuring backends and routing rules takes hours, different providers have incompatible APIs, and developers must learn each provider's quirks. AI Hub mode solves this by providing instant access: IT adds API keys once, and all teams benefit immediately.

=== Pre-configured components

When you create an AI Hub gateway, you automatically get:

*6 Backend Pools:*

* OpenAI (standard requests)
* OpenAI Streaming (real-time streaming responses)
* Anthropic with OpenAI-compatible transform (standard requests)
* Anthropic with OpenAI-compatible transform (streaming)
* Anthropic Native (direct passthrough for `/v1/messages` endpoint)
* Anthropic Native Streaming (direct passthrough streaming)

*17 Routing Rules* across 5 priority tiers:

* Model prefix routing: `openai/*`, `anthropic/*`
* Model name pattern routing: `gpt-*`, `claude-*`, `o1-*`
* Special routing: embeddings, images, audio → OpenAI only
* Native SDK detection: `/v1/messages` → Anthropic passthrough
* Streaming detection → Extended timeout backends

These rules are immutable and managed by Redpanda. You cannot modify or delete them, which ensures consistent, reliable behavior.

=== User-configurable preferences

While routing rules are managed, you can customize behavior through 6 preference toggles:

include::ai-agents:partial$ai-hub-preference-toggles.adoc[]

These preferences influence routing decisions without requiring you to write or maintain routing rules.
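
For example, the `infer_provider_from_model_name` toggle controls whether requests must include the vendor prefix in model names. The following is a minimal sketch of the difference from a client's perspective, using the OpenAI Python SDK with placeholder endpoint and key values:

[,python]
----
from openai import OpenAI

# Placeholder values: substitute your gateway endpoint and Redpanda API key
client = OpenAI(base_url="GATEWAY_ENDPOINT", api_key="REDPANDA_API_KEY")

# With infer_provider_from_model_name enabled (the default), a bare model
# name such as "gpt-5.2" is routed to OpenAI by pattern matching.
response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "Hello"}],
)

# With the toggle disabled, the explicit vendor prefix is required instead,
# for example model="openai/gpt-5.2".
print(response.choices[0].message.content)
----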
+ +=== Protected resources + +In AI Hub mode, system-managed resources are protected to ensure reliability: + +*Cannot be modified or deleted:* + +* Backend pool definitions +* Core routing rules +* Failover logic +* Provider selection algorithms + +*Can be configured:* + +* Provider credentials (OpenAI, Anthropic, Google Gemini) +* Preference toggles (6 available) +* Rate limits (within bounds) +* Spend limits + +This separation ensures that the underlying architecture remains stable and tested, while allowing customization of common preferences. + +=== Supported providers + +AI Hub mode currently supports: + +* *OpenAI* - `https://api.openai.com` with Bearer token authentication +* *Anthropic* - `https://api.anthropic.com` with x-api-key header + +Both providers work through a unified OpenAI-compatible API. AI Hub automatically transforms requests to Anthropic's native format when needed. + +Other providers like Google AI and AWS Bedrock are not yet supported in AI Hub mode. If you need these providers, use Custom mode instead. + +== Custom mode + +Custom mode provides full control over all aspects of gateway configuration, from routing rules to backend pools to policies. + +=== What it is + +In Custom mode, administrators configure every aspect of the gateway to meet specific requirements. You create backend pools, define routing rules using CEL expressions, configure failover behavior, and set up policies from scratch. + +This mode provides maximum flexibility for organizations with specialized requirements that AI Hub's pre-configured rules don't cover. + +=== When to use + +Choose Custom mode when you need: + +* Custom routing rules based on specific business logic (for example, route by customer tier, geography, or feature flags) +* Full control over backend pool configuration (custom timeouts, retries, health checks) +* Custom failover strategies (multi-region, specific fallback chains) +* Integration with custom infrastructure (Azure OpenAI, AWS Bedrock, self-hosted models) +* Complex routing logic that combines multiple conditions +* Specialized requirements not covered by AI Hub's 17 pre-configured rules + +Custom mode requires more setup time and maintenance, but provides complete flexibility. + +=== Configuration requirements + +In Custom mode, you must configure: + +* Backend pools: Create pools for each provider and model family +* Routing rules: Write CEL expressions to route requests +* Transforms: Configure request/response transforms if needed +* Rate limits: Define per-gateway, per-user, or per-model limits +* Spend limits: Set budget controls and alerting +* Observability: Configure logging and metrics + +For detailed setup instructions, see xref:ai-gateway/admin/setup-guide.adoc[]. 
+ +== Decision matrix + +Use this decision matrix to choose the right mode for your use case: + +[cols="2,1,1",options="header"] +|=== +|Use Case |AI Hub |Custom + +|Quick start, just want to use LLMs +|✓ Recommended +| + +|Production with OpenAI and/or Anthropic +|✓ Recommended +|✓ Possible + +|Need custom routing rules +| +|✓ Required + +|Need custom provider (Azure OpenAI, AWS Bedrock) +| +|✓ Required + +|Complex routing logic +| +|✓ Required + +|Multi-region failover +| +|✓ Required + +|Started with AI Hub, now need full control +|Eject to Custom +|✓ Target mode + +|Minimize configuration complexity +|✓ Recommended +| + +|Need provider-specific API features +| +|✓ Required +|=== + +== Identify gateway mode + +You can identify which mode a gateway is running in through the Console or API. + +include::ai-agents:partial$ai-hub-mode-indicator.adoc[] + +== Eject to Custom mode + +Gateways can be ejected from AI Hub mode to Custom mode in a one-way transition. After ejection, all previously system-managed resources become user-configurable, and the gateway behaves exactly like a Custom mode gateway. + +=== What ejection means + +When you eject an AI Hub gateway to Custom mode: + +* `gateway.mode` changes from `ai_hub` to `custom` +* All resources: `ai_hub_managed` metadata set to `false` +* Backend pools become editable and deletable +* Routing rules become editable and deletable +* You can add custom routing rules +* No more automatic AI Hub version updates +* Preference toggles are removed (configure rules directly instead) + +[WARNING] +==== +Ejection is a one-way transition and cannot be undone. To get back to AI Hub mode, you must create a new gateway and migrate your applications to it. +==== + +=== When to eject + +Consider ejecting when: + +* You need custom routing rules that AI Hub doesn't support +* You want to modify or optimize backend pool configuration +* You need to integrate with providers not supported by AI Hub +* Your requirements have grown beyond AI Hub's capabilities +* You need provider-specific features not available through the unified API + +Most organizations start with AI Hub mode and eject to Custom mode only when they outgrow the pre-configured capabilities. + +=== Ejection process + +For detailed instructions on ejecting to Custom mode, see xref:ai-gateway/admin/eject-to-custom-mode.adoc[]. + +== Next steps + +Now that you understand gateway modes: + +*For Administrators:* + +* xref:ai-gateway/admin/configure-ai-hub.adoc[Configure AI Hub Gateway] - Set up AI Hub mode +* xref:ai-gateway/admin/setup-guide.adoc[Setup Guide] - Configure Custom mode +* xref:ai-gateway/admin/eject-to-custom-mode.adoc[Eject to Custom Mode] - Transition from AI Hub to Custom + +*For Builders:* + +* xref:ai-gateway/builders/use-ai-hub-gateway.adoc[Use AI Hub Gateway] - Connect to AI Hub gateways +* xref:ai-gateway/builders/discover-gateways.adoc[Discover Gateways] - Find available gateways +* xref:ai-gateway/builders/connect-your-agent.adoc[Connect Your Agent] - Integrate your application diff --git a/modules/ai-agents/partials/ai-hub/use-ai-hub-gateway.adoc b/modules/ai-agents/partials/ai-hub/use-ai-hub-gateway.adoc new file mode 100644 index 000000000..a582361b2 --- /dev/null +++ b/modules/ai-agents/partials/ai-hub/use-ai-hub-gateway.adoc @@ -0,0 +1,429 @@ += Use AI Hub Gateway +:description: Connect to and use AI Hub mode gateways with pre-configured intelligent routing. 
+:page-topic-type: how-to +:personas: app_developer +:learning-objective-1: Identify whether a gateway is running in AI Hub mode +:learning-objective-2: Connect your application to an AI Hub gateway using the OpenAI SDK +:learning-objective-3: Describe how intelligent routing directs requests to providers + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] + +AI Hub mode gateways provide instant access to OpenAI, Anthropic, and Google Gemini with pre-configured intelligent routing. As a builder, you benefit from zero-configuration access while your administrator manages provider credentials and routing policies. + +This page shows you how to discover AI Hub gateways, connect your applications, and verify your integration. + +After reading this page, you will be able to: + +* [ ] Identify whether a gateway is running in AI Hub mode +* [ ] Connect your application to an AI Hub gateway using the OpenAI SDK +* [ ] Describe how intelligent routing directs requests to providers + +== Before you begin + +* You have access to at least one AI Hub gateway (provided by your administrator) +* You have a Redpanda Cloud API key +* You have Python 3.8+ or Node.js 18+ installed (for code examples) + +== Identify an AI Hub gateway + +Gateways can operate in two modes: AI Hub mode or Custom mode. Understanding which mode your gateway uses helps you know what to expect. + +include::ai-agents:partial$ai-hub-mode-indicator.adoc[] + +== Understanding intelligent routing + +AI Hub mode provides pre-configured routing rules that automatically direct your requests to the right provider and backend pool. + +=== How requests are routed + +AI Hub evaluates routing rules in priority order: + +*Tier 1: Model Prefix* (Highest Priority) + +When you specify a model with a provider prefix, routing is deterministic: + +* `openai/gpt-5.2` → Always routes to OpenAI +* `anthropic/claude-sonnet-4.5` → Always routes to Anthropic + +*Tier 2: Model Name Pattern* + +When you specify a model without a prefix, AI Hub uses pattern matching: + +* `gpt-5.2` → Routes to OpenAI (matches `gpt-*` pattern) +* `claude-sonnet-4.5` → Routes to Anthropic (matches `claude-*` pattern) +* `o1-preview` → Routes to OpenAI (matches `o1-*` pattern) + +*Tier 3: Special Purpose Routing* + +Certain request types always route to specific providers: + +* Embeddings requests → OpenAI only (Anthropic doesn't support embeddings) +* Image generation (DALL-E) → OpenAI only +* Audio/speech requests (Whisper, TTS) → OpenAI only + +*Tier 4: Native SDK Detection* + +AI Hub detects which SDK you're using: + +* Requests to `/v1/messages` → Anthropic native API (no transformation) +* Requests to `/v1/chat/completions` → OpenAI-compatible API (with transformation if targeting Anthropic) + +*Tier 5: Streaming Detection* + +AI Hub automatically selects appropriate backends for streaming: + +* `stream: true` in request → Routes to streaming backend pools (extended timeout) +* `stream: false` or omitted → Routes to standard backend pools + +=== User preferences that affect routing + +Your administrator may have configured preference toggles that influence routing: + +[cols="2,3",options="header"] +|=== +|Preference |Effect on Your Requests + +|*infer_provider_from_model_name* +|When enabled (default), `gpt-5.2` routes to OpenAI, `claude-sonnet-4.5` routes to Anthropic without requiring vendor prefixes. When disabled, you must use explicit prefixes like `openai/gpt-5.2`. 
|*auto_route_vision*
|Requests with images automatically route to vision-capable models when enabled

|*auto_route_long_context*
|Requests with >100K tokens automatically route to Anthropic models (200K context window) when enabled

|*rate_limit_resilience*
|429 rate limit errors trigger automatic failover to alternate providers when enabled

|*cost_optimization*
|When set to `"prefer_cheaper"`, routes simple requests to cost-effective models. When `"prefer_quality"`, prefers higher-quality models.

|*fallback_provider*
|When model inference fails, determines which provider to use: `"openai"` (default), `"anthropic"`, or `"none"` (return error)
|===

You cannot modify these preferences directly. Contact your administrator if you need different routing behavior.

=== What you cannot control

Unlike Custom mode gateways, AI Hub gateways have protected resources:

* *Cannot view routing rules*: Rules are managed by Redpanda
* *Cannot modify backend pools*: Pools are pre-configured and immutable
* *Cannot add custom rules*: AI Hub uses system-defined rules only

If you need custom routing logic, ask your administrator about ejecting to Custom mode or creating a Custom mode gateway.

== Available models

AI Hub gateways expose models from OpenAI, Anthropic, and Google Gemini based on your administrator's provider configuration.

=== OpenAI models

Common OpenAI models available through AI Hub:

* `openai/gpt-5.2` - Most capable OpenAI model, multimodal
* `openai/gpt-5.2-mini` - Cost-effective, fast
* `openai/o1-preview` - Advanced reasoning model
* `openai/o1-mini` - Cost-effective reasoning model
* `text-embedding-3-small` - Text embeddings (1536 dimensions)
* `text-embedding-3-large` - Text embeddings (3072 dimensions)

=== Anthropic models

Common Anthropic models available through AI Hub:

* `anthropic/claude-opus-4.6` - Most capable Anthropic model
* `anthropic/claude-sonnet-4.5` - Balanced capability and cost
* `anthropic/claude-haiku` - Cost-effective, fast

=== Unified API format

All models use the OpenAI-compatible `/v1/chat/completions` endpoint. AI Hub automatically transforms requests to Anthropic's native format when needed.

[,python]
----
# Same code works for both providers
from openai import OpenAI

client = OpenAI(
    base_url="",  # Gateway endpoint
    api_key="",   # Redpanda API key
)

# OpenAI model
response = client.chat.completions.create(
    model="openai/gpt-5.2-mini",
    messages=[{"role": "user", "content": "Hello"}]
)

# Anthropic model (same API!)
response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4.5",
    messages=[{"role": "user", "content": "Hello"}]
)
----

== Connect your application

Connecting to an AI Hub gateway is identical to connecting to a Custom mode gateway. The only difference is that routing happens automatically based on AI Hub's pre-configured rules.

=== Configuration requirements

To connect your application, you need:

* *Gateway Endpoint*: URL for API requests, with the gateway ID embedded in the path (for example, `https://example/gateways/gw_abc123/v1`)
* *Redpanda API Key*: Your authentication token

Your administrator provides these values when they grant you access to the gateway.

[tabs]
====
Python::
+
[,python]
----
from openai import OpenAI

# Configure client
client = OpenAI(
    base_url="",  # Gateway endpoint
    api_key="",   # Redpanda API key
)

# Send request
response = client.chat.completions.create(
    model="openai/gpt-5.2-mini",  # Vendor/model format
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is AI Gateway?"}
    ],
    max_tokens=100
)

print(response.choices[0].message.content)
----

TypeScript::
+
[,typescript]
----
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: '',  // Gateway endpoint
  apiKey: process.env.REDPANDA_API_KEY,
});

const response = await client.chat.completions.create({
  model: 'anthropic/claude-sonnet-4.5',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is AI Gateway?' }
  ],
  max_tokens: 100
});

console.log(response.choices[0].message.content);
----

cURL::
+
[,bash]
----
curl ${GATEWAY_ENDPOINT}/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${REDPANDA_API_KEY}" \
  -d '{
    "model": "anthropic/claude-sonnet-4.5",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is AI Gateway?"}
    ],
    "max_tokens": 100
  }'
----
====

== Request patterns

Follow these best practices when using AI Hub gateways.

=== Model selection

Always use the `vendor/model_id` format for explicit routing:

[,python]
----
# ✅ Recommended: Explicit provider
model = "openai/gpt-5.2"
model = "anthropic/claude-sonnet-4.5"

# ⚠️ Works but relies on pattern matching
model = "gpt-5.2"  # Routes to OpenAI via pattern matching
model = "claude-sonnet-4.5"  # Routes to Anthropic via pattern matching
----

Explicit provider prefixes ensure deterministic routing and make your code more maintainable.

=== Streaming requests

Enable streaming for faster time-to-first-token:

[,python]
----
response = client.chat.completions.create(
    model="openai/gpt-5.2-mini",
    messages=[{"role": "user", "content": "Write a story"}],
    stream=True  # Enable streaming
)

for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end='')
----

AI Hub automatically routes streaming requests to streaming backend pools with extended timeouts.

=== Error handling

Handle provider-specific errors:

[,python]
----
from openai import OpenAI, APIError, RateLimitError

try:
    response = client.chat.completions.create(
        model="openai/gpt-5.2",
        messages=[{"role": "user", "content": "Hello"}]
    )
except RateLimitError as e:
    # Rate limit hit (429)
    # If rate_limit_resilience enabled, AI Hub may have already retried
    print(f"Rate limited: {e}")
except APIError as e:
    # Other API errors
    print(f"API error: {e}")
----

If your administrator enabled `rate_limit_resilience`, AI Hub automatically retries with alternate providers on 429 errors.

== Test your integration

Validate that your integration works correctly.

=== Test connectivity

Verify you can reach the gateway:

[,bash]
----
curl ${GATEWAY_ENDPOINT}/models \
  -H "Authorization: Bearer ${REDPANDA_API_KEY}"
----

Expected: List of available models from configured providers.
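
You can also run the same check from code using the OpenAI Python SDK, which is convenient when adding a connectivity test to a script or CI job. This is a minimal sketch with placeholder endpoint and key values:

[,python]
----
from openai import OpenAI

# Placeholder values: substitute your gateway endpoint and Redpanda API key
client = OpenAI(base_url="GATEWAY_ENDPOINT", api_key="REDPANDA_API_KEY")

# List the models exposed by the gateway; IDs use the vendor/model_id format
for model in client.models.list():
    print(model.id)
----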
+ +=== Test a simple request + +Send a minimal request: + +[,bash] +---- +curl ${GATEWAY_ENDPOINT}/chat/completions \ + -H "Authorization: Bearer ${REDPANDA_API_KEY}" \ + -H "Content-Type: application/json" \ + -d '{ + "model": "openai/gpt-5.2-mini", + "messages": [{"role": "user", "content": "Say hello"}], + "max_tokens": 10 + }' +---- + +Expected: Successful completion with response content. + +=== Verify in observability + +Check the observability dashboard to confirm your requests are logged: + +. Navigate to *AI Gateway* → *Gateways* → Your Gateway. +. Click *Observability*. +. Verify your test requests appear in the logs. +. Check token usage and estimated cost. + +== Differences from Custom mode + +Understanding how AI Hub differs from Custom mode helps you set appropriate expectations. + +=== What works the same + +*API behavior:* + +* Same `/v1/chat/completions` endpoint +* Same request/response format +* Same authentication (Redpanda API key) +* Same observability dashboard + +*Discovery process:* + +* Same gateway discovery (Console or API) +* Same connectivity testing +* Same error handling patterns + +=== What is different + +*Routing visibility:* + +* *AI Hub*: Cannot view or modify routing rules (managed by system) +* *Custom*: Can view and modify all routing rules + +*Backend pools:* + +* *AI Hub*: 6 pre-configured pools, cannot modify +* *Custom*: User-defined pools, full control + +*Customization:* + +* *AI Hub*: Limited to 6 preference toggles +* *Custom*: Unlimited custom CEL routing rules + +*Provider support:* + +* *AI Hub*: OpenAI, Anthropic, and Google Gemini only +* *Custom*: Any provider (Azure OpenAI, AWS Bedrock, custom endpoints) + +=== Limitations + +AI Hub mode has intentional limitations: + +* Cannot add custom routing rules +* Cannot modify backend pool configuration +* Cannot integrate with providers other than OpenAI, Anthropic, and Google Gemini +* Cannot use provider-specific API features not exposed through unified API + +If you encounter these limitations, ask your administrator about Custom mode. + +== When to request Custom mode + +Consider requesting Custom mode or gateway ejection when: + +* You need custom routing logic (for example, route by customer tier, geography) +* You need to integrate with Azure OpenAI or AWS Bedrock +* You need provider-specific features not available through the unified API +* AI Hub's automatic routing doesn't match your requirements +* You need visibility into routing rules for debugging + +Discuss your requirements with your administrator. 
They can either: + +* Adjust AI Hub preference toggles to meet your needs +* Eject the gateway to Custom mode (one-way transition) +* Create a new Custom mode gateway for your team + +== Next steps + +Now that you're using an AI Hub gateway: + +* xref:ai-agents:ai-gateway/builders/connect-your-agent.adoc[Connect Your Agent] - Integrate AI agents with advanced patterns +* xref:ai-agents:ai-gateway/builders/discover-gateways.adoc[Discover Gateways] - Find other available gateways +* xref:ai-agents:ai-gateway/mcp-aggregation-guide.adoc[MCP Aggregation] - Use tool aggregation with AI agents diff --git a/modules/ai-agents/partials/integrations/claude-code-admin.adoc b/modules/ai-agents/partials/integrations/claude-code-admin.adoc new file mode 100644 index 000000000..1e9559182 --- /dev/null +++ b/modules/ai-agents/partials/integrations/claude-code-admin.adoc @@ -0,0 +1,498 @@ += Configure AI Gateway for Claude Code +:description: Configure Redpanda AI Gateway to support Claude Code clients. +:page-topic-type: how-to +:personas: platform_admin +:learning-objective-1: Configure AI Gateway endpoints for Claude Code connectivity +:learning-objective-2: Set up authentication and access control for Claude Code clients +:learning-objective-3: Deploy MCP tool aggregation for Claude Code tool discovery + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] + +Configure Redpanda AI Gateway to support Claude Code clients accessing LLM providers and MCP tools through a unified endpoint. + +After reading this page, you will be able to: + +* [ ] Configure AI Gateway endpoints for Claude Code connectivity. +* [ ] Set up authentication and access control for Claude Code clients. +* [ ] Deploy MCP tool aggregation for Claude Code tool discovery. + +== Prerequisites + +* AI Gateway deployed on a BYOC cluster running Redpanda version 25.3 or later +* Administrator access to the AI Gateway UI +* At least one LLM provider API key (OpenAI, Anthropic, or Google Gemini) +* Understanding of xref:ai-agents:ai-gateway/gateway-architecture.adoc[AI Gateway concepts] + +== Architecture overview + +Claude Code connects to AI Gateway through two primary endpoints: + +* LLM endpoint: `https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/v1` for chat completions +* MCP endpoint: `https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/mcp` for tool discovery and execution + +The gateway handles: + +. Authentication via bearer tokens in the `Authorization` header +. Gateway selection via the endpoint URL +. Model routing using the `vendor/model_id` format +. MCP server aggregation for multi-tool workflows +. Request logging and cost tracking per gateway + +== Enable LLM providers + +Claude Code requires access to LLM providers through the gateway. Enable at least one provider. + +=== Configure Anthropic + +Claude Code uses Anthropic models by default. To enable Anthropic: + +. Navigate to *AI Gateway* > *Providers* in the Redpanda Cloud console +. Select *Anthropic* from the provider list +. Click *Add configuration* +. Enter your Anthropic API key +. Click *Save* + +The gateway can now route requests to Anthropic models. + +=== Configure OpenAI + +To enable OpenAI as a provider: + +. Navigate to *AI Gateway* > *Providers* +. Select *OpenAI* from the provider list +. Click *Add configuration* +. Enter your OpenAI API key +. Click *Save* + +=== Enable models in the catalog + +After enabling providers, enable specific models: + +. Navigate to *AI Gateway* > *Models* +. 
Enable the models you want Claude Code clients to access ++ +Common models for Claude Code: ++ +* `anthropic/claude-opus-4.6-5` +* `anthropic/claude-sonnet-4.5` +* `openai/gpt-5.2` +* `openai/o1-mini` + +. Click *Save* + +Models appear in the catalog with the `vendor/model_id` format that Claude Code uses in requests. + +== Create a gateway for Claude Code clients + +Create a dedicated gateway to isolate Claude Code traffic and apply specific policies. + +=== Gateway configuration + +. Navigate to *AI Gateway* > *Gateways* +. Click *Create Gateway* +. Enter gateway details: ++ +[cols="1,2"] +|=== +|Field |Value + +|Name +|`claude-code-gateway` (or your preferred name) + +|Workspace +|Select the workspace for access control grouping + +|Description +|Gateway for Claude Code IDE clients +|=== + +. Click *Create* +. Copy the gateway ID from the gateway details page + +The gateway ID is embedded in the gateway endpoint URL. + +=== Configure LLM routing + +Set up routing policies for Claude Code requests. + +==== Basic routing with failover + +Configure a primary provider with automatic failover: + +. Navigate to the gateway's *LLM* tab +. Under *Routing*, click *Add route* +. Configure the route: ++ +[source,cel] +---- +true # Matches all requests +---- + +. Add a *Primary provider pool*: ++ +* Provider: Anthropic +* Model: All enabled Anthropic models +* Load balancing: Round robin (if multiple Anthropic configurations exist) + +. Add a *Fallback provider pool*: ++ +* Provider: OpenAI +* Model: All enabled OpenAI models +* Failover conditions: Rate limits, timeouts, 5xx errors + +. Click *Save* + +Claude Code requests route to Anthropic by default and fail over to OpenAI if Anthropic is unavailable. + +==== User-based routing + +Route requests based on user identity (if Claude Code passes user identifiers): + +[source,cel] +---- +request.headers["x-user-tier"][0] == "premium" +---- + +Create separate routes: + +* Premium route: Claude Opus 4.6.5 (highest quality) +* Standard route: Claude Sonnet 4.5 (balanced cost and quality) + +=== Apply rate limits + +Prevent runaway usage from Claude Code clients: + +. Navigate to the gateway's *LLM* tab +. Under *Rate Limit*, configure: ++ +[cols="1,2"] +|=== +|Setting |Recommended Value + +|Global rate limit +|100 requests per minute + +|Per-user rate limit +|10 requests per minute (if using user headers) +|=== + +. Click *Save* + +The gateway blocks requests exceeding these limits and returns HTTP 429 errors. + +=== Set spending limits + +Control LLM costs: + +. Under *Spend Limit*, configure: ++ +[cols="1,2"] +|=== +|Setting |Value + +|Monthly budget +|$5,000 (adjust based on expected usage) + +|Enforcement +|Block requests after budget exceeded +|=== + +. Click *Save* + +The gateway tracks estimated costs per request and blocks traffic when the monthly budget is exhausted. + +== Configure MCP tool aggregation + +Enable Claude Code to discover and use tools from multiple MCP servers through a single endpoint. + +=== Add MCP servers + +. Navigate to the gateway's *MCP* tab +. Click *Add MCP Server* +. Enter server details: ++ +[cols="1,2"] +|=== +|Field |Value + +|Display name +|Descriptive name (for example, `redpanda-data-catalog`) + +|Endpoint URL +|MCP server endpoint (for example, xref:ai-agents:mcp/remote/overview.adoc[Remote MCP server] URL) + +|Authentication +|Bearer token or other authentication mechanism +|=== + +. Click *Save* + +Repeat for each MCP server you want to aggregate. 
+ +=== Enable deferred tool loading + +Reduce token costs by deferring tool discovery: + +. Under *MCP Settings*, enable *Deferred tool loading* +. Click *Save* + +When enabled: + +* Claude Code initially receives only a search tool and orchestrator tool +* Claude Code queries for specific tools by name when needed +* Token usage decreases by 80-90% for agents with many tools configured + +=== Add the MCP orchestrator + +The MCP orchestrator reduces multi-step workflows to single calls: + +. Under *MCP Settings*, enable *MCP Orchestrator* +. Configure: ++ +[cols="1,2"] +|=== +|Setting |Value + +|Orchestrator model +|Select a model with strong code generation capabilities (for example, `anthropic/claude-sonnet-4.5`) + +|Execution timeout +|30 seconds +|=== + +. Click *Save* + +Claude Code can now invoke the orchestrator tool to execute complex, multi-step operations in a single request. + +== Configure authentication + +Claude Code clients authenticate using bearer tokens. + +=== Generate API tokens + +. Navigate to *Security* > *API Tokens* in the Redpanda Cloud console +. Click *Create Token* +. Enter token details: ++ +[cols="1,2"] +|=== +|Field |Value + +|Name +|`claude-code-access` + +|Scopes +|`ai-gateway:read`, `ai-gateway:write` + +|Expiration +|Set appropriate expiration based on security policies +|=== + +. Click *Create* +. Copy the token (it appears only once) + +Distribute this token to Claude Code users through secure channels. + +=== Token rotation + +Implement token rotation for security: + +. Create a new token before the existing token expires +. Distribute the new token to users +. Monitor usage of the old token in (observability dashboard) +. Revoke the old token after all users have migrated + +== Configure Claude Code clients + +Provide these instructions to users configuring Claude Code. + +=== CLI configuration + +Users can configure Claude Code using the CLI: + +[source,bash] +---- +claude mcp add \ + --transport http \ + redpanda-aigateway \ + https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/mcp \ + --header "Authorization: Bearer YOUR_API_TOKEN" +---- + +Replace: + +* `{CLUSTER_ID}`: Your Redpanda cluster ID +* `YOUR_API_TOKEN`: The API token generated earlier + +=== Configuration file + +Alternatively, users can edit `~/.claude.json` (user-level) or `.mcp.json` (project-level): + +[source,json] +---- +{ + "mcpServers": { + "redpanda-ai-gateway": { + "type": "http", + "url": "https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/mcp", + "headers": { + "Authorization": "Bearer YOUR_API_TOKEN" + } + } + } +} +---- + +This configuration: + +* Connects Claude Code to the aggregated MCP endpoint +* Includes authentication headers + +== Monitor Claude Code usage + +Track Claude Code activity through gateway observability features. + +=== View request logs + +. Navigate to *AI Gateway* > *Observability* > *Logs* +. Filter by gateway ID: `claude-code-gateway` +. Review: ++ +* Request timestamps and duration +* Model used per request +* Token usage (prompt and completion tokens) +* Estimated cost per request +* HTTP status codes and errors + +=== Analyze metrics + +. Navigate to *AI Gateway* > *Observability* > *Metrics* +. Select the Claude Code gateway +. 
Review: ++ +[cols="1,2"] +|=== +|Metric |Purpose + +|Request volume +|Identify usage patterns and peak times + +|Token usage +|Track consumption trends + +|Estimated spend +|Monitor costs against budget + +|Latency (p50, p95, p99) +|Detect performance issues + +|Error rate +|Identify failing requests or misconfigured clients +|=== + + +=== Query logs via API + +Programmatically access logs for integration with monitoring systems: + +[source,bash] +---- +curl https://{CLUSTER_ID}.cloud.redpanda.com/api/ai-gateway/logs \ + -H "Authorization: Bearer YOUR_API_TOKEN" \ + -H "Content-Type: application/json" \ + -d '{ + "gateway_id": "GATEWAY_ID", + "start_time": "2026-01-01T00:00:00Z", + "end_time": "2026-01-14T23:59:59Z", + "limit": 100 + }' +---- + +== Security considerations + +Apply these security best practices for Claude Code deployments. + +=== Limit token scope + +Create tokens with minimal required scopes: + +* `ai-gateway:read`: Required for MCP tool discovery +* `ai-gateway:write`: Required for LLM requests and tool execution + +Avoid granting broader scopes like `admin` or `cluster:write`. + +=== Implement network restrictions + +If Claude Code clients connect from known IP ranges, configure network policies: + +. Use cloud provider security groups to restrict access to AI Gateway endpoints +. Allowlist only the IP ranges where Claude Code clients operate +. Monitor for unauthorized access attempts in request logs + +=== Enforce token expiration + +Set short token lifetimes for high-security environments: + +* Development environments: 90 days +* Production environments: 30 days + +Automate token rotation to reduce manual overhead. + +=== Audit tool access + +Review which MCP tools Claude Code clients can access: + +. Periodically audit the MCP servers configured in the gateway +. Remove unused or deprecated MCP servers +. Monitor tool execution logs for unexpected behavior + +== Troubleshooting + +Common issues and solutions when configuring AI Gateway for Claude Code. + +=== Claude Code cannot connect to gateway + +Symptom: Connection errors when Claude Code tries to discover tools or send LLM requests. + +Causes and solutions: + +* **Invalid gateway endpoint**: Verify the gateway endpoint URL matches the endpoint from the console +* **Expired token**: Generate a new API token and update the Claude Code configuration +* **Network connectivity**: Verify the cluster endpoint is accessible from the client network +* **Provider not enabled**: Ensure at least one LLM provider is enabled and has models in the catalog + +=== Tools not appearing in Claude Code + +Symptom: Claude Code does not discover MCP tools. + +Causes and solutions: + +* **MCP servers not configured**: Add MCP server endpoints in the gateway's MCP tab +* **Deferred loading enabled but search failing**: Check that the search tool is correctly configured +* **MCP server authentication failing**: Verify MCP server authentication credentials in the gateway configuration + +=== High costs or token usage + +Symptom: Token usage and costs exceed expectations. 
+ +Causes and solutions: + +* **Deferred tool loading disabled**: Enable deferred tool loading to reduce tokens by 80-90% +* **No rate limits**: Apply per-minute rate limits to prevent runaway usage +* **Missing spending limits**: Set monthly budget limits with blocking enforcement +* **Expensive models**: Route to cost-effective models (for example, Claude Sonnet instead of Opus) for non-critical requests + +=== Requests failing with 429 errors + +Symptom: Claude Code receives HTTP 429 Too Many Requests errors. + +Causes and solutions: + +* **Rate limit exceeded**: Review and increase rate limits if usage is legitimate +* **Upstream provider rate limits**: Check if the upstream LLM provider is rate-limiting; configure failover pools +* **Budget exhausted**: Verify monthly spending limit has not been reached + +== Next steps + +* xref:ai-agents:ai-gateway/cel-routing-cookbook.adoc[]: Implement advanced routing rules +* xref:ai-agents:mcp/remote/overview.adoc[]: Deploy Remote MCP servers for custom tools diff --git a/modules/ai-agents/partials/integrations/claude-code-user.adoc b/modules/ai-agents/partials/integrations/claude-code-user.adoc new file mode 100644 index 000000000..9f3358485 --- /dev/null +++ b/modules/ai-agents/partials/integrations/claude-code-user.adoc @@ -0,0 +1,409 @@ += Configure Claude Code with AI Gateway +:description: Configure Claude Code to use Redpanda AI Gateway for unified LLM access and MCP tool aggregation. +:page-topic-type: how-to +:personas: ai_agent_developer, app_developer +:learning-objective-1: Configure Claude Code to connect to AI Gateway endpoints +:learning-objective-2: Set up MCP server integration through AI Gateway +:learning-objective-3: Verify Claude Code is routing requests through the gateway + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] + +After xref:ai-agents:ai-gateway/gateway-quickstart.adoc[configuring your AI Gateway], set up Claude Code to route LLM requests and access MCP tools through the gateway's unified endpoints. + +After reading this page, you will be able to: + +* [ ] Configure Claude Code to connect to AI Gateway endpoints. +* [ ] Set up MCP server integration through AI Gateway. +* [ ] Verify Claude Code is routing requests through the gateway. + +== Prerequisites + +Before configuring Claude Code, ensure you have: + +* Claude Code CLI installed (download from https://github.com/anthropics/claude-code[Anthropic's GitHub^]) +* An active Redpanda AI Gateway with: +** At least one LLM provider enabled (see xref:ai-agents:ai-gateway/gateway-quickstart.adoc#step-1-enable-a-provider[Enable a provider]) +** A gateway created and configured (see xref:ai-agents:ai-gateway/gateway-quickstart.adoc#step-3-create-a-gateway[Create a gateway]) +* Your AI Gateway credentials: +** Gateway endpoint URL (for example, `\https://gw-abc123.ai.panda.com`) +** API key with access to the gateway + +== Configuration methods + +Claude Code supports two configuration approaches for connecting to AI Gateway: + +[cols="1,2,2"] +|=== +|Method |Best for |Trade-offs + +|CLI command +|Quick setup, single gateway +|Must re-run if configuration changes + +|Configuration file +|Multiple gateways, complex setups, version control +|Manual file editing required +|=== + +Choose the method that matches your workflow. The CLI command is faster for getting started, while the configuration file provides more flexibility for production use. 

== Configure using CLI

The `claude mcp add` command configures Claude Code to connect to your AI Gateway's MCP endpoint.

=== Add MCP server connection

[,bash]
----
claude mcp add \
  --transport http \
  redpanda-aigateway \
  {gateway-url}/mcp \
  --header "Authorization: Bearer YOUR_API_KEY"
----

Replace the following values:

* `{gateway-url}/mcp` - Your gateway's MCP endpoint
* `YOUR_API_KEY` - Your Redpanda API key

This command configures the HTTP transport for MCP, which allows Claude Code to discover and invoke tools from all MCP servers configured in your gateway.

=== Configure LLM routing through gateway

To route Claude Code's LLM requests through the gateway instead of directly to Anthropic:

[,bash]
----
claude config set \
  --api-provider redpanda \
  --base-url {gateway-url}
----

This routes all Claude model requests through your gateway, giving you centralized observability and policy enforcement.

== Configure using configuration file

For more complex configurations or when managing multiple gateways, edit the Claude Code configuration file directly.

=== Locate configuration file

Claude Code stores configuration in:

* macOS/Linux: `~/.claude.json` (user-level) or `.mcp.json` (project-level)
* Windows: `%USERPROFILE%\.claude.json`

=== Basic configuration

Create or edit `~/.claude.json` with the following structure:

[,json]
----
{
  "mcpServers": {
    "redpanda-ai-gateway": {
      "type": "http",
      "url": "{gateway-url}/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
----

Replace placeholder values:

* `{gateway-url}` - Your gateway endpoint URL
* `YOUR_API_KEY` - Your Redpanda API key

=== Multiple gateway configuration

To configure different gateways for development and production:

[,json]
----
{
  "mcpServers": {
    "redpanda-staging": {
      "type": "http",
      "url": "{staging-gateway-url}/mcp",
      "headers": {
        "Authorization": "Bearer STAGING_API_KEY"
      }
    },
    "redpanda-production": {
      "type": "http",
      "url": "{production-gateway-url}/mcp",
      "headers": {
        "Authorization": "Bearer PROD_API_KEY"
      }
    }
  }
}
----

Switch between gateways by selecting the appropriate MCP server when using Claude Code.

=== Configuration with environment variables

For sensitive credentials, use environment variables instead of hardcoding values:

[,json]
----
{
  "mcpServers": {
    "redpanda-ai-gateway": {
      "type": "http",
      "url": "${REDPANDA_GATEWAY_URL}/mcp",
      "headers": {
        "Authorization": "Bearer ${REDPANDA_API_KEY}"
      }
    }
  }
}
----

NOTE: Claude Code supports `${VAR}` interpolation syntax in the `mcpServers` section. The variables `REDPANDA_GATEWAY_URL` and `REDPANDA_API_KEY` will be resolved from environment variables at runtime.

Set environment variables before launching Claude Code:

[,bash]
----
export REDPANDA_GATEWAY_URL="your-gateway-endpoint-url"
export REDPANDA_API_KEY="your-api-key"
----

On Windows (PowerShell):

[,powershell]
----
$env:REDPANDA_GATEWAY_URL = "your-gateway-endpoint-url"
$env:REDPANDA_API_KEY = "your-api-key"
----

== Verify configuration

After configuring Claude Code, verify it connects correctly to your AI Gateway.

=== Test MCP tool discovery

List available MCP tools to confirm Claude Code can reach your gateway's MCP endpoint:

[,bash]
----
claude mcp list
----

Expected output should show:

* The `redpanda-ai-gateway` server connection
* Status: Connected
* Available tools from your configured MCP servers

If deferred tool loading is enabled in your gateway, you'll see a search tool and the MCP orchestrator tool instead of all tools upfront.

=== Verify gateway routing

Check that requests route through the gateway by monitoring the AI Gateway dashboard:

. Open the Redpanda Cloud Console
. Navigate to your gateway's observability dashboard
. Send a test request from Claude Code:
+
[,bash]
----
echo "Write a simple Python hello world function" | claude
----

. Refresh the dashboard and verify:
** Request appears in the logs
** Model shows as `anthropic/claude-sonnet-4.5` (or your configured model)
** Request succeeded (status 200)
** Token usage and estimated cost are recorded

If the request doesn't appear in the dashboard, see <<troubleshooting>>.

== Advanced configuration

=== Custom request timeout

Configure timeout for MCP requests in the configuration file:

[,json]
----
{
  "mcpServers": {
    "redpanda-ai-gateway": {
      "type": "http",
      "url": "{gateway-url}/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      },
      "timeout": 30000
    }
  }
}
----

The `timeout` value is in milliseconds. Default is 10000 (10 seconds). Increase this for MCP tools that perform long-running operations.

=== Debug mode

Enable debug logging to troubleshoot connection issues:

[,bash]
----
export CLAUDE_DEBUG=1
claude
----

Debug mode shows:

* HTTP request and response headers
* MCP tool discovery messages
* Gateway routing decisions (if exposed in response headers)
* Error details

[[troubleshooting]]
== Troubleshooting

=== MCP server not connecting

**Symptom**: `claude mcp list` shows "Connection failed" or no tools available.

**Causes and solutions**:

. **Incorrect endpoint URL**
+
Verify your MCP endpoint is correct. It should be `{gateway-url}/mcp`, not just `{gateway-url}`.
+
[,bash]
----
# Correct
{gateway-url}/mcp

# Incorrect
{gateway-url}
----

. **Authentication failure**
+
Check that your API key is valid and has access to the gateway:
+
[,bash]
----
curl -H "Authorization: Bearer YOUR_API_KEY" \
  {gateway-url}/mcp
----
+
You should receive a valid MCP protocol response. If you get `401 Unauthorized`, regenerate your API key in the Redpanda Cloud Console.

. **Gateway endpoint URL mismatch**
+
Verify your gateway endpoint URL matches exactly. Copy it directly from the AI Gateway UI rather than typing it manually.

. **Network connectivity issues**
+
Test basic connectivity to the gateway endpoint:
+
[,bash]
----
curl -I {gateway-url}/mcp
----
+
If this times out, check your network configuration, firewall rules, or VPN connection.

=== Requests not appearing in gateway dashboard

**Symptom**: Claude Code works, but you don't see requests in the AI Gateway observability dashboard.

**Causes and solutions**:

. **Wrong gateway configured**
+
Verify that the gateway endpoint URL in your configuration matches the gateway you're viewing in the dashboard.

. **Log ingestion delay**
+
Gateway logs can take 5-10 seconds to appear in the dashboard. Wait briefly and refresh.

. **Model name format error**
+
Ensure requests use the `vendor/model_id` format (for example, `anthropic/claude-sonnet-4.5`), not just the model name (for example, `claude-sonnet-4.5`).

=== High latency after gateway integration

**Symptom**: Requests are slower after routing through the gateway.

**Causes and solutions**:

. **Gateway geographic distance**
+
If your gateway is in a different region than you or the upstream provider, this adds network latency. Check gateway region in the Redpanda Cloud Console.

. 
**Provider pool failover** ++ +If your gateway is configured with fallback providers, check the logs to see if requests are failing over. Failover adds latency. + +. **MCP tool aggregation overhead** ++ +Aggregating tools from multiple MCP servers adds processing time. Use deferred tool loading to reduce this overhead (see xref:ai-agents:ai-gateway/mcp-aggregation-guide.adoc[]). + +. **Rate limiting** ++ +If you're hitting rate limits, the gateway may be queuing requests. Check the observability dashboard for rate limit metrics. + +=== Configuration file not loading + +**Symptom**: Changes to `.claude.json` don't take effect. + +**Solutions**: + +. **Restart Claude Code** ++ +Configuration changes require restarting Claude Code: ++ +[,bash] +---- +# Kill any running Claude Code processes +pkill claude + +# Start Claude Code again +claude +---- + +. **Validate JSON syntax** ++ +Ensure your `.claude.json` is valid JSON. Use a JSON validator: ++ +[,bash] +---- +python3 -m json.tool ~/.claude.json +---- + +. **Check file permissions** ++ +Verify Claude Code can read the configuration file: ++ +[,bash] +---- +ls -la ~/.claude.json +---- ++ +The file should be readable by your user. If not, fix permissions: ++ +[,bash] +---- +chmod 600 ~/.claude.json +---- + +== Next steps + +* xref:ai-agents:ai-gateway/mcp-aggregation-guide.adoc[]: Configure deferred tool loading to reduce token costs +* xref:ai-agents:ai-gateway/cel-routing-cookbook.adoc[]: Use CEL expressions to route Claude Code requests based on context + +== Related pages + +* xref:ai-agents:ai-gateway/gateway-quickstart.adoc[]: Create and configure your AI Gateway +* xref:ai-agents:ai-gateway/gateway-architecture.adoc[]: Learn about AI Gateway architecture and benefits diff --git a/modules/ai-agents/partials/integrations/cline-admin.adoc b/modules/ai-agents/partials/integrations/cline-admin.adoc new file mode 100644 index 000000000..12b63b345 --- /dev/null +++ b/modules/ai-agents/partials/integrations/cline-admin.adoc @@ -0,0 +1,579 @@ += Configure AI Gateway for Cline +:description: Configure Redpanda AI Gateway to support Cline clients. +:page-topic-type: how-to +:personas: platform_admin +:learning-objective-1: Configure AI Gateway endpoints for Cline connectivity +:learning-objective-2: Set up authentication and access control for Cline clients +:learning-objective-3: Deploy MCP tool aggregation for Cline tool discovery + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] + +Configure Redpanda AI Gateway to support Cline (formerly Claude Dev) clients accessing LLM providers and MCP tools through a unified endpoint. + +After reading this page, you will be able to: + +* [ ] Configure AI Gateway endpoints for Cline connectivity. +* [ ] Set up authentication and access control for Cline clients. +* [ ] Deploy MCP tool aggregation for Cline tool discovery. + +== Prerequisites + +* AI Gateway deployed on a BYOC cluster running Redpanda version 25.3 or later +* Administrator access to the AI Gateway UI +* At least one LLM provider API key (Anthropic or OpenAI) +* Understanding of xref:ai-agents:ai-gateway/gateway-architecture.adoc[AI Gateway concepts] + +== About Cline + +Cline is a VS Code extension designed for autonomous AI development workflows. It connects to Claude models through the native Anthropic API format, sending requests to `/v1/messages` endpoints. Cline supports long-running tasks, browser integration, and autonomous operations, with full MCP support for tool discovery and execution. 
+ +Key characteristics: + +* Uses native Anthropic format (compatible with OpenAI-compatible endpoints) +* Designed for autonomous, multi-step workflows +* Supports MCP protocol for external tool integration +* Operates as a VS Code extension with persistent context +* Requires configuration similar to Claude Code + +== Architecture overview + +Cline connects to AI Gateway through two primary endpoints: + +* LLM endpoint: `https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/v1` for chat completions +* MCP endpoint: `https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/mcp` for tool discovery and execution + +The gateway handles: + +. Authentication via bearer tokens in the `Authorization` header +. Model routing using the `vendor/model_id` format +. MCP server aggregation for multi-tool workflows +. Request logging and cost tracking per gateway + +== Enable LLM providers + +Cline requires access to LLM providers through the gateway. Enable at least one provider. + +=== Configure Anthropic + +Cline uses Anthropic models by default. To enable Anthropic: + +. Navigate to *AI Gateway* > *Providers* in the Redpanda Cloud console +. Select *Anthropic* from the provider list +. Click *Add configuration* +. Enter your Anthropic API key +. Click *Save* + +The gateway can now route requests to Anthropic models. + +=== Configure OpenAI + +To enable OpenAI as a provider: + +. Navigate to *AI Gateway* > *Providers* +. Select *OpenAI* from the provider list +. Click *Add configuration* +. Enter your OpenAI API key +. Click *Save* + +=== Enable models in the catalog + +After enabling providers, enable specific models: + +. Navigate to *AI Gateway* > *Models* +. Enable the models you want Cline clients to access ++ +Common models for Cline: ++ +* `anthropic/claude-opus-4.6-5` +* `anthropic/claude-sonnet-4.5` +* `openai/gpt-5.2` +* `openai/o1-mini` + +. Click *Save* + +Models appear in the catalog with the `vendor/model_id` format that Cline uses in requests. + +== Create a gateway for Cline clients + +Create a dedicated gateway to isolate Cline traffic and apply specific policies. + +=== Gateway configuration + +. Navigate to *AI Gateway* > *Gateways* +. Click *Create Gateway* +. Enter gateway details: ++ +[cols="1,2"] +|=== +|Field |Value + +|Name +|`cline-gateway` (or your preferred name) + +|Workspace +|Select the workspace for access control grouping + +|Description +|Gateway for Cline VS Code extension clients +|=== + +. Click *Create* +. Copy the gateway endpoint URL from the gateway details page + +=== Configure LLM routing + +Set up routing policies for Cline requests. + +==== Basic routing with failover + +Configure a primary provider with automatic failover: + +. Navigate to the gateway's *LLM* tab +. Under *Routing*, click *Add route* +. Configure the route: ++ +[source,cel] +---- +true # Matches all requests +---- + +. Add a *Primary provider pool*: ++ +* Provider: Anthropic +* Model: All enabled Anthropic models +* Load balancing: Round robin (if multiple Anthropic configurations exist) + +. Add a *Fallback provider pool*: ++ +* Provider: OpenAI +* Model: All enabled OpenAI models +* Failover conditions: Rate limits, timeouts, 5xx errors + +. Click *Save* + +Cline requests route to Anthropic by default and fail over to OpenAI if Anthropic is unavailable. 
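
Before pointing Cline clients at the gateway, you can confirm this routing with a quick test request against the gateway's LLM endpoint, then check the gateway's request logs to see which provider pool served it. This is a minimal sketch using the OpenAI Python SDK, with the cluster ID and API token as placeholders:

[,python]
----
from openai import OpenAI

# Placeholder values: substitute your cluster ID and an API token
client = OpenAI(
    base_url="https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/v1",
    api_key="YOUR_API_TOKEN",
)

# A request for an Anthropic model should be served by the primary (Anthropic)
# pool; if Anthropic is unavailable, it should fail over to the OpenAI pool.
response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4.5",
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=5,
)
print(response.choices[0].message.content)
----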
+ +==== Workspace-based routing + +Route requests based on VS Code workspace or project context (if Cline passes workspace identifiers): + +[source,cel] +---- +request.headers["x-workspace-type"][0] == "production" +---- + +Create separate routes: + +* Production route: Claude Opus 4.6.5 (highest quality, critical code) +* Development route: Claude Sonnet 4.5 (balanced cost and quality) +* Experimental route: OpenAI GPT-5.2 (cost-effective testing) + +=== Apply rate limits + +Prevent runaway usage from autonomous Cline sessions: + +. Navigate to the gateway's *LLM* tab +. Under *Rate Limit*, configure: ++ +[cols="1,2"] +|=== +|Setting |Recommended Value + +|Global rate limit +|120 requests per minute + +|Per-user rate limit +|15 requests per minute (if using user headers) +|=== ++ +Cline can generate multiple requests during autonomous operations. Higher limits than typical interactive clients may be necessary. + +. Click *Save* + +The gateway blocks requests exceeding these limits and returns HTTP 429 errors. + +=== Set spending limits + +Control LLM costs during autonomous operations: + +. Under *Spend Limit*, configure: ++ +[cols="1,2"] +|=== +|Setting |Value + +|Monthly budget +|$8,000 (adjust based on expected autonomous usage) + +|Enforcement +|Block requests after budget exceeded +|=== ++ +Autonomous operations can consume significant tokens. Monitor spending patterns after deployment. + +. Click *Save* + +The gateway tracks estimated costs per request and blocks traffic when the monthly budget is exhausted. + +== Configure MCP tool aggregation + +Enable Cline to discover and use tools from multiple MCP servers through a single endpoint. + +=== Add MCP servers + +. Navigate to the gateway's *MCP* tab +. Click *Add MCP Server* +. Enter server details: ++ +[cols="1,2"] +|=== +|Field |Value + +|Display name +|Descriptive name (for example, `filesystem-tools`, `code-analysis-tools`) + +|Endpoint URL +|MCP server endpoint (for example, xref:ai-agents:mcp/remote/overview.adoc[Remote MCP server] URL) + +|Authentication +|Bearer token or other authentication mechanism +|=== + +. Click *Save* + +Repeat for each MCP server you want to aggregate. + +=== Enable deferred tool loading + +Reduce token costs for Cline sessions with many available tools: + +. Under *MCP Settings*, enable *Deferred tool loading* +. Click *Save* + +When enabled: + +* Cline initially receives only a search tool and orchestrator tool +* Cline queries for specific tools by name when needed +* Token usage decreases by 80-90% for configurations with many tools + +This is particularly important for Cline because autonomous operations can make many tool discovery calls. + +=== Add the MCP orchestrator + +The MCP orchestrator reduces multi-step autonomous workflows to single calls: + +. Under *MCP Settings*, enable *MCP Orchestrator* +. Configure: ++ +[cols="1,2"] +|=== +|Setting |Value + +|Orchestrator model +|Select a model with strong code generation capabilities (for example, `anthropic/claude-sonnet-4.5`) + +|Execution timeout +|45 seconds +|=== ++ +Longer timeout than typical interactive clients allows complex autonomous operations to complete. + +. Click *Save* + +Cline can now invoke the orchestrator tool to execute complex, multi-step operations in a single request, which is ideal for autonomous development workflows. + +== Configure authentication + +Cline clients authenticate using bearer tokens. + +=== Generate API tokens + +. Navigate to *Security* > *API Tokens* in the Redpanda Cloud console +. 
Click *Create Token* +. Enter token details: ++ +[cols="1,2"] +|=== +|Field |Value + +|Name +|`cline-access` + +|Scopes +|`ai-gateway:read`, `ai-gateway:write` + +|Expiration +|Set appropriate expiration based on security policies +|=== + +. Click *Create* +. Copy the token (it appears only once) + +Distribute this token to Cline users through secure channels. + +=== Token rotation + +Implement token rotation for security: + +. Create a new token before the existing token expires +. Distribute the new token to users +. Monitor usage of the old token in (observability dashboard) +. Revoke the old token after all users have migrated + +== Configure Cline clients + +Provide these instructions to users configuring Cline in VS Code. + +=== API provider configuration + +Users configure Cline's API provider and credentials through the Cline extension interface. + +IMPORTANT: API provider configuration (API keys, base URLs, custom headers) is managed via Cline's extension global state, not VS Code `settings.json`. These settings are stored in the extension's internal state and must be configured through the Cline UI. + +==== Configure via Cline UI + +. Open the Cline extension panel in VS Code +. Click the settings icon or gear menu +. Configure the API connection: ++ +* *API Provider*: Select "Custom" or "Anthropic" +* *API Base URL*: The gateway endpoint URL from the gateway details page +* *API Key*: The API token generated earlier + +Replace: + +* `YOUR_API_TOKEN`: The API token generated earlier + +=== MCP server configuration + +Configure Cline to connect to the aggregated MCP endpoint through the Cline UI or by editing `cline_mcp_settings.json`. + +==== Enable MCP mode + +. Open VS Code Settings (Cmd/Ctrl + ,) +. Search for "Cline > Mcp: Mode" +. Enable the MCP mode toggle + +==== Configure MCP server via Cline UI + +. Open the Cline extension panel in VS Code +. Navigate to MCP server settings +. Add the Redpanda AI Gateway MCP server with the connection details + +==== Configure via cline_mcp_settings.json + +Alternatively, edit `cline_mcp_settings.json` (located in the Cline extension storage directory): + +[source,json] +---- +{ + "mcpServers": { + "redpanda-ai-gateway": { + "type": "streamableHttp", + "url": "GATEWAY_MCP_ENDPOINT_URL", + "headers": { + "Authorization": "Bearer YOUR_API_TOKEN" + } + } + } +} +---- + +Replace: + +* `GATEWAY_MCP_ENDPOINT_URL`: The gateway MCP endpoint URL from the gateway details page +* `YOUR_API_TOKEN`: The API token generated earlier + +This configuration connects Cline to the aggregated MCP endpoint with authentication. + +=== Configuration scope + +Cline stores configuration in the extension's global state: + +* *API Provider settings*: Stored globally per VS Code instance, applies to all workspaces +* *MCP server settings*: Can be configured per workspace using `cline_mcp_settings.json` + +For project-specific MCP server configurations (for example, development vs production gateways), place `cline_mcp_settings.json` in the workspace directory and configure different MCP servers per project. + +== Monitor Cline usage + +Track Cline activity through gateway observability features. + +=== View request logs + +. Navigate to *AI Gateway* > *Observability* > *Logs* +. Filter by gateway ID: `cline-gateway` +. Review: ++ +* Request timestamps and duration +* Model used per request +* Token usage (prompt and completion tokens) +* Estimated cost per request +* HTTP status codes and errors + +Cline autonomous operations may generate request sequences. 
Look for patterns to identify long-running sessions. + +=== Analyze metrics + +. Navigate to *AI Gateway* > *Observability* > *Metrics* +. Select the Cline gateway +. Review: ++ +[cols="1,2"] +|=== +|Metric |Purpose + +|Request volume +|Identify autonomous session patterns and peak times + +|Token usage +|Track consumption trends from multi-step operations + +|Estimated spend +|Monitor costs against budget (autonomous operations can be expensive) + +|Latency (p50, p95, p99) +|Detect performance issues in autonomous workflows + +|Error rate +|Identify failing requests or misconfigured clients +|=== + + +=== Query logs via API + +Programmatically access logs for integration with monitoring systems: + +[source,bash] +---- +# Set REDPANDA_API_TOKEN environment variable before running +curl https://{CLUSTER_ID}.cloud.redpanda.com/api/ai-gateway/logs \ + -H "Authorization: Bearer ${REDPANDA_API_TOKEN}" \ + -H "Content-Type: application/json" \ + -d '{ + "gateway_id": "GATEWAY_ID", + "start_time": "2026-01-01T00:00:00Z", + "end_time": "2026-01-14T23:59:59Z", + "limit": 100 + }' +---- + +NOTE: Set the `REDPANDA_API_TOKEN` environment variable to your API token before running this command. + +== Security considerations + +Apply these security best practices for Cline deployments. + +=== Limit token scope + +Create tokens with minimal required scopes: + +* `ai-gateway:read`: Required for MCP tool discovery +* `ai-gateway:write`: Required for LLM requests and tool execution + +Avoid granting broader scopes like `admin` or `cluster:write`. + +Because Cline performs autonomous operations, limit what tools it can access through MCP server selection. + +=== Implement network restrictions + +If Cline clients connect from known networks (corporate VPN, office IP ranges), configure network policies: + +. Use cloud provider security groups to restrict access to AI Gateway endpoints +. Allowlist only the IP ranges where Cline clients operate +. Monitor for unauthorized access attempts in request logs + +=== Enforce token expiration + +Set short token lifetimes for high-security environments: + +* Development environments: 90 days +* Production environments: 30 days + +Automate token rotation to reduce manual overhead. + +=== Audit tool access + +Review which MCP tools Cline clients can access: + +. Periodically audit the MCP servers configured in the gateway +. Remove unused or deprecated MCP servers +. Monitor tool execution logs for unexpected autonomous behavior +. Consider creating separate gateways for different trust levels + +Because Cline operates autonomously, carefully control which tools it can invoke. + +=== Monitor autonomous operations + +Set up alerts for unusual patterns: + +* Request rate spikes (may indicate runaway autonomous loops) +* High error rates (may indicate tool compatibility issues) +* Unexpected tool invocations (may indicate misconfigured autonomous behavior) +* Budget consumption spikes (autonomous operations can be expensive) + +== Troubleshooting + +Common issues and solutions when configuring AI Gateway for Cline. + +=== Cline cannot connect to gateway + +Symptom: Connection errors when Cline tries to discover tools or send LLM requests. 
+ +Causes and solutions: + +* **Invalid gateway ID**: Verify the gateway endpoint URL matches the URL from the gateway details page in the console +* **Expired token**: Generate a new API token and update the Cline settings +* **Network connectivity**: Verify the cluster endpoint is accessible from the client network +* **Provider not enabled**: Ensure at least one LLM provider is enabled and has models in the catalog +* **VS Code settings not applied**: Reload VS Code window after changing settings (Cmd/Ctrl + Shift + P > "Reload Window") + +=== Tools not appearing in Cline + +Symptom: Cline does not discover MCP tools. + +Causes and solutions: + +* **MCP servers not configured**: Add MCP server endpoints in the gateway's MCP tab +* **Deferred loading enabled but search failing**: Check that the search tool is correctly configured +* **MCP server authentication failing**: Verify MCP server authentication credentials in the gateway configuration +* **Cline MCP configuration missing**: Ensure `cline.mcpServers` is configured in settings + +=== High costs or token usage + +Symptom: Token usage and costs exceed expectations. + +Causes and solutions: + +* **Deferred tool loading disabled**: Enable deferred tool loading to reduce tokens by 80-90% +* **Autonomous loops**: Monitor for repeated similar requests (may indicate autonomous operation stuck in a loop) +* **No rate limits**: Apply per-minute rate limits to prevent runaway autonomous usage +* **Missing spending limits**: Set monthly budget limits with blocking enforcement +* **Expensive models for autonomous work**: Route autonomous operations to cost-effective models (for example, Claude Sonnet instead of Opus) +* **Too many tools in context**: Reduce the number of aggregated MCP servers or enable deferred loading + +=== Requests failing with 429 errors + +Symptom: Cline receives HTTP 429 Too Many Requests errors. + +Causes and solutions: + +* **Rate limit exceeded**: Review and increase rate limits if autonomous usage is legitimate +* **Upstream provider rate limits**: Check if the upstream LLM provider is rate-limiting; configure failover pools +* **Budget exhausted**: Verify monthly spending limit has not been reached +* **Autonomous operation too aggressive**: Configure Cline to slow down request rate + +=== Autonomous operations timing out + +Symptom: Cline operations fail with timeout errors. + +Causes and solutions: + +* **MCP orchestrator timeout too short**: Increase orchestrator execution timeout to 60 seconds +* **Complex multi-step operations**: Break down tasks or use the orchestrator tool for better efficiency +* **Slow MCP server responses**: Check MCP server performance and consider caching + +== Next steps + +* xref:ai-agents:ai-gateway/cel-routing-cookbook.adoc[]: Implement advanced routing rules +* xref:ai-agents:mcp/remote/overview.adoc[]: Deploy Remote MCP servers for custom tools diff --git a/modules/ai-agents/partials/integrations/cline-user.adoc b/modules/ai-agents/partials/integrations/cline-user.adoc new file mode 100644 index 000000000..b03e31016 --- /dev/null +++ b/modules/ai-agents/partials/integrations/cline-user.adoc @@ -0,0 +1,734 @@ += Configure Cline with AI Gateway +:description: Configure Cline to use Redpanda AI Gateway for unified LLM access, MCP tool integration, and autonomous coding workflows. 
+:page-topic-type: how-to +:personas: ai_agent_developer, app_developer +:learning-objective-1: Configure Cline to connect to AI Gateway for LLM requests and MCP tools +:learning-objective-2: Set up autonomous mode with custom instructions and browser integration +:learning-objective-3: Verify Cline routes requests through the gateway and optimize for cost + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] + +After xref:ai-agents:ai-gateway/gateway-quickstart.adoc[configuring your AI Gateway], set up Cline (formerly Claude Dev) to route LLM requests and access MCP tools through the gateway's unified endpoints. + +After reading this page, you will be able to: + +* [ ] Configure Cline to connect to AI Gateway for LLM requests and MCP tools. +* [ ] Set up autonomous mode with custom instructions and browser integration. +* [ ] Verify Cline routes requests through the gateway and optimize for cost. + +== Prerequisites + +Before configuring Cline, ensure you have: + +* Cline VS Code extension installed (search for "Cline" in VS Code Extensions) +* An active Redpanda AI Gateway with: +** At least one LLM provider enabled (see xref:ai-agents:ai-gateway/gateway-quickstart.adoc#step-1-enable-a-provider[Enable a provider]) +** A gateway created and configured (see xref:ai-agents:ai-gateway/gateway-quickstart.adoc#step-3-create-a-gateway[Create a gateway]) +* Your AI Gateway credentials: +** Gateway endpoint URL, which includes the gateway ID (for example, `\https://ai.prd.cloud.redpanda.com/gateway/v1/chat/completions`) +** API key with access to the gateway + +== About Cline + +Cline is an autonomous AI coding agent for VS Code that can: + +* Read and edit files in your workspace +* Execute terminal commands +* Browse the web for documentation and research +* Create and manage complex multi-file changes +* Work autonomously with approval checkpoints + +By routing Cline through AI Gateway, you gain centralized observability, cost controls, and the ability to aggregate multiple MCP servers into a single interface. + +== Configuration overview + +Cline supports two connection types for AI Gateway: + +[cols="1,2,2"] +|=== +|Connection type |Use for |Configuration location + +|OpenAI-compatible API +|LLM requests (chat, code generation) +|Cline Settings → API Configuration + +|MCP servers +|Tool discovery and execution +|Cline Settings → MCP Servers +|=== + +Both can route through AI Gateway independently or together, depending on your needs. + +== Configure LLM routing through gateway + +Set up Cline to route all LLM requests through your AI Gateway instead of directly to providers. + +=== Open Cline settings + +. Open VS Code +. Open Command Palette (Cmd+Shift+P or Ctrl+Shift+P) +. Search for `Cline: Open Settings` +. Select `Cline: Open Settings` + +Alternatively, click the gear icon in the Cline sidebar panel. + +=== Configure API provider + +In the Cline settings interface: + +. Navigate to *API Configuration* section +. Select *API Provider*: `OpenAI Compatible` +. Set *Base URL*: Your gateway endpoint URL (for example, `\https://ai.prd.cloud.redpanda.com/gateway/v1/chat/completions`). The gateway ID is embedded in the URL path. +. Set *API Key*: Your Redpanda API key + +=== Select model + +In the *Model* dropdown, enter the model using the `vendor/model_id` format: + +* For Anthropic Claude: `anthropic/claude-sonnet-4.5` +* For OpenAI: `openai/gpt-5.2` +* For other providers: `{provider}/{model-name}` + +The gateway routes the request based on this format. 
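
To confirm the gateway accepts this format before wiring it into Cline, you can send a request directly to the gateway's OpenAI-compatible endpoint. This is a minimal sketch: the URL is the example endpoint from the prerequisites, and `YOUR_API_KEY` is a placeholder for your Redpanda API key.

[,bash]
----
# Quick check that the gateway routes a vendor/model_id request.
# Substitute your own gateway endpoint URL and API key.
curl https://ai.prd.cloud.redpanda.com/gateway/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "anthropic/claude-sonnet-4.5",
    "messages": [{"role": "user", "content": "Reply with OK"}]
  }'
----

A successful response confirms the model is enabled in the catalog and reachable through the gateway.
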
If you use a non-prefixed model name (for example, `claude-sonnet-4.5`), the gateway may not route correctly. + +=== Verify configuration + +. Click *Test Connection* in Cline settings +. Verify status shows "Connected" +. Send a test message in the Cline chat panel + +If the connection fails, see <>. + +== Configure MCP server integration + +Connect Cline to your AI Gateway's MCP endpoint to aggregate tools from multiple MCP servers. + +=== Add MCP server connection + +In the Cline settings interface: + +. Navigate to *MCP Servers* section +. Click *Add MCP Server* +. Configure the connection: ++ +[,json] +---- +{ + "name": "redpanda-ai-gateway", + "transport": "http", + "url": "/mcp", + "headers": { + "Authorization": "Bearer YOUR_API_KEY" + } +} +---- + +Replace placeholder values: + +* `` - Your gateway endpoint URL (the gateway ID is embedded in the URL path) +* `YOUR_API_KEY` - Your Redpanda API key + +=== Enable tool discovery + +After adding the MCP server: + +. Click *Refresh Tools* to discover available tools +. Verify that tools from your configured MCP servers appear in the tool list +. If using deferred tool loading, you'll see a search tool and MCP orchestrator tool instead of all tools upfront + +Tools are now available for Cline to use autonomously during coding sessions. + +=== Alternative: Manual configuration file + +For more control, edit the VS Code settings directly: + +. Open VS Code settings (Cmd+, or Ctrl+,) +. Search for `cline.mcpServers` +. Click *Edit in settings.json* +. Add the MCP server configuration: ++ +[,json] +---- +{ + "cline.mcpServers": [ + { + "name": "redpanda-ai-gateway", + "transport": "http", + "url": "/mcp", + "headers": { + "Authorization": "Bearer YOUR_API_KEY" + } + } + ] +} +---- + +Restart VS Code for changes to take effect. + +== Configure autonomous mode settings + +Optimize Cline's autonomous behavior when using AI Gateway. + +=== Set approval mode + +Control how often Cline requires your approval during autonomous tasks: + +[cols="1,2,2"] +|=== +|Mode |Behavior |Best for + +|*Always ask* +|Request approval for every action +|Testing, sensitive codebases, cost control + +|*Ask before terminal commands* +|Auto-approve file edits, ask for commands +|Trusted environments, faster iteration + +|*Autonomous* +|Complete tasks without interruption +|Well-scoped tasks, batch processing +|=== + +To set approval mode: + +. Open Cline settings +. Navigate to *Autonomous Mode* +. Select your preferred mode + +When using AI Gateway with spend limits, autonomous mode is safer because the gateway enforces budget controls even if Cline makes many requests. + +=== Configure custom instructions + +Add custom instructions to guide Cline's behavior and reduce token costs: + +. Open Cline settings +. Navigate to *Custom Instructions* +. Add instructions that reduce unnecessary requests: ++ +[,text] +---- +- Before making changes, analyze the codebase structure first +- Use existing code patterns instead of creating new ones +- Ask for clarification before large refactors +- Prefer small, incremental changes over complete rewrites +- Use MCP tools for research instead of multiple LLM calls +---- + +These instructions help Cline work more efficiently and reduce token usage. + +=== Enable browser integration + +Cline can use a browser to research documentation, which reduces the need for large context windows: + +. Open Cline settings +. Navigate to *Browser Integration* +. Enable *Allow Browser Access* +. 
Configure browser mode: +** *Headless* - Faster, lower resource usage +** *Visible* - See what Cline is browsing (useful for debugging) + +Browser integration is particularly useful with AI Gateway because: + +* Cline can look up current documentation instead of relying on outdated training data +* Reduces prompt token costs from pasting documentation into context +* Works with MCP tools that fetch web content + +== Verify configuration + +After configuring Cline, verify it connects correctly to your AI Gateway. + +=== Test LLM routing + +Send a test message in the Cline chat panel: + +. Open the Cline sidebar in VS Code +. Type a simple request: "Explain this file" (with a file open) +. Wait for response + +Then verify in the AI Gateway dashboard: + +. Open the Redpanda Cloud Console +. Navigate to your gateway's observability dashboard +. Filter by gateway ID +. Verify: +** Request appears in logs +** Model shows correct format (for example, `anthropic/claude-sonnet-4.5`) +** Token usage and cost are recorded + +If the request doesn't appear, see <>. + +=== Test MCP tool usage + +Verify Cline can discover and invoke MCP tools: + +. In the Cline chat, request a task that requires a tool +. For example: "Use the weather tool to check the forecast" +. Cline should: +** Discover the tool from the MCP server +** Invoke it with correct parameters +** Return the result + +Check the gateway dashboard for MCP tool invocation logs. + +=== Monitor token costs + +Track Cline's token usage to identify optimization opportunities: + +. Open the AI Gateway observability dashboard +. Filter by your gateway +. View metrics: +** Requests per hour +** Token usage per request (prompt + completion) +** Estimated cost per request + +High token costs may indicate: + +* Context windows that are too large (Cline includes many files unnecessarily) +* Repeated requests for the same information (use custom instructions to prevent this) +* Missing MCP tools that could replace multi-turn conversations + +== Advanced configuration + +=== Model selection strategies + +Different models have different cost and performance characteristics. Configure Cline to use the right model for each task: + +==== Strategy 1: Single high-quality model + +Use one premium model for all tasks. + +Configuration: + +* Model: `anthropic/claude-sonnet-4.5` +* Best for: Complex codebases, high-quality output requirements +* Cost: Higher, but consistent + +==== Strategy 2: Task-based model switching + +Use the gateway's CEL routing to automatically select models based on task complexity. + +Gateway configuration (set in AI Gateway UI): + +[,cel] +---- +// Route simple edits to cost-effective model +request.messages[0].content.contains("fix typo") || +request.messages[0].content.contains("rename") ? + "anthropic/claude-haiku" : + "anthropic/claude-sonnet-4.5" +---- + +This approach requires no changes to Cline configuration. The gateway makes routing decisions transparently. + +==== Strategy 3: Multiple Cline profiles + +Create separate VS Code workspace settings for different projects: + +.Project A (high complexity) +[,json] +---- +{ + "cline.apiProvider": "OpenAI Compatible", + "cline.baseURL": "", + "cline.model": "anthropic/claude-opus-4.6-5" +} +---- + +.Project B (simple tasks) +[,json] +---- +{ + "cline.apiProvider": "OpenAI Compatible", + "cline.baseURL": "", + "cline.model": "anthropic/claude-haiku" +} +---- + +=== Request timeout configuration + +For long-running tool executions or complex code generation: + +. Open VS Code settings +. 
Search for `cline.requestTimeout` +. Set timeout in milliseconds (default: 60000) ++ +[,json] +---- +{ + "cline.requestTimeout": 120000 +} +---- + +Increase this value if Cline times out during large refactoring tasks or when using slow MCP tools. + +=== Debug mode + +Enable debug logging to troubleshoot connection issues: + +. Open VS Code settings +. Search for `cline.debug` +. Enable debug mode: ++ +[,json] +---- +{ + "cline.debug": true +} +---- + +Debug logs appear in the VS Code Output panel: + +. Open Output panel (View → Output) +. Select "Cline" from the dropdown +. View HTTP request and response details + +Debug mode shows: + +* Full request and response payloads +* Gateway routing headers +* MCP tool discovery messages +* Error details + +=== Environment-based configuration + +Use different gateways for different environments without changing settings manually. + +IMPORTANT: VS Code's `.vscode/settings.json` does not natively support environment variable substitution with the `${VAR}` syntax shown below. You must either install an extension that provides variable substitution, replace the placeholders manually with actual values, or set environment variables before launching VS Code. + +Create workspace-specific configurations: + +.Development workspace (.vscode/settings.json) +[,json] +---- +{ + "cline.apiProvider": "OpenAI Compatible", + "cline.baseURL": "${GATEWAY_DEV_URL}" +} +---- + +.Production workspace (.vscode/settings.json) +[,json] +---- +{ + "cline.apiProvider": "OpenAI Compatible", + "cline.baseURL": "${GATEWAY_PROD_URL}" +} +---- + +Set environment variables before launching VS Code: + +[,bash] +---- +export GATEWAY_DEV_URL="" +export GATEWAY_PROD_URL="" +---- + +On Windows (PowerShell): + +[,powershell] +---- +$env:GATEWAY_DEV_URL = "" +$env:GATEWAY_PROD_URL = "" +---- + +[[troubleshooting]] +== Troubleshooting + +=== Cline shows "Connection failed" + +**Symptom**: Cline settings show connection failed, or requests return errors. + +**Causes and solutions**: + +. **Incorrect base URL** ++ +Verify your base URL does NOT include `/v1` or `/chat/completions`: ++ +[,text] +---- +# Correct + + +# Incorrect +/v1 +/chat/completions +---- ++ +Cline appends the correct path automatically. + +. **Authentication failure** ++ +Verify your API key is valid: ++ +[,bash] +---- +curl -H "Authorization: Bearer YOUR_API_KEY" \ + /v1/models +---- ++ +You should receive a list of available models. If you get `401 Unauthorized`, regenerate your API key in the Redpanda Cloud Console. + +. **Gateway endpoint URL mismatch** ++ +Check that the gateway endpoint URL in your Cline configuration matches your gateway exactly. Copy it directly from the AI Gateway UI. + +. **Network connectivity issues** ++ +Test basic connectivity: ++ +[,bash] +---- +curl -I +---- ++ +If this times out, check your network configuration, firewall rules, or VPN connection. + +=== MCP tools not appearing + +**Symptom**: Cline doesn't see tools from the MCP server, or tool discovery fails. + +**Causes and solutions**: + +. **MCP endpoint incorrect** ++ +Verify the MCP endpoint is correct. It should be `{gateway-url}/mcp`, not just `{gateway-url}`: ++ +[,text] +---- +# Correct +/mcp + +# Incorrect + +---- + +. **No MCP servers configured in gateway** ++ +Verify your gateway has at least one MCP server enabled in the AI Gateway UI. + +. **Deferred tool loading enabled** ++ +If deferred tool loading is enabled, you'll see only a search tool initially. This is expected behavior. 
Tools load on-demand when Cline needs them. + +. **MCP server unreachable** ++ +Test the MCP endpoint directly: ++ +[,bash] +---- +curl -H "Authorization: Bearer YOUR_API_KEY" \ + /mcp +---- ++ +You should receive a valid MCP protocol response listing available tools. + +=== Requests not appearing in gateway dashboard + +**Symptom**: Cline works, but you don't see requests in the AI Gateway observability dashboard. + +**Causes and solutions**: + +. **Wrong gateway configured** ++ +Verify that the gateway endpoint URL in your Cline configuration matches the gateway you're viewing in the dashboard. + +. **Using direct provider connection** ++ +If you configured Cline with a provider's API directly (not the gateway URL), requests won't route through the gateway. Verify the base URL is your gateway endpoint. + +. **Log ingestion delay** ++ +Gateway logs can take 5-10 seconds to appear in the dashboard. Wait briefly and refresh. + +. **Model name format error** ++ +Ensure requests use the `vendor/model_id` format (for example, `anthropic/claude-sonnet-4.5`), not just the model name (for example, `claude-sonnet-4.5`). Check the model field in Cline settings. + +=== High token costs + +**Symptom**: Cline uses more tokens than expected, resulting in high costs. + +**Causes and solutions**: + +. **Large context windows** ++ +Cline may be including too many files in the context. Solutions: ++ +* Use custom instructions to limit file inclusion +* Create a `.clineignore` file to exclude unnecessary files +* Break large tasks into smaller, focused subtasks + +. **Repeated requests** ++ +Cline may be making redundant requests for the same information. Solutions: ++ +* Add custom instructions to prevent repeated analysis +* Use MCP tools to fetch external information instead of asking the LLM +* Enable caching in the gateway (if available) + +. **Wrong model selected** ++ +You may be using a premium model for simple tasks. Solutions: ++ +* Switch to a cost-effective model (for example, `anthropic/claude-haiku`) ++ +* Use gateway CEL routing to automatically select models based on task complexity + +. **MCP tool overhead** ++ +If not using deferred tool loading, all tools load with every request. Solution: ++ +* Enable deferred tool loading in your AI Gateway configuration (see xref:ai-agents:ai-gateway/mcp-aggregation-guide.adoc[]) + +=== Cline hangs or times out + +**Symptom**: Cline stops responding or shows timeout errors. + +**Causes and solutions**: + +. **Request timeout too low** ++ +Increase the timeout in VS Code settings: ++ +[,json] +---- +{ + "cline.requestTimeout": 120000 +} +---- + +. **Long-running MCP tool** ++ +Some MCP tools take time to execute. Check the gateway observability dashboard to see if tool execution is slow. + +. **Gateway rate limiting** ++ +You may be hitting rate limits. Check the dashboard for rate limit metrics and increase limits if needed. + +. **Provider outage** ++ +Check the AI Gateway dashboard for provider status. If the primary provider is down, configure failover (see xref:ai-agents:ai-gateway/gateway-quickstart.adoc#configure-provider-pool-with-fallback[Configure failover]). + +=== Settings changes not taking effect + +**Symptom**: Changes to Cline settings or VS Code configuration don't apply. + +**Solutions**: + +. **Reload VS Code** ++ +Some settings require reloading: ++ +* Open Command Palette (Cmd+Shift+P or Ctrl+Shift+P) +* Search for `Developer: Reload Window` +* Select and confirm + +. 
**Workspace settings override** ++ +Check if workspace settings (`.vscode/settings.json`) override user settings. Workspace settings take precedence. + +. **Invalid JSON syntax** ++ +If editing `settings.json` manually, validate JSON syntax. VS Code shows syntax errors in the editor. + +== Cost optimization tips + +=== Use the right model for each task + +Match model selection to task complexity: + +[cols="1,2,1"] +|=== +|Task type |Recommended model |Reason + +|Simple edits (typos, renames) +|`anthropic/claude-haiku` +|Low cost, fast + +|Code review, analysis +|`anthropic/claude-sonnet-4.5` +|Balanced quality and cost + +|Complex refactors, architecture +|`anthropic/claude-sonnet-4.5` or `anthropic/claude-opus-4.6-5` +|High quality for critical work +|=== + +Configure CEL routing in the gateway to automate model selection. + +=== Reduce context window size + +Limit the number of files Cline includes in requests: + +. Create a `.clineignore` file in your workspace root: ++ +[,text] +---- +# Exclude build artifacts +dist/ +build/ +node_modules/ + +# Exclude test files when not testing +**/*.test.js +**/*.spec.ts + +# Exclude documentation +docs/ +*.md +---- + +. Use custom instructions to guide file selection: ++ +[,text] +---- +- Only include files directly related to the task +- Ask which files to include if unsure +- Exclude test files unless specifically working on tests +---- + +=== Use MCP tools instead of large prompts + +Replace long documentation pastes with MCP tools: + +Before (high token cost): + +* User pastes API documentation into Cline chat +* Cline uses documentation to write integration code +* Thousands of tokens used for documentation + +After (low token cost): + +* Configure an MCP tool that searches API documentation +* Cline queries the tool for specific information as needed +* Only relevant sections included in context + +See xref:ai-agents:ai-gateway/mcp-aggregation-guide.adoc[] for MCP tool configuration. + +=== Enable deferred tool loading + +If using multiple MCP servers, enable deferred tool loading in your gateway configuration to reduce token costs by 80-90%. + +This loads only essential tools initially. Cline queries for additional tools on-demand. + +=== Monitor and set spend limits + +Use AI Gateway spend limits to prevent runaway costs: + +. Navigate to your gateway in the Redpanda Cloud Console +. Set monthly spend limit (for example, $500/month) +. Configure alerts before reaching limit + +The gateway automatically blocks requests that would exceed the limit. + +== Next steps + +* xref:ai-agents:ai-gateway/mcp-aggregation-guide.adoc[]: Configure deferred tool loading to reduce token costs +* xref:ai-agents:ai-gateway/cel-routing-cookbook.adoc[]: Use CEL expressions to route Cline requests based on task complexity + +== Related pages + +* xref:ai-agents:ai-gateway/gateway-quickstart.adoc[]: Create and configure your AI Gateway +* xref:ai-agents:ai-gateway/gateway-architecture.adoc[]: Learn about AI Gateway architecture and benefits +* xref:ai-agents:ai-gateway/integrations/claude-code-user.adoc[]: Configure Claude Code with AI Gateway diff --git a/modules/ai-agents/partials/integrations/continue-admin.adoc b/modules/ai-agents/partials/integrations/continue-admin.adoc new file mode 100644 index 000000000..42139cdd2 --- /dev/null +++ b/modules/ai-agents/partials/integrations/continue-admin.adoc @@ -0,0 +1,741 @@ += Configure AI Gateway for Continue.dev +:description: Configure Redpanda AI Gateway to support Continue.dev clients. 
+:page-topic-type: how-to +:personas: platform_admin +:learning-objective-1: Configure AI Gateway endpoints for Continue.dev connectivity +:learning-objective-2: Set up multi-provider backends with native format routing +:learning-objective-3: Deploy MCP tool aggregation for Continue.dev tool discovery + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] + +Configure Redpanda AI Gateway to support Continue.dev clients accessing multiple LLM providers and MCP tools through flexible, native-format endpoints. + +After reading this page, you will be able to: + +* [ ] Configure AI Gateway endpoints for Continue.dev connectivity. +* [ ] Set up multi-provider backends with native format routing. +* [ ] Deploy MCP tool aggregation for Continue.dev tool discovery. + +== Prerequisites + +* AI Gateway deployed on a BYOC cluster running Redpanda version 25.3 or later +* Administrator access to the AI Gateway UI +* API keys for at least one LLM provider (Anthropic, OpenAI, or others) +* Understanding of xref:ai-agents:ai-gateway/gateway-architecture.adoc[AI Gateway concepts] + +== About Continue.dev + +Continue.dev is a highly configurable open-source AI coding assistant that integrates with VS Code and JetBrains IDEs. Unlike other AI assistants, Continue.dev uses native provider API formats rather than requiring transforms to a unified format. This architectural choice provides maximum flexibility but requires specific gateway configuration. + +Key characteristics: + +* Uses native provider formats (Anthropic format for Anthropic, OpenAI format for OpenAI) +* Supports multiple LLM providers simultaneously with per-provider configuration +* Custom API endpoints via `apiBase` configuration +* Custom headers via `requestOptions.headers` +* Built-in MCP support for tool discovery and execution +* Autocomplete, chat, and inline edit modes + +== Architecture overview + +Continue.dev connects to AI Gateway differently than unified-format clients: + +* Each provider requires a separate backend configured without format transforms +* LLM endpoint: `https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/v1/{provider}` (provider-specific paths) +* MCP endpoint: `https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/mcp` for tool discovery and execution + +The gateway handles: + +. Authentication via bearer tokens in the `Authorization` header +. Provider-specific request formats without transformation +. Model routing using provider-native model identifiers +. MCP server aggregation for multi-tool workflows +. Request logging and cost tracking per gateway + +== Enable LLM providers + +Continue.dev works with multiple providers. Enable the providers your users will access. + +=== Configure Anthropic + +To enable Anthropic with native format support: + +. Navigate to *AI Gateway* > *Providers* in the Redpanda Cloud console +. Select *Anthropic* from the provider list +. Click *Add configuration* +. Enter your Anthropic API key +. Under *Format*, select *Native Anthropic* (not OpenAI-compatible) +. Click *Save* + +The gateway now accepts Anthropic's native `/v1/messages` format. + +=== Configure OpenAI + +To enable OpenAI: + +. Navigate to *AI Gateway* > *Providers* +. Select *OpenAI* from the provider list +. Click *Add configuration* +. Enter your OpenAI API key +. Under *Format*, select *Native OpenAI* +. Click *Save* + +=== Configure additional providers + +Continue.dev supports many providers. For each provider: + +. Add the provider configuration in the gateway +. 
Ensure the format is set to the provider's native format +. Do not enable format transforms (Continue.dev handles format differences in its client code) + +Common additional providers: + +* Google Gemini (native Google format) +* Mistral AI (OpenAI-compatible format) +* Together AI (OpenAI-compatible format) +* Ollama (OpenAI-compatible format for local models) + +=== Enable models in the catalog + +After enabling providers, enable specific models: + +. Navigate to *AI Gateway* > *Models* +. Enable the models you want Continue.dev clients to access ++ +Common models for Continue.dev: ++ +* `claude-opus-4.6` (Anthropic, high quality) +* `claude-sonnet-4.5` (Anthropic, balanced) +* `gpt-5.2` (OpenAI, high quality) +* `gpt-5.2-mini` (OpenAI, fast autocomplete) +* `o1-mini` (OpenAI, reasoning) + +. Click *Save* + +Continue.dev uses provider-native model identifiers (for example, `claude-sonnet-4.5` not `anthropic/claude-sonnet-4.5`). + +== Create a gateway for Continue.dev clients + +Create a dedicated gateway to isolate Continue.dev traffic and apply specific policies. + +=== Gateway configuration + +. Navigate to *AI Gateway* > *Gateways* +. Click *Create Gateway* +. Enter gateway details: ++ +[cols="1,2"] +|=== +|Field |Value + +|Name +|`continue-gateway` (or your preferred name) + +|Workspace +|Select the workspace for access control grouping + +|Description +|Gateway for Continue.dev IDE clients +|=== + +. Click *Create* +. Copy the gateway endpoint URL from the gateway details page + +=== Configure provider-specific backends + +Continue.dev requires separate backend configurations for each provider because it uses native formats. + +==== Anthropic backend + +. Navigate to the gateway's *Backends* tab +. Click *Add Backend* +. Configure: ++ +[cols="1,2"] +|=== +|Field |Value + +|Backend name +|`anthropic-native` + +|Provider +|Anthropic + +|Format +|Native Anthropic (no transform) + +|Path +|`/v1/anthropic` + +|Enabled models +|All Anthropic models you enabled in the catalog +|=== + +. Click *Save* + +Continue.dev will send requests to `https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/v1/anthropic` using Anthropic's native format. + +==== OpenAI backend + +. Click *Add Backend* +. Configure: ++ +[cols="1,2"] +|=== +|Field |Value + +|Backend name +|`openai-native` + +|Provider +|OpenAI + +|Format +|Native OpenAI (no transform) + +|Path +|`/v1/openai` + +|Enabled models +|All OpenAI models you enabled in the catalog +|=== + +. Click *Save* + +Continue.dev will send requests to `https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/v1/openai` using OpenAI's native format. + +==== Additional provider backends + +Repeat the backend configuration process for each provider: + +* Google Gemini: `/v1/google`, native Google format +* Mistral: `/v1/mistral`, OpenAI-compatible format +* Ollama (if proxying local models): `/v1/ollama`, OpenAI-compatible format + +=== Configure LLM routing + +Set up routing policies for Continue.dev requests. + +==== Per-provider routing + +Configure routing rules that apply to each backend: + +. Navigate to the gateway's *Routing* tab +. For each backend, click *Add Route* +. Configure basic routing: ++ +[source,cel] +---- +true # Matches all requests to this backend +---- + +. Add a primary provider configuration with your Anthropic API key +. (Optional) Add a fallback configuration for redundancy if you have multiple API keys +. Click *Save* + +==== Provider failover + +For providers with multiple API keys, configure failover: + +. 
In the backend's routing configuration, add multiple provider configurations +. Set failover conditions: ++ +* Rate limits (HTTP 429) +* Timeouts (no response within 30 seconds) +* 5xx errors (provider unavailable) + +. Configure load balancing: Round robin across available keys +. Click *Save* + +Continue.dev requests automatically fail over to healthy API keys when the primary key experiences issues. + +=== Apply rate limits + +Prevent runaway usage from Continue.dev clients: + +. Navigate to the gateway's *Rate Limits* tab +. Configure global limits: ++ +[cols="1,2"] +|=== +|Setting |Recommended Value + +|Global rate limit +|200 requests per minute (Continue.dev autocomplete can generate many requests) + +|Per-user rate limit +|20 requests per minute (if using user identification headers) + +|Per-backend limits +|Vary by provider (autocomplete backends need higher limits) +|=== + +. Click *Save* + +The gateway blocks requests exceeding these limits and returns HTTP 429 errors. + +==== Rate limit considerations for autocomplete + +Continue.dev's autocomplete feature generates frequent, short requests. Configure higher rate limits for autocomplete-specific backends: + +* Autocomplete models (for example, `gpt-5.2-mini`): 100 requests per minute per user +* Chat models (for example, `claude-sonnet-4.5`): 20 requests per minute per user + +=== Set spending limits + +Control LLM costs across all providers: + +. Navigate to the gateway's *Spend Limits* tab +. Configure: ++ +[cols="1,2"] +|=== +|Setting |Value + +|Monthly budget +|$10,000 (adjust based on expected usage) + +|Enforcement +|Block requests after budget exceeded + +|Alert threshold +|80% of budget (sends notification) +|=== + +. Click *Save* + +The gateway tracks estimated costs per request across all providers and blocks traffic when the monthly budget is exhausted. + +== Configure MCP tool aggregation + +Enable Continue.dev to discover and use tools from multiple MCP servers through a single endpoint. + +=== Add MCP servers + +. Navigate to the gateway's *MCP* tab +. Click *Add MCP Server* +. Enter server details: ++ +[cols="1,2"] +|=== +|Field |Value + +|Display name +|Descriptive name (for example, `redpanda-data-catalog`, `code-search-tools`) + +|Endpoint URL +|MCP server endpoint (for example, xref:ai-agents:mcp/remote/overview.adoc[Remote MCP server] URL) + +|Authentication +|Bearer token or other authentication mechanism +|=== + +. Click *Save* + +Repeat for each MCP server you want to aggregate. + +=== Enable deferred tool loading + +Reduce token costs for Continue.dev sessions with many available tools: + +. Under *MCP Settings*, enable *Deferred tool loading* +. Click *Save* + +When enabled: + +* Continue.dev initially receives only a search tool and orchestrator tool +* Continue.dev queries for specific tools by name when needed +* Token usage decreases by 80-90% for configurations with many tools + +This is particularly important for Continue.dev because autocomplete and chat modes both use tool discovery. + +=== Add the MCP orchestrator + +The MCP orchestrator reduces multi-step workflows to single calls: + +. Under *MCP Settings*, enable *MCP Orchestrator* +. Configure: ++ +[cols="1,2"] +|=== +|Setting |Value + +|Orchestrator model +|Select a model with strong code generation capabilities (for example, `claude-sonnet-4.5`) + +|Execution timeout +|30 seconds + +|Backend +|Select the Anthropic backend (orchestrator works best with Claude models) +|=== + +. 
Click *Save* + +Continue.dev can now invoke the orchestrator tool to execute complex, multi-step operations in a single request. + +== Configure authentication + +Continue.dev clients authenticate using bearer tokens. + +=== Generate API tokens + +. Navigate to *Security* > *API Tokens* in the Redpanda Cloud console +. Click *Create Token* +. Enter token details: ++ +[cols="1,2"] +|=== +|Field |Value + +|Name +|`continue-access` + +|Scopes +|`ai-gateway:read`, `ai-gateway:write` + +|Expiration +|Set appropriate expiration based on security policies +|=== + +. Click *Create* +. Copy the token (it appears only once) + +Distribute this token to Continue.dev users through secure channels. + +=== Token rotation + +Implement token rotation for security: + +. Create a new token before the existing token expires +. Distribute the new token to users +. Monitor usage of the old token in (observability dashboard) +. Revoke the old token after all users have migrated + +== Configure Continue.dev clients + +Provide these instructions to users configuring Continue.dev in their IDE. + +=== Configuration file location + +Continue.dev supports both JSON and YAML configuration formats. This guide uses YAML (`config.yaml`) because it supports MCP server configuration and environment variable interpolation: + +* VS Code: `~/.continue/config.yaml` +* JetBrains: `~/.continue/config.yaml` + +NOTE: While `config.json` is still supported for basic LLM configuration, `config.yaml` is required for MCP server integration. + +=== Multi-provider configuration + +Users configure Continue.dev with separate provider entries for each backend: + +[source,yaml] +---- +models: + - title: Claude Sonnet (Redpanda) + provider: anthropic + model: claude-sonnet-4.5 + apiBase: https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/v1/anthropic + apiKey: YOUR_API_TOKEN + + - title: GPT-5.2 (Redpanda) + provider: openai + model: gpt-5.2 + apiBase: https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/v1/openai + apiKey: YOUR_API_TOKEN + + - title: GPT-5.2-mini (Autocomplete) + provider: openai + model: gpt-5.2-mini + apiBase: https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/v1/openai + apiKey: YOUR_API_TOKEN + +tabAutocompleteModel: + title: GPT-5.2-mini (Autocomplete) + provider: openai + model: gpt-5.2-mini + apiBase: https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/v1/openai + apiKey: YOUR_API_TOKEN +---- + +Replace: + +* `{CLUSTER_ID}`: Your Redpanda cluster ID +* `YOUR_API_TOKEN`: The API token generated earlier + +=== MCP server configuration + +Configure Continue.dev to connect to the aggregated MCP endpoint. + +==== Recommended: Directory-based configuration + +The preferred method is to create MCP server configuration files in the `~/.continue/mcpServers/` directory: + +. Create the directory: `mkdir -p ~/.continue/mcpServers` +. Create `~/.continue/mcpServers/redpanda-ai-gateway.yaml`: ++ +[source,yaml] +---- +transport: + type: streamable-http + url: https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/mcp + headers: + Authorization: Bearer YOUR_API_TOKEN +---- ++ +IMPORTANT: For production deployments, use environment variable interpolation with `${{ secrets.VARIABLE }}` syntax instead of hardcoding tokens. See xref:ai-agents:ai-gateway/integrations/continue-user.adoc#configure-env-vars[Configure with environment variables] in the user guide for details. + +Continue.dev automatically discovers MCP server configurations in this directory. 
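
If you follow that recommendation, the same file can reference Continue.dev secrets instead of a hardcoded token. The following sketch assumes the `${{ secrets.* }}` interpolation described in the user guide is also available in `~/.continue/mcpServers/` files, and that secrets named `REDPANDA_GATEWAY_URL` and `REDPANDA_API_KEY` have been defined in Continue.dev:

[source,yaml]
----
# ~/.continue/mcpServers/redpanda-ai-gateway.yaml
# Values are resolved from Continue.dev secrets rather than hardcoded.
transport:
  type: streamable-http
  url: ${{ secrets.REDPANDA_GATEWAY_URL }}/mcp
  headers:
    Authorization: Bearer ${{ secrets.REDPANDA_API_KEY }}
----
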
+ +==== Alternative: Inline configuration + +Alternatively, embed MCP server configuration in `~/.continue/config.yaml`: + +[source,yaml] +---- +mcpServers: + - transport: + type: streamable-http + url: https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/mcp + headers: + Authorization: Bearer YOUR_API_TOKEN +---- + +Replace: + +* `{CLUSTER_ID}`: Your Redpanda cluster ID +* `YOUR_API_TOKEN`: The API token generated earlier + +This configuration connects Continue.dev to the aggregated MCP endpoint with authentication headers. + +=== Model selection strategy + +Configure different models for different Continue.dev modes: + +[cols="1,2,1"] +|=== +|Mode |Recommended Model |Reason + +|Chat +|`claude-sonnet-4.5` or `gpt-5.2` +|High quality for complex questions + +|Autocomplete +|`gpt-5.2-mini` +|Fast, cost-effective for frequent requests + +|Inline edit +|`claude-sonnet-4.5` +|Balanced quality and speed for code modifications + +|Embeddings +|`text-embedding-3-small` +|Cost-effective for code search +|=== + +== Monitor Continue.dev usage + +Track Continue.dev activity through gateway observability features. + +=== View request logs + +. Navigate to *AI Gateway* > *Observability* > *Logs* +. Filter by gateway ID: `continue-gateway` +. Review: ++ +* Request timestamps and duration +* Backend and model used per request +* Token usage (prompt and completion tokens) +* Estimated cost per request +* HTTP status codes and errors + +Continue.dev generates different request patterns: + +* Autocomplete: Many short requests with low token counts +* Chat: Longer requests with context and multi-turn conversations +* Inline edit: Medium-length requests with code context + +=== Analyze metrics + +. Navigate to *AI Gateway* > *Observability* > *Metrics* +. Select the Continue.dev gateway +. Review: ++ +[cols="1,2"] +|=== +|Metric |Purpose + +|Request volume by backend +|Identify which providers are most used + +|Token usage by model +|Track consumption patterns (autocomplete vs chat) + +|Estimated spend by backend +|Monitor costs across providers + +|Latency (p50, p95, p99) by backend +|Detect provider-specific performance issues + +|Error rate by backend +|Identify failing providers or misconfigured backends +|=== + + +=== Query logs via API + +Programmatically access logs for integration with monitoring systems: + +[source,bash] +---- +curl https://{CLUSTER_ID}.cloud.redpanda.com/api/ai-gateway/logs \ + -H "Authorization: Bearer YOUR_API_TOKEN" \ + -H "Content-Type: application/json" \ + -d '{ + "gateway_id": "GATEWAY_ID", + "start_time": "2026-01-01T00:00:00Z", + "end_time": "2026-01-14T23:59:59Z", + "limit": 100 + }' +---- + +== Security considerations + +Apply these security best practices for Continue.dev deployments. + +=== Limit token scope + +Create tokens with minimal required scopes: + +* `ai-gateway:read`: Required for MCP tool discovery +* `ai-gateway:write`: Required for LLM requests and tool execution + +Avoid granting broader scopes like `admin` or `cluster:write`. + +=== Implement network restrictions + +If Continue.dev clients connect from known networks, configure network policies: + +. Use cloud provider security groups to restrict access to AI Gateway endpoints +. Allowlist only the IP ranges where Continue.dev clients operate +. 
Monitor for unauthorized access attempts in request logs + +=== Enforce token expiration + +Set short token lifetimes for high-security environments: + +* Development environments: 90 days +* Production environments: 30 days + +Automate token rotation to reduce manual overhead. + +=== Audit tool access + +Review which MCP tools Continue.dev clients can access: + +. Periodically audit the MCP servers configured in the gateway +. Remove unused or deprecated MCP servers +. Monitor tool execution logs for unexpected behavior + +=== Protect API keys in configuration + +Continue.dev stores the API token in plain text in `config.yaml`. Remind users to: + +* Never commit `config.yaml` to version control +* Use file system permissions to restrict access (for example, `chmod 600 ~/.continue/config.yaml`) +* Rotate tokens if they suspect compromise + +== Troubleshooting + +Common issues and solutions when configuring AI Gateway for Continue.dev. + +=== Continue.dev cannot connect to gateway + +Symptom: Connection errors when Continue.dev tries to discover tools or send LLM requests. + +Causes and solutions: + +* **Invalid gateway ID**: Verify the gateway endpoint URL matches the URL from the console +* **Expired token**: Generate a new API token and update the Continue.dev configuration +* **Wrong backend path**: Verify `apiBase` matches the backend path (for example, `/v1/anthropic` not `/v1`) +* **Network connectivity**: Verify the cluster endpoint is accessible from the client network +* **Provider not enabled**: Ensure at least one backend is configured with models enabled + +=== Model not found errors + +Symptom: Continue.dev shows "model not found" or similar errors. + +Causes and solutions: + +* **Model not enabled in catalog**: Enable the model in the gateway's model catalog +* **Model identifier mismatch**: Use provider-native names (for example, `claude-sonnet-4.5` not `anthropic/claude-sonnet-4.5`) +* **Wrong backend for model**: Verify the model is associated with the correct backend (Anthropic models with Anthropic backend) + +=== Format errors or unexpected responses + +Symptom: Responses are malformed or Continue.dev reports format errors. + +Causes and solutions: + +* **Transform enabled on backend**: Ensure backend format is set to native (no OpenAI-compatible transform for Anthropic) +* **Wrong provider for apiBase**: Verify Continue.dev's `provider` field matches the backend's provider +* **Headers not passed**: Confirm `requestOptions.headers` is correctly configured + +=== Autocomplete not working or slow + +Symptom: Autocomplete suggestions don't appear or are delayed. + +Causes and solutions: + +* **Wrong model for autocomplete**: Use a fast model like `gpt-5.2-mini` in `tabAutocompleteModel` +* **Rate limits too restrictive**: Increase rate limits for autocomplete backend +* **High backend latency**: Check backend metrics and consider provider failover +* **Token exhaustion**: Verify spending limits haven't been reached + +=== Tools not appearing in Continue.dev + +Symptom: Continue.dev does not discover MCP tools. 
+ +Causes and solutions: + +* **MCP configuration missing**: Ensure `mcpServers` is configured +* **MCP servers not configured in gateway**: Add MCP server endpoints in the gateway's MCP tab +* **Deferred loading enabled but search failing**: Check that the search tool is correctly configured +* **MCP server authentication failing**: Verify MCP server authentication credentials in the gateway configuration + +=== High costs or token usage + +Symptom: Token usage and costs exceed expectations. + +Causes and solutions: + +* **Autocomplete using expensive model**: Configure `tabAutocompleteModel` to use `gpt-5.2-mini` instead of larger models +* **Deferred tool loading disabled**: Enable deferred tool loading to reduce tokens by 80-90% +* **No rate limits**: Apply per-minute rate limits to prevent runaway usage +* **Missing spending limits**: Set monthly budget limits with blocking enforcement +* **Chat using wrong model**: Route chat requests to cost-effective models (for example, `claude-sonnet-4.5` instead of `claude-opus-4.6`) + +=== Requests failing with 429 errors + +Symptom: Continue.dev receives HTTP 429 Too Many Requests errors. + +Causes and solutions: + +* **Rate limit exceeded**: Review and increase rate limits if usage is legitimate (autocomplete needs higher limits) +* **Upstream provider rate limits**: Check if the upstream LLM provider is rate-limiting; configure failover to alternate API keys +* **Budget exhausted**: Verify monthly spending limit has not been reached + +=== Different results from different providers + +Symptom: Same prompt produces different results when switching providers. + +This is expected behavior, not a configuration issue: + +* Different models have different capabilities and response styles +* Continue.dev uses native formats, which may include provider-specific parameters +* Users should select the appropriate model for their task (quality vs speed vs cost) + +== Next steps + +* xref:ai-agents:ai-gateway/cel-routing-cookbook.adoc[]: Implement advanced routing rules +* xref:ai-agents:mcp/remote/overview.adoc[]: Deploy Remote MCP servers for custom tools diff --git a/modules/ai-agents/partials/integrations/continue-user.adoc b/modules/ai-agents/partials/integrations/continue-user.adoc new file mode 100644 index 000000000..a5ac19222 --- /dev/null +++ b/modules/ai-agents/partials/integrations/continue-user.adoc @@ -0,0 +1,851 @@ += Configure Continue.dev with AI Gateway +:description: Configure Continue.dev to use Redpanda AI Gateway for unified LLM access, MCP tool integration, and AI-assisted coding. +:page-topic-type: how-to +:personas: ai_agent_developer, app_developer +:learning-objective-1: Configure Continue.dev to connect to AI Gateway for chat and autocomplete +:learning-objective-2: Set up MCP server integration through AI Gateway +:learning-objective-3: Optimize Continue.dev settings for cost and performance + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] + +After xref:ai-agents:ai-gateway/gateway-quickstart.adoc[configuring your AI Gateway], set up Continue.dev to route LLM requests and access MCP tools through the gateway's unified endpoints. + +After reading this page, you will be able to: + +* [ ] Configure Continue.dev to connect to AI Gateway for chat and autocomplete. +* [ ] Set up MCP server integration through AI Gateway. +* [ ] Optimize Continue.dev settings for cost and performance. 
+ +== Prerequisites + +Before configuring Continue.dev, ensure you have: + +* Continue.dev extension installed in your code editor: +** VS Code: Search for "Continue" in Extensions +** JetBrains IDEs: Install from the JetBrains Marketplace +* An active Redpanda AI Gateway with: +** At least one LLM provider enabled (see xref:ai-agents:ai-gateway/gateway-quickstart.adoc#step-1-enable-a-provider[Enable a provider]) +** A gateway created and configured (see xref:ai-agents:ai-gateway/gateway-quickstart.adoc#step-3-create-a-gateway[Create a gateway]) +* Your AI Gateway credentials: +** Gateway endpoint URL (for example, `\https://gw.ai.panda.com`) +** API key with access to the gateway + +== About Continue.dev + +Continue.dev is an open-source AI coding assistant that integrates with VS Code and JetBrains IDEs. It provides: + +* Chat interface for code questions and generation +* Tab autocomplete powered by LLMs +* Codebase indexing for context-aware suggestions +* Slash commands for common workflows +* Extensible architecture with custom context providers + +By routing Continue.dev through AI Gateway, you gain centralized observability, cost controls, and the ability to aggregate multiple MCP servers into a single interface. + +== Configuration files + +Continue.dev supports two configuration file formats: + +* `config.json` (legacy format) +* `config.yaml` (recommended format) + +Both files are stored in the same location: + +* VS Code: `~/.continue/` +* JetBrains: `~/.continue/` + +Create the directory if it doesn't exist: + +[,bash] +---- +mkdir -p ~/.continue +---- + +=== Choose a configuration format + +[cols="1,2,2"] +|=== +|Format |Use when |Limitations + +|`config.json` +|You need basic LLM configuration without MCP servers +|Does not support MCP server configuration or environment variable interpolation + +|`config.yaml` +|You need MCP server integration or environment variable interpolation +|Requires Continue.dev version that supports YAML (recent versions) +|=== + +TIP: Use `config.yaml` for new setups to take advantage of MCP server integration and the `${{ secrets.* }}` environment variable syntax. + +== Basic configuration + +Create or edit `~/.continue/config.json` with the following structure to connect to AI Gateway: + +[,json] +---- +{ + "models": [ + { + "title": "Redpanda AI Gateway - Claude", + "provider": "anthropic", + "model": "claude-sonnet-4.5", + "apiKey": "YOUR_REDPANDA_API_KEY", + "apiBase": "" + } + ] +} +---- + +Replace placeholder values: + +* `YOUR_REDPANDA_API_KEY` - Your Redpanda API key + +The `provider` field tells Continue.dev which SDK to use (Anthropic format), while `apiBase` routes the request through your gateway. The gateway then forwards the request to the appropriate provider based on the model name. + +== Configure multiple models + +Continue.dev can switch between different models for different tasks. 
Configure multiple models to optimize for quality and cost: + +[,json] +---- +{ + "models": [ + { + "title": "Gateway - Claude Sonnet (default)", + "provider": "anthropic", + "model": "claude-sonnet-4.5", + "apiKey": "YOUR_REDPANDA_API_KEY", + "apiBase": "" + }, + { + "title": "Gateway - Claude Opus (complex tasks)", + "provider": "anthropic", + "model": "claude-opus-4.6", + "apiKey": "YOUR_REDPANDA_API_KEY", + "apiBase": "" + }, + { + "title": "Gateway - GPT-5.2", + "provider": "openai", + "model": "gpt-5.2", + "apiKey": "YOUR_REDPANDA_API_KEY", + "apiBase": "" + } + ] +} +---- + +Switch between models in Continue.dev's chat interface by clicking the model selector dropdown. + +== Configure tab autocomplete + +Continue.dev supports a separate model for tab autocomplete, which generates code suggestions as you type. Use a faster, cost-effective model for autocomplete: + +[,json] +---- +{ + "models": [ + { + "title": "Gateway - Claude Sonnet", + "provider": "anthropic", + "model": "claude-sonnet-4.5", + "apiKey": "YOUR_REDPANDA_API_KEY", + "apiBase": "" + } + ], + "tabAutocompleteModel": { + "title": "Gateway - Claude Haiku (autocomplete)", + "provider": "anthropic", + "model": "claude-haiku", + "apiKey": "YOUR_REDPANDA_API_KEY", + "apiBase": "" + } +} +---- + +This configuration uses Claude Sonnet for chat interactions and Claude Haiku for autocomplete. Haiku provides faster responses at lower cost, which is ideal for autocomplete where speed matters more than reasoning depth. + +== Configure with OpenAI provider format + +AI Gateway supports both native provider formats and OpenAI-compatible format. If you prefer using the OpenAI format for all models, configure Continue.dev with the `openai` provider: + +[,json] +---- +{ + "models": [ + { + "title": "Gateway - Claude Sonnet (OpenAI format)", + "provider": "openai", + "model": "anthropic/claude-sonnet-4.5", + "apiKey": "YOUR_REDPANDA_API_KEY", + "apiBase": "/v1" + }, + { + "title": "Gateway - GPT-5.2 (OpenAI format)", + "provider": "openai", + "model": "openai/gpt-5.2", + "apiKey": "YOUR_REDPANDA_API_KEY", + "apiBase": "/v1" + } + ] +} +---- + +When using OpenAI provider format: + +* Set `provider` to `"openai"` +* Add `/v1` to the `apiBase` URL +* Use the `vendor/model_id` format for model names (for example, `anthropic/claude-sonnet-4.5`) + +== Configure MCP server integration + +Connect Continue.dev to your AI Gateway's MCP endpoint to aggregate tools from multiple MCP servers. + +Add the MCP configuration to `config.yaml`: + +[,yaml] +---- +models: + - title: Gateway - Claude Sonnet + provider: anthropic + model: claude-sonnet-4.5 + apiKey: YOUR_REDPANDA_API_KEY + apiBase: + +mcpServers: + - transport: + type: streamable-http + url: /mcp + headers: + Authorization: Bearer YOUR_REDPANDA_API_KEY +---- + +After adding this configuration: + +. Restart Continue.dev (reload your editor window) +. Click the tools icon in the Continue.dev sidebar +. Verify that tools from your configured MCP servers appear + +If using deferred tool loading in your gateway, you'll see a search tool and MCP orchestrator tool instead of all tools upfront. + +[[configure-env-vars]] +== Configure with environment variables + +For sensitive credentials or multi-environment setups, use Continue.dev's secrets interpolation in `config.yaml`. + +IMPORTANT: Environment variable interpolation is only supported in `config.yaml` files. The `config.json` format does not support any form of variable substitution - all values must be hardcoded. 
+ +[,yaml] +---- +models: + - title: Gateway - Claude Sonnet + provider: anthropic + model: claude-sonnet-4.5 + apiKey: ${{ secrets.REDPANDA_API_KEY }} + apiBase: ${{ secrets.REDPANDA_GATEWAY_URL }} + +mcpServers: + - transport: + type: streamable-http + url: ${{ secrets.REDPANDA_GATEWAY_URL }}/mcp + headers: + Authorization: Bearer ${{ secrets.REDPANDA_API_KEY }} +---- + +IMPORTANT: Continue.dev uses the `${{ secrets.* }}` syntax for interpolation in `config.yaml`. Do not use the `${VAR}` shell syntax - Continue.dev treats it as a literal string rather than performing substitution. + +Set secrets in Continue.dev settings: + +. Open Continue.dev settings in your IDE +. Navigate to the "Secrets" section +. Add the following secrets: ++ +* `REDPANDA_GATEWAY_URL`: Your gateway endpoint URL +* `REDPANDA_API_KEY`: `your-api-key` + +== Project-level configuration + +Override global settings for specific projects by creating `.continuerc.json` in your project root: + +[,json] +---- +{ + "models": [ + { + "title": "Project Gateway - Claude Haiku", + "provider": "anthropic", + "model": "claude-haiku", + "apiKey": "your_project_api_key_here", + "apiBase": "" + } + ] +} +---- + +IMPORTANT: `.continuerc.json` does not support environment variable interpolation. You must hardcode values in this file. For dynamic configuration, use `~/.continue/config.yaml` with `${{ secrets.* }}` syntax (see <>) or create a `~/.continue/config.ts` file for programmatic environment access. + +Project-level configuration takes precedence over global configuration. Use this to: + +* Route different projects through different gateways +* Use cost-effective models for internal projects +* Use premium models for customer-facing projects +* Separate billing between projects + +== Verify configuration + +After configuring Continue.dev, verify it connects correctly to your AI Gateway. + +=== Test chat interface + +. Open Continue.dev sidebar in your editor +. Type a simple question: "What does this function do?" (with a file open) +. Wait for response + +Then verify in the AI Gateway dashboard: + +. Open the Redpanda Cloud Console +. Navigate to your gateway's observability dashboard +. Filter by gateway ID +. Verify: +** Request appears in logs +** Model shows correct format (for example, `claude-sonnet-4.5` for Anthropic native or `anthropic/claude-sonnet-4.5` for OpenAI format) +** Token usage and cost are recorded + +If the request doesn't appear, see <>. + +=== Test tab autocomplete + +. Open a code file in your editor +. Start typing a function or class definition +. Wait for autocomplete suggestions to appear + +Autocomplete requests also appear in the gateway dashboard, typically with: + +* Lower token counts than chat requests +* Higher request frequency +* The autocomplete model you configured + +=== Test MCP tool integration + +If you configured MCP servers: + +. Open Continue.dev chat +. Ask a question that requires a tool: "What's the weather forecast?" +. Continue.dev should: +** Discover the tool from the MCP server +** Invoke it with correct parameters +** Return the result + +Check the gateway dashboard for MCP tool invocation logs. 
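+
+If the dashboard stays empty, it can help to confirm the gateway endpoint and API key independently of Continue.dev. The following is a minimal sketch using the OpenAI-compatible path; the gateway URL is a placeholder for your own endpoint, and the model must be one you enabled in the gateway:
+
+[,bash]
+----
+# Minimal end-to-end check against the gateway's OpenAI-compatible endpoint.
+# Replace the URL and key with your own values.
+curl https://YOUR_GATEWAY_URL/v1/chat/completions \
+  -H "Authorization: Bearer YOUR_REDPANDA_API_KEY" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "model": "anthropic/claude-sonnet-4.5",
+    "messages": [{"role": "user", "content": "Reply with OK"}],
+    "max_tokens": 16
+  }'
+----
+
+A JSON completion response, plus a matching entry in the gateway logs a few seconds later, confirms the endpoint and key are working before you start debugging editor-side settings.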
+ +== Advanced configuration + +=== Custom request headers + +Add custom headers for request tracking or routing: + +[,json] +---- +{ + "models": [ + { + "title": "Gateway - Claude Sonnet", + "provider": "anthropic", + "model": "claude-sonnet-4.5", + "apiKey": "YOUR_REDPANDA_API_KEY", + "apiBase": "", + "requestOptions": { + "headers": { + "x-user-id": "developer-123", + "x-project": "main-app" + } + } + } + ] +} +---- + +Use these headers with gateway CEL routing to: + +* Track costs per developer +* Route based on project type +* Apply different rate limits per user + +=== Temperature and max tokens + +Configure model parameters for different behaviors: + +[,json] +---- +{ + "models": [ + { + "title": "Gateway - Precise (low temperature)", + "provider": "anthropic", + "model": "claude-sonnet-4.5", + "apiKey": "YOUR_REDPANDA_API_KEY", + "apiBase": "", + "completionOptions": { + "temperature": 0.2, + "maxTokens": 2048 + } + }, + { + "title": "Gateway - Creative (high temperature)", + "provider": "anthropic", + "model": "claude-sonnet-4.5", + "apiKey": "YOUR_REDPANDA_API_KEY", + "apiBase": "", + "completionOptions": { + "temperature": 0.8, + "maxTokens": 4096 + } + } + ] +} +---- + +* Lower temperature (0.0-0.3): More deterministic, better for code generation +* Higher temperature (0.7-1.0): More creative, better for brainstorming +* `maxTokens`: Limit response length to control costs + +=== Context providers + +Configure which code context Continue.dev includes in requests: + +[,json] +---- +{ + "models": [ + { + "title": "Gateway - Claude Sonnet", + "provider": "anthropic", + "model": "claude-sonnet-4.5", + "apiKey": "YOUR_REDPANDA_API_KEY", + "apiBase": "" + } + ], + "contextProviders": [ + { + "name": "code", + "params": { + "maxFiles": 5 + } + }, + { + "name": "diff" + }, + { + "name": "terminal" + } + ] +} +---- + +Available context providers: + +* `code`: Includes open files and highlighted code +* `diff`: Includes git diff of current changes +* `terminal`: Includes recent terminal output +* `problems`: Includes editor warnings and errors +* `folder`: Includes file tree structure + +Limiting context providers reduces token usage and costs. + +=== Slash commands + +Configure custom slash commands for common workflows: + +[,json] +---- +{ + "models": [ + { + "title": "Gateway - Claude Sonnet", + "provider": "anthropic", + "model": "claude-sonnet-4.5", + "apiKey": "YOUR_REDPANDA_API_KEY", + "apiBase": "" + } + ], + "slashCommands": [ + { + "name": "review", + "description": "Review code for bugs and improvements", + "prompt": "Review this code for potential bugs, performance issues, and suggest improvements. Focus on:\n- Error handling\n- Edge cases\n- Code clarity\n\n{{{ input }}}" + }, + { + "name": "test", + "description": "Generate unit tests", + "prompt": "Generate comprehensive unit tests for this code. Include:\n- Happy path tests\n- Edge case tests\n- Error handling tests\n\n{{{ input }}}" + } + ] +} +---- + +Use slash commands in Continue.dev chat: + +* `/review` - Triggers code review prompt +* `/test` - Generates tests + +Custom commands help standardize prompts across teams and reduce token costs by avoiding repetitive instruction typing. + +[[troubleshooting]] +== Troubleshooting + +=== Continue.dev shows connection error + +**Symptom**: Continue.dev displays "Failed to connect" or requests return errors. + +**Causes and solutions**: + +. 
**Incorrect apiBase URL** ++ +Verify the URL format matches your provider choice: ++ +[,text] +---- +# Anthropic/native format (no /v1) +"apiBase": "" + +# OpenAI format (with /v1) +"apiBase": "/v1" +---- + +. **Provider mismatch** ++ +Ensure the `provider` field matches the API format you're using: ++ +* Native Anthropic: `"provider": "anthropic"` with no `/v1` in URL +* Native OpenAI: `"provider": "openai"` with `/v1` in URL +* OpenAI-compatible: `"provider": "openai"` with `/v1` in URL + +. **Authentication failure** ++ +Verify your API key is valid: ++ +[,bash] +---- +curl -H "Authorization: Bearer YOUR_API_KEY" \ + /v1/models +---- ++ +You should receive a list of available models. If you get `401 Unauthorized`, regenerate your API key in the Redpanda Cloud Console. + +. **Invalid JSON syntax** ++ +Validate your `config.json` file: ++ +[,bash] +---- +python3 -m json.tool ~/.continue/config.json +---- ++ +Fix any syntax errors reported. + +=== Autocomplete not working + +**Symptom**: Tab autocomplete suggestions don't appear or are very slow. + +**Causes and solutions**: + +. **No autocomplete model configured** ++ +Verify `tabAutocompleteModel` is set in `config.json`. If missing, Continue.dev may fall back to chat model, which is slower and more expensive. + +. **Model too slow** ++ +Use a faster model for autocomplete: ++ +[,json] +---- +{ + "tabAutocompleteModel": { + "title": "Gateway - Claude Haiku", + "provider": "anthropic", + "model": "claude-haiku", + "apiKey": "YOUR_API_KEY", + "apiBase": "" + } +} +---- + +. **Network latency** ++ +Check gateway latency in the observability dashboard. If p95 latency is over 500ms, autocomplete will feel slow. Consider: ++ +* Using a gateway in a closer geographic region +* Switching to a faster model (Haiku over Sonnet) + +. **Autocomplete disabled** ++ +Check Continue.dev settings in your editor: ++ +* VS Code: Settings → Continue → Enable Tab Autocomplete +* JetBrains: Settings → Tools → Continue → Enable Autocomplete + +=== MCP tools not appearing + +**Symptom**: Continue.dev doesn't show tools from the MCP server. + +**Causes and solutions**: + +. **MCP configuration missing** ++ +Verify the `mcpServers` section exists in `config.yaml`. + +. **Incorrect MCP endpoint** ++ +The MCP URL should be `{gateway-url}/mcp`: ++ +[,text] +---- +# Correct +"url": "/mcp" + +# Incorrect +"url": "" +---- + +. **No MCP servers in gateway** ++ +Verify your gateway has at least one MCP server configured in the AI Gateway UI. + +. **Deferred tool loading enabled** ++ +If deferred tool loading is enabled, you'll see only a search tool initially. This is expected behavior. + +. **Editor restart needed** ++ +MCP configuration changes require reloading the editor window: ++ +* VS Code: Command Palette → Developer: Reload Window +* JetBrains: File → Invalidate Caches / Restart + +=== Requests not appearing in gateway dashboard + +**Symptom**: Continue.dev works, but requests don't appear in the AI Gateway observability dashboard. + +**Causes and solutions**: + +. **Wrong gateway endpoint** ++ +Verify that the `apiBase` URL matches the gateway endpoint you're viewing in the dashboard. + +. **Using direct provider connection** ++ +If `apiBase` points directly to a provider (for example, `https://api.anthropic.com`), requests won't route through the gateway. Verify it points to your gateway endpoint. + +. **Log ingestion delay** ++ +Gateway logs can take 5-10 seconds to appear in the dashboard. Wait briefly and refresh. 
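+
+To rule out a stale or overridden endpoint, you can print what Continue.dev is actually configured to use. This sketch assumes the legacy JSON config at its default path; adjust the paths if you use `config.yaml` or a project-level override:
+
+[,bash]
+----
+# Show the apiBase and model for each configured entry in the global config
+jq -r '.models[] | "\(.title): apiBase=\(.apiBase) model=\(.model)"' ~/.continue/config.json
+
+# Check for a project-level override in the current repository, if one exists
+[ -f .continuerc.json ] && jq -r '.models[] | "\(.title): apiBase=\(.apiBase)"' .continuerc.json
+----
+
+Any entry whose `apiBase` points directly at a provider, or at a different gateway than the dashboard you are viewing, explains the missing requests.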
+ +=== High token costs + +**Symptom**: Continue.dev uses more tokens than expected, resulting in high costs. + +**Causes and solutions**: + +. **Too much context included** ++ +Continue.dev may be including too many files. Solutions: ++ +* Limit `maxFiles` in context providers +* Use `.continueignore` file to exclude unnecessary directories +* Close unused editor tabs before using Continue.dev + +. **Autocomplete using expensive model** ++ +Verify you're using a cost-effective model for autocomplete: ++ +[,json] +---- +{ + "tabAutocompleteModel": { + "provider": "anthropic", + "model": "claude-haiku" + } +} +---- + +. **Model parameters too high** ++ +Reduce `maxTokens` in `completionOptions` to limit response length: ++ +[,json] +---- +{ + "completionOptions": { + "maxTokens": 2048 + } +} +---- + +. **MCP overhead** ++ +If not using deferred tool loading, all tools load with every request. Enable deferred tool loading in your AI Gateway configuration (see xref:ai-agents:ai-gateway/mcp-aggregation-guide.adoc[]). + +=== Configuration changes not taking effect + +**Symptom**: Changes to `config.json` don't apply. + +**Solutions**: + +. **Reload editor window** ++ +Configuration changes require reloading: ++ +* VS Code: Command Palette → Developer: Reload Window +* JetBrains: File → Invalidate Caches / Restart + +. **Invalid JSON syntax** ++ +Validate JSON syntax: ++ +[,bash] +---- +python3 -m json.tool ~/.continue/config.json +---- + +. **Project config overriding** ++ +Check if `.continuerc.json` in your project root overrides global settings. + +. **File permissions** ++ +Verify Continue.dev can read the config file: ++ +[,bash] +---- +ls -la ~/.continue/config.json +---- ++ +Fix permissions if needed: ++ +[,bash] +---- +chmod 600 ~/.continue/config.json +---- + +== Cost optimization tips + +=== Use different models for chat and autocomplete + +Chat interactions benefit from reasoning depth, while autocomplete needs speed: + +[,json] +---- +{ + "models": [ + { + "title": "Gateway - Claude Sonnet", + "provider": "anthropic", + "model": "claude-sonnet-4.5" + } + ], + "tabAutocompleteModel": { + "title": "Gateway - Claude Haiku", + "provider": "anthropic", + "model": "claude-haiku" + } +} +---- + +This can reduce costs by 5-10x for autocomplete while maintaining quality for chat. + +=== Limit context window size + +Reduce the amount of code included in requests: + +Create `.continueignore` in your project root: + +[,text] +---- +# Exclude build artifacts +dist/ +build/ +node_modules/ + +# Exclude tests when not working on tests +**/*.test.* +**/*.spec.* + +# Exclude documentation +docs/ +*.md + +# Exclude large data files +*.json +*.csv +---- + +Then limit files in `config.json`: + +[,json] +---- +{ + "contextProviders": [ + { + "name": "code", + "params": { + "maxFiles": 3 + } + } + ] +} +---- + +=== Use MCP tools for documentation + +Instead of pasting documentation into chat, create MCP tools that fetch relevant sections on-demand. This reduces token costs by including only needed information. + +=== Monitor usage patterns + +Use the AI Gateway dashboard to identify optimization opportunities: + +. Navigate to your gateway's observability dashboard +. Filter by Continue.dev requests (use custom header if configured) +. 
Analyze: +** Token usage per request type (chat vs autocomplete) +** Most expensive queries +** High-frequency low-value requests + +=== Set model-specific limits + +Prevent runaway costs by configuring `maxTokens`: + +[,json] +---- +{ + "models": [ + { + "title": "Gateway - Claude Sonnet", + "provider": "anthropic", + "model": "claude-sonnet-4.5", + "completionOptions": { + "maxTokens": 2048 + } + } + ], + "tabAutocompleteModel": { + "completionOptions": { + "maxTokens": 256 + } + } +} +---- + +Autocomplete rarely needs more than 256 tokens, while chat responses can vary. + +== Next steps + +* xref:ai-agents:ai-gateway/mcp-aggregation-guide.adoc[]: Configure deferred tool loading to reduce token costs +* xref:ai-agents:ai-gateway/cel-routing-cookbook.adoc[]: Use CEL expressions to route Continue.dev requests based on context + +== Related pages + +* xref:ai-agents:ai-gateway/gateway-quickstart.adoc[]: Create and configure your AI Gateway +* xref:ai-agents:ai-gateway/gateway-architecture.adoc[]: Learn about AI Gateway architecture and benefits +* xref:ai-agents:ai-gateway/integrations/claude-code-user.adoc[]: Configure Claude Code with AI Gateway +* xref:ai-agents:ai-gateway/integrations/cline-user.adoc[]: Configure Cline with AI Gateway diff --git a/modules/ai-agents/partials/integrations/cursor-admin.adoc b/modules/ai-agents/partials/integrations/cursor-admin.adoc new file mode 100644 index 000000000..5f9ad8ab1 --- /dev/null +++ b/modules/ai-agents/partials/integrations/cursor-admin.adoc @@ -0,0 +1,814 @@ += Configure AI Gateway for Cursor IDE +:description: Configure Redpanda AI Gateway to support Cursor IDE clients. +:page-topic-type: how-to +:personas: platform_admin +:learning-objective-1: Configure AI Gateway endpoints for Cursor IDE connectivity +:learning-objective-2: Set up OpenAI-compatible transforms for multi-provider routing +:learning-objective-3: Deploy multi-tenant authentication strategies for Cursor clients + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] + +Configure Redpanda AI Gateway to support Cursor IDE clients accessing multiple LLM providers and MCP tools through OpenAI-compatible endpoints. + +After reading this page, you will be able to: + +* [ ] Configure AI Gateway endpoints for Cursor IDE connectivity. +* [ ] Set up OpenAI-compatible transforms for multi-provider routing. +* [ ] Deploy multi-tenant authentication strategies for Cursor clients. + +== Prerequisites + +* AI Gateway deployed on a BYOC cluster running Redpanda version 25.3 or later +* Administrator access to the AI Gateway UI +* API keys for at least one LLM provider (Anthropic, OpenAI, or others) +* Understanding of xref:ai-agents:ai-gateway/gateway-architecture.adoc[AI Gateway concepts] + +== About Cursor IDE + +Cursor is an AI-powered code editor built on VS Code that integrates multiple LLM providers for code completion, chat, and inline editing. Unlike other AI assistants, Cursor uses OpenAI's API format for all providers and routes to different models using a `vendor/model` prefix notation. 
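+
+For example, a Cursor chat request arrives at the gateway as a standard OpenAI chat completion with the target provider encoded in the model name. The sketch below shows the shape of such a request against the gateway endpoint; the token and cluster ID are placeholders:
+
+[source,bash]
+----
+# Illustrative request shape only: OpenAI format, Anthropic model selected via the vendor prefix
+curl https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/v1/chat/completions \
+  -H "Authorization: Bearer YOUR_API_TOKEN" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "model": "anthropic/claude-sonnet-4.5",
+    "messages": [{"role": "user", "content": "Explain this stack trace"}]
+  }'
+----
+
+The gateway recognizes the `anthropic/` prefix, transforms the OpenAI-format body, and forwards the request to Anthropic. The routing and transform configuration in the rest of this guide sets up exactly that behavior.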
+ +Key characteristics: + +* Sends all requests in OpenAI-compatible format to `/v1/chat/completions` +* Routes using model prefixes (for example, `openai/gpt-5.2`, `anthropic/claude-sonnet-4.5`) +* Limited support for custom headers (makes multi-tenant deployments challenging) +* Supports MCP protocol with a 40-tool limit +* Built-in code completion and chat modes +* Configuration via settings file (`~/.cursor/config.json`) + +== Architecture overview + +Cursor IDE connects to AI Gateway through standardized endpoints: + +* LLM endpoint: `https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/v1/chat/completions` for all providers +* MCP endpoint: `https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/mcp` for tool discovery and execution + +The gateway handles: + +. Authentication via bearer tokens in the `Authorization` header +. Gateway selection via the endpoint URL +. Model routing using vendor prefixes (for example, `anthropic/claude-sonnet-4.5`) +. Format transforms from OpenAI format to provider-native formats (for Anthropic, Google, etc.) +. MCP server aggregation for multi-tool workflows +. Request logging and cost tracking per gateway + +== Enable LLM providers + +Cursor IDE works with multiple providers through OpenAI-compatible transforms. Enable the providers your users will access. + +=== Configure Anthropic with OpenAI-compatible format + +Cursor sends OpenAI-formatted requests but can route to Anthropic models. Configure the gateway to transform these requests: + +. Navigate to *AI Gateway* > *Providers* in the Redpanda Cloud console +. Select *Anthropic* from the provider list +. Click *Add configuration* +. Enter your Anthropic API key +. Under *Format*, select *OpenAI-compatible* (enables automatic transform) +. Click *Save* + +The gateway now transforms OpenAI-format requests to Anthropic's native `/v1/messages` format. + +=== Configure OpenAI + +To enable OpenAI as a provider: + +. Navigate to *AI Gateway* > *Providers* +. Select *OpenAI* from the provider list +. Click *Add configuration* +. Enter your OpenAI API key +. Under *Format*, select *Native OpenAI* +. Click *Save* + +=== Configure additional providers + +Cursor supports many providers through OpenAI-compatible transforms. For each provider: + +. Add the provider configuration in the gateway +. Set the format to *OpenAI-compatible* (the gateway handles format transformation) +. Enable the transform layer to convert OpenAI request format to the provider's native format + +Common additional providers: + +* Google Gemini (requires OpenAI-compatible transform) +* Mistral AI (already OpenAI-compatible format) +* Together AI (already OpenAI-compatible format) + +=== Enable models in the catalog + +After enabling providers, enable specific models: + +. Navigate to *AI Gateway* > *Models* +. Enable the models you want Cursor clients to access ++ +Common models for Cursor: ++ +* `anthropic/claude-opus-4.6-5` +* `anthropic/claude-sonnet-4.5` +* `openai/gpt-5.2` +* `openai/gpt-5.2-mini` +* `openai/o1-mini` + +. Click *Save* + +Cursor uses the `vendor/model_id` format in requests. The gateway maps these to provider endpoints and applies the appropriate format transforms. + +== Create a gateway for Cursor clients + +Create a dedicated gateway to isolate Cursor traffic and apply specific policies. + +=== Gateway configuration + +. Navigate to *AI Gateway* > *Gateways* +. Click *Create Gateway* +. 
Enter gateway details: ++ +[cols="1,2"] +|=== +|Field |Value + +|Name +|`cursor-gateway` (or your preferred name) + +|Workspace +|Select the workspace for access control grouping + +|Description +|Gateway for Cursor IDE clients +|=== + +. Click *Create* +. Copy the gateway ID from the gateway details page + +The gateway ID is embedded in the gateway endpoint URL. + +=== Configure unified LLM routing + +Cursor sends all requests to a single endpoint (`/v1/chat/completions`) and uses model prefixes for routing. Configure the gateway to route based on the requested model prefix. + +==== Model prefix routing + +Configure routing that inspects the model field to determine the target provider: + +. Navigate to the gateway's *LLM* tab +. Under *Routing*, click *Add route* +. Configure Anthropic routing: ++ +[source,cel] +---- +request.body.model.startsWith("anthropic/") +---- + +. Add a *Primary provider pool*: ++ +* Provider: Anthropic +* Model: All enabled Anthropic models +* Transform: OpenAI to Anthropic +* Load balancing: Round robin (if multiple Anthropic configurations exist) + +. Click *Save* +. Add another route for OpenAI: ++ +[source,cel] +---- +request.body.model.startsWith("openai/") +---- + +. Add a *Primary provider pool*: ++ +* Provider: OpenAI +* Model: All enabled OpenAI models +* Transform: None (already OpenAI format) + +. Click *Save* + +Cursor requests route to the appropriate provider based on the model prefix. + +==== Default routing with fallback + +Configure a catch-all route for requests without vendor prefixes: + +[source,cel] +---- +true # Matches all requests not matched by previous routes +---- + +Add a primary provider (for example, OpenAI) with fallback to Anthropic: + +* Primary: OpenAI (for requests with no prefix) +* Fallback: Anthropic (if OpenAI is unavailable) +* Failover conditions: Rate limits, timeouts, 5xx errors + +=== Apply rate limits + +Prevent runaway usage from Cursor clients: + +. Navigate to the gateway's *LLM* tab +. Under *Rate Limit*, configure: ++ +[cols="1,2"] +|=== +|Setting |Recommended Value + +|Global rate limit +|150 requests per minute + +|Per-user rate limit +|15 requests per minute (if using user identification workarounds) +|=== + +. Click *Save* + +The gateway blocks requests exceeding these limits and returns HTTP 429 errors. + +==== Rate limit considerations for code completion + +Cursor's code completion feature generates frequent requests. Consider separate rate limits for completion vs chat: + +* Completion models (for example, `openai/gpt-5.2-mini`): Higher rate limits +* Chat models (for example, `anthropic/claude-sonnet-4.5`): Standard rate limits + +Configure routing rules that apply different rate limits based on model selection. + +=== Set spending limits + +Control LLM costs across all providers: + +. Under *Spend Limit*, configure: ++ +[cols="1,2"] +|=== +|Setting |Value + +|Monthly budget +|$7,000 (adjust based on expected usage) + +|Enforcement +|Block requests after budget exceeded + +|Alert threshold +|80% of budget (sends notification) +|=== + +. Click *Save* + +The gateway tracks estimated costs per request across all providers and blocks traffic when the monthly budget is exhausted. + +== Configure MCP tool aggregation + +Enable Cursor to discover and use tools from multiple MCP servers through a single endpoint. Note that Cursor has a 40-tool limit, so carefully select which MCP servers to aggregate. + +=== Add MCP servers + +. Navigate to the gateway's *MCP* tab +. Click *Add MCP Server* +. 
Enter server details: ++ +[cols="1,2"] +|=== +|Field |Value + +|Display name +|Descriptive name (for example, `redpanda-data-tools`, `code-search-tools`) + +|Endpoint URL +|MCP server endpoint (for example, xref:ai-agents:mcp/remote/overview.adoc[Remote MCP server] URL) + +|Authentication +|Bearer token or other authentication mechanism +|=== + +. Click *Save* + +Repeat for each MCP server you want to aggregate, keeping in mind the 40-tool limit. + +=== Work within the 40-tool limit + +Cursor imposes a 40-tool limit on MCP integrations. To stay within this limit: + +* Aggregate only essential MCP servers +* Use deferred tool loading (see next section) +* Prioritize high-value tools over comprehensive tool sets +* Consider creating multiple gateways with different tool sets for different use cases + +Monitor the total tool count across all aggregated MCP servers: + +. Navigate to the gateway's *MCP* tab +. Review the *Total Tools* count displayed at the top +. If the count exceeds 40, remove low-priority MCP servers + +=== Enable deferred tool loading + +Reduce the effective tool count by deferring tool discovery: + +. Under *MCP Settings*, enable *Deferred tool loading* +. Click *Save* + +When enabled: + +* Cursor initially receives only a search tool and orchestrator tool (2 tools total) +* Cursor queries for specific tools by name when needed +* The underlying MCP servers can provide more than 40 tools, but only the search and orchestrator tools count against the limit +* Token usage decreases by 80-90% for configurations with many tools + +Deferred tool loading is the recommended approach for Cursor deployments with multiple MCP servers. + +=== Add the MCP orchestrator + +The MCP orchestrator reduces multi-step workflows to single calls: + +. Under *MCP Settings*, enable *MCP Orchestrator* +. Configure: ++ +[cols="1,2"] +|=== +|Setting |Value + +|Orchestrator model +|Select a model with strong code generation capabilities (for example, `anthropic/claude-sonnet-4.5`) + +|Execution timeout +|30 seconds + +|Backend +|Select the Anthropic backend (orchestrator works best with Claude models) +|=== + +. Click *Save* + +Cursor can now invoke the orchestrator tool to execute complex, multi-step operations in a single request. + +== Configure authentication + +Cursor clients authenticate using bearer tokens in the `Authorization` header. + +=== Generate API tokens + +. Navigate to *Security* > *API Tokens* in the Redpanda Cloud console +. Click *Create Token* +. Enter token details: ++ +[cols="1,2"] +|=== +|Field |Value + +|Name +|`cursor-access` + +|Scopes +|`ai-gateway:read`, `ai-gateway:write` + +|Expiration +|Set appropriate expiration based on security policies +|=== + +. Click *Create* +. Copy the token (it appears only once) + +Distribute this token to Cursor users through secure channels. + +=== Token rotation + +Implement token rotation for security: + +. Create a new token before the existing token expires +. Distribute the new token to users +. Monitor usage of the old token in (observability dashboard) +. Revoke the old token after all users have migrated + +== Multi-tenant deployment strategies + +For organizations with multiple teams, use one of these multi-tenant strategies. + +=== Strategy 1: Tenant-specific subdomains + +Configure different subdomains for each tenant or team: + +. 
Set up DNS records pointing to your AI Gateway cluster: ++ +* `team-alpha.aigateway.example.com` → Gateway ID: `alpha-cursor-gateway` +* `team-beta.aigateway.example.com` → Gateway ID: `beta-cursor-gateway` + +. Configure the gateway to extract tenant identity from the `Host` header: ++ +[source,cel] +---- +request.headers["host"][0].startsWith("team-alpha") +---- + +. Distribute tenant-specific URLs to each team +. Each team configures Cursor with their specific subdomain + +This approach works with standard Cursor configuration without requiring custom headers. + +**Configuration example for Team Alpha:** + +[source,json] +---- +{ + "apiProvider": "openai", + "apiBaseUrl": "https://team-alpha.aigateway.example.com/ai-gateway/v1", + "apiKey": "TEAM_ALPHA_TOKEN" +} +---- + +=== Strategy 2: Path-based routing + +Use URL path prefixes to identify tenants: + +. Configure gateway routing to extract tenant from the request path: ++ +[source,cel] +---- +request.path.startsWith("/ai-gateway/alpha/") +---- + +. Create routing rules that map path prefixes to specific gateways or policies +. Distribute tenant-specific base URLs + +**Configuration example for Team Alpha:** + +[source,json] +---- +{ + "apiProvider": "openai", + "apiBaseUrl": "https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/alpha/v1", + "apiKey": "TEAM_ALPHA_TOKEN" +} +---- + +This approach requires gateway-level path rewriting to remove the tenant prefix before forwarding to LLM providers. + +=== Strategy 3: Query parameter routing + +Embed tenant identity in query parameters: + +. Configure Cursor to append query parameters to the base URL: ++ +[source,json] +---- +{ + "apiProvider": "openai", + "apiBaseUrl": "https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/v1?tenant=alpha", + "apiKey": "TEAM_ALPHA_TOKEN" +} +---- + +. Configure gateway routing to extract tenant from query parameters: ++ +[source,cel] +---- +request.url.query["tenant"][0] == "alpha" +---- + +. Create routing rules and rate limits based on the tenant parameter + +This approach works with standard Cursor configuration but exposes tenant identity in URLs. + +=== Strategy 4: API token-based routing + +Use different API tokens to identify tenants: + +. Generate separate API tokens for each tenant +. Tag tokens with metadata indicating the tenant +. Configure gateway routing based on token identity: ++ +[source,cel] +---- +request.auth.metadata["tenant"] == "alpha" +---- + +. Apply tenant-specific routing, rate limits, and spending limits + +This approach is most transparent to users but requires gateway support for token metadata inspection. + +=== Choosing a multi-tenant strategy + +[cols="1,2,2,1"] +|=== +|Strategy |Pros |Cons |Best For + +|Subdomains +|Clean, standards-based, no URL modifications +|Requires DNS configuration, certificate management +|Organizations with infrastructure control + +|Path-based +|No DNS required, flexible routing +|Requires path rewriting, tenant exposed in logs +|Simpler deployments, testing environments + +|Query parameters +|No infrastructure changes +|Tenant exposed in URLs and logs, less clean +|Quick deployments, temporary solutions + +|Token-based +|Transparent to users, centralized control +|Requires advanced gateway features +|Large deployments, strong security requirements +|=== + +== Configure Cursor IDE clients + +Provide these instructions to users configuring Cursor IDE. 
+ +=== Configuration file location + +Cursor uses a JSON configuration file: + +* macOS: `~/.cursor/config.json` +* Linux: `~/.cursor/config.json` +* Windows: `%USERPROFILE%\.cursor\config.json` + +=== Basic configuration + +Users configure Cursor with the AI Gateway endpoint: + +[source,json] +---- +{ + "apiProvider": "openai", + "apiBaseUrl": "https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/v1", + "apiKey": "YOUR_API_TOKEN", + "models": { + "chat": "anthropic/claude-sonnet-4.5", + "completion": "openai/gpt-5.2-mini" + } +} +---- + +Replace: + +* `{CLUSTER_ID}`: Your Redpanda cluster ID +* `YOUR_API_TOKEN`: The API token generated earlier + +If using a multi-tenant strategy, adjust the `apiBaseUrl` according to your chosen approach (subdomain, path prefix, or query parameter). + +=== Model selection + +Configure different models for different Cursor modes: + +[cols="1,2,1"] +|=== +|Mode |Recommended Model |Reason + +|Chat +|`anthropic/claude-sonnet-4.5` or `openai/gpt-5.2` +|High quality for complex questions + +|Code completion +|`openai/gpt-5.2-mini` +|Fast, cost-effective for frequent requests + +|Inline edit +|`anthropic/claude-sonnet-4.5` +|Balanced quality and speed for code modifications +|=== + +=== MCP server configuration + +Configure Cursor to connect to the aggregated MCP endpoint: + +[source,json] +---- +{ + "experimental": { + "mcpServers": { + "redpanda-ai-gateway": { + "transport": "http", + "url": "https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/mcp", + "headers": { + "Authorization": "Bearer YOUR_API_TOKEN" + } + } + } + } +} +---- + +If using a multi-tenant strategy, ensure the MCP URL matches the tenant configuration. + +This configuration: + +* Connects Cursor to the aggregated MCP endpoint +* Routes LLM requests through the AI Gateway with OpenAI-compatible transforms +* Includes authentication headers + +== Monitor Cursor usage + +Track Cursor activity through gateway observability features. + +=== View request logs + +. Navigate to *AI Gateway* > *Observability* > *Logs* +. Filter by gateway ID: `cursor-gateway` +. Review: ++ +* Request timestamps and duration +* Model used per request (with vendor prefix) +* Token usage (prompt and completion tokens) +* Estimated cost per request +* HTTP status codes and errors +* Transform operations (OpenAI to provider-native format) + +Cursor generates different request patterns: + +* Code completion: Many short requests with low token counts +* Chat: Longer requests with context and multi-turn conversations +* Inline edit: Medium-length requests with code context + +=== Analyze metrics + +. Navigate to *AI Gateway* > *Observability* > *Metrics* +. Select the Cursor gateway +. 
Review: ++ +[cols="1,2"] +|=== +|Metric |Purpose + +|Request volume by provider +|Identify which providers are most used via model prefix routing + +|Token usage by model +|Track consumption patterns (completion vs chat) + +|Estimated spend by provider +|Monitor costs across providers with transforms + +|Latency (p50, p95, p99) +|Detect transform overhead and provider-specific performance issues + +|Error rate by provider +|Identify failing providers or transform issues + +|Transform success rate +|Monitor OpenAI-to-provider format conversion success +|=== + + +=== Query logs via API + +Programmatically access logs for integration with monitoring systems: + +[source,bash] +---- +curl https://{CLUSTER_ID}.cloud.redpanda.com/api/ai-gateway/logs \ + -H "Authorization: Bearer YOUR_API_TOKEN" \ + -H "Content-Type: application/json" \ + -d '{ + "gateway_id": "GATEWAY_ID", + "start_time": "2026-01-01T00:00:00Z", + "end_time": "2026-01-14T23:59:59Z", + "limit": 100 + }' +---- + +== Security considerations + +Apply these security best practices for Cursor deployments. + +=== Limit token scope + +Create tokens with minimal required scopes: + +* `ai-gateway:read`: Required for MCP tool discovery +* `ai-gateway:write`: Required for LLM requests and tool execution + +Avoid granting broader scopes like `admin` or `cluster:write`. + +=== Implement network restrictions + +If Cursor clients connect from known networks, configure network policies: + +. Use cloud provider security groups to restrict access to AI Gateway endpoints +. Allowlist only the IP ranges where Cursor clients operate +. Monitor for unauthorized access attempts in request logs + +=== Enforce token expiration + +Set short token lifetimes for high-security environments: + +* Development environments: 90 days +* Production environments: 30 days + +Automate token rotation to reduce manual overhead. + +=== Audit tool access + +Review which MCP tools Cursor clients can access: + +. Periodically audit the MCP servers configured in the gateway +. Remove unused or deprecated MCP servers +. Monitor tool execution logs for unexpected behavior +. Ensure total tool count stays within Cursor's 40-tool limit + +=== Protect API keys in configuration + +Cursor stores the API token in plain text in `config.json`. Remind users to: + +* Never commit `config.json` to version control +* Use file system permissions to restrict access (for example, `chmod 600 ~/.cursor/config.json` on Unix-like systems) +* Rotate tokens if they suspect compromise +* Consider using environment variables for API keys (if Cursor supports this) + +=== Monitor transform operations + +Because Cursor requires OpenAI-compatible transforms for non-OpenAI providers: + +. Review transform success rates in metrics +. Monitor for transform failures that may leak request details +. Test transforms thoroughly before production deployment +. Keep transform logic updated as provider APIs evolve + +== Troubleshooting + +Common issues and solutions when configuring AI Gateway for Cursor. + +=== Cursor cannot connect to gateway + +Symptom: Connection errors when Cursor tries to discover tools or send LLM requests. 
+ +Causes and solutions: + +* **Invalid base URL**: Verify `apiBaseUrl` matches the gateway endpoint (including multi-tenant prefix if applicable) +* **Expired token**: Generate a new API token and update the Cursor configuration +* **Network connectivity**: Verify the cluster endpoint is accessible from the client network +* **Provider not enabled**: Ensure at least one provider is enabled and has models in the catalog +* **Wrong gateway endpoint**: Verify the gateway endpoint URL is correct + +=== Model not found errors + +Symptom: Cursor shows "model not found" or similar errors. + +Causes and solutions: + +* **Model not enabled in catalog**: Enable the model in the gateway's model catalog +* **Incorrect model prefix**: Use the correct vendor prefix (for example, `anthropic/claude-sonnet-4.5` not just `claude-sonnet-4.5`) +* **Transform not configured**: Verify OpenAI-compatible transform is enabled for non-OpenAI providers +* **Routing rule mismatch**: Check that routing rules correctly match the model prefix + +=== Transform errors or unexpected responses + +Symptom: Responses are malformed or Cursor reports format errors. + +Causes and solutions: + +* **Transform disabled**: Ensure OpenAI-compatible transform is enabled for Anthropic and other non-OpenAI providers +* **Transform version mismatch**: Verify the transform is compatible with the current provider API version +* **Model-specific transform issues**: Some models may require specific transform configurations +* **Check transform logs**: Review logs for transform errors and stack traces + +=== Tools not appearing in Cursor + +Symptom: Cursor does not discover MCP tools. + +Causes and solutions: + +* **MCP configuration missing**: Ensure `experimental.mcpServers` is configured in Cursor settings +* **MCP servers not configured in gateway**: Add MCP server endpoints in the gateway's MCP tab +* **Exceeds 40-tool limit**: Reduce the number of aggregated tools or enable deferred tool loading +* **Deferred loading enabled but search failing**: Check that the search tool is correctly configured +* **MCP server authentication failing**: Verify MCP server authentication credentials in the gateway configuration + +=== High costs or token usage + +Symptom: Token usage and costs exceed expectations. + +Causes and solutions: + +* **Code completion using expensive model**: Configure completion mode to use `openai/gpt-5.2-mini` instead of larger models +* **Deferred tool loading disabled**: Enable deferred tool loading to reduce tokens by 80-90% +* **No rate limits**: Apply per-minute rate limits to prevent runaway usage +* **Missing spending limits**: Set monthly budget limits with blocking enforcement +* **Chat using wrong model**: Route chat requests to cost-effective models (for example, `anthropic/claude-sonnet-4.5` instead of `anthropic/claude-opus-4.6-5`) +* **Transform overhead**: Monitor if transforms add significant token overhead + +=== Requests failing with 429 errors + +Symptom: Cursor receives HTTP 429 Too Many Requests errors. 
+ +Causes and solutions: + +* **Rate limit exceeded**: Review and increase rate limits if usage is legitimate (code completion needs higher limits) +* **Upstream provider rate limits**: Check if the upstream LLM provider is rate-limiting; configure failover to alternate providers +* **Budget exhausted**: Verify monthly spending limit has not been reached +* **Per-user limits too restrictive**: Adjust per-user rate limits if using multi-tenant strategies + +=== Multi-tenant routing failures + +Symptom: Requests route to wrong gateway or fail authorization. + +Causes and solutions: + +* **Subdomain not configured**: Verify DNS records and SSL certificates for tenant-specific subdomains +* **Path prefix mismatch**: Check that path-based routing rules correctly extract tenant identity +* **Query parameter missing**: Ensure query parameter is appended to all requests +* **Token metadata incorrect**: Verify token is tagged with correct tenant metadata +* **Routing rule conflicts**: Check for overlapping routing rules that may cause unexpected routing + +== Next steps + +* xref:ai-agents:ai-gateway/cel-routing-cookbook.adoc[]: Implement advanced routing rules for model prefix routing +* xref:ai-agents:mcp/remote/overview.adoc[]: Deploy Remote MCP servers for custom tools diff --git a/modules/ai-agents/partials/integrations/cursor-user.adoc b/modules/ai-agents/partials/integrations/cursor-user.adoc new file mode 100644 index 000000000..b3b76fb30 --- /dev/null +++ b/modules/ai-agents/partials/integrations/cursor-user.adoc @@ -0,0 +1,819 @@ += Configure Cursor IDE with AI Gateway +:description: Configure Cursor IDE to use Redpanda AI Gateway for unified LLM access, MCP tool integration, and AI-assisted coding. +:page-topic-type: how-to +:personas: ai_agent_developer, app_developer +:learning-objective-1: Configure Cursor IDE to route LLM requests through AI Gateway +:learning-objective-2: Set up MCP server integration for tool access through the gateway +:learning-objective-3: Optimize Cursor settings for multi-tenancy and cost control + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] + +After xref:ai-agents:ai-gateway/gateway-quickstart.adoc[configuring your AI Gateway], set up Cursor IDE to route LLM requests and access MCP tools through the gateway's unified endpoints. + +After reading this page, you will be able to: + +* [ ] Configure Cursor IDE to route LLM requests through AI Gateway. +* [ ] Set up MCP server integration for tool access through the gateway. +* [ ] Optimize Cursor settings for multi-tenancy and cost control. 
+ +== Prerequisites + +Before configuring Cursor IDE, ensure you have: + +* Cursor IDE installed (download from https://cursor.sh[cursor.sh^]) +* An active Redpanda AI Gateway with: +** At least one LLM provider enabled (see xref:ai-agents:ai-gateway/gateway-quickstart.adoc#step-1-enable-a-provider[Enable a provider]) +** A gateway created and configured (see xref:ai-agents:ai-gateway/gateway-quickstart.adoc#step-3-create-a-gateway[Create a gateway]) +* Your AI Gateway credentials: +** Gateway endpoint URL (for example, `\https://gw.ai.panda.com/v1/gateways/gateway-abc123`) +** API key with access to the gateway + +== About Cursor IDE + +Cursor IDE is an AI-powered code editor built on VS Code that provides: + +* Chat interface for code questions and generation +* AI-powered autocomplete with context awareness +* Codebase indexing for semantic search +* Inline code editing with AI assistance +* Terminal integration for command suggestions +* Native integration with multiple LLM providers + +By routing Cursor through AI Gateway, you gain centralized observability, cost controls, provider flexibility, and the ability to aggregate multiple MCP servers into a single interface. + +== Configuration methods + +Cursor IDE supports two configuration approaches for connecting to AI Gateway: + +[cols="1,2,2"] +|=== +|Method |Best for |Trade-offs + +|Settings UI +|Visual configuration, quick setup +|Limited to single provider configuration + +|Configuration file +|Multiple providers, environment-specific settings, version control +|Manual file editing required +|=== + +Choose the method that matches your workflow. The Settings UI is faster for getting started, while the configuration file provides more flexibility for production use. + +== Configure using Settings UI + +The Settings UI provides a visual interface for configuring Cursor's AI providers. + +=== Configure AI provider + +. Open Cursor Settings: +** macOS: *Cursor* > *Settings* or `Cmd+,` +** Windows/Linux: *File* > *Preferences* > *Settings* or `Ctrl+,` +. Navigate to *Features* > *AI* +. Under *OpenAI API*, configure the base URL and API key: + +[source,text] +---- +Override OpenAI Base URL: +Override OpenAI API Key: YOUR_REDPANDA_API_KEY +---- + +Replace placeholder values: + +* `` - Your gateway endpoint URL from the AI Gateway UI (includes gateway ID in the path) +* `YOUR_REDPANDA_API_KEY` - Your Redpanda API key + +=== Select models + +In the AI settings, configure which models to use: + +. Under *Model Selection*, choose your preferred model from the dropdown +. Cursor will automatically use the gateway endpoint configured above +. Models available depend on what you've enabled in your AI Gateway + +Model selection options: + +* `gpt-5.2` - Routes to OpenAI GPT-5.2 through your gateway +* `gpt-5.2-mini` - Routes to OpenAI GPT-5.2-mini (cost-effective) +* `claude-sonnet-4.5` - Routes to Anthropic Claude Sonnet (if enabled in gateway) +* `claude-opus-4.6` - Routes to Anthropic Claude Opus (if enabled in gateway) + +Note: When routing through AI Gateway, Cursor uses the OpenAI SDK format. The gateway automatically translates requests to the appropriate provider based on the model name. + +== Configure using configuration file + +For more control over provider settings, multi-environment configurations, or version control, edit Cursor's configuration file directly. 
+ +=== Locate configuration file + +Cursor stores configuration in `settings.json`: + +* macOS: `~/Library/Application Support/Cursor/User/settings.json` +* Windows: `%APPDATA%\Cursor\User\settings.json` +* Linux: `~/.config/Cursor/User/settings.json` + +Create the directory structure if it doesn't exist: + +[,bash] +---- +# macOS +mkdir -p ~/Library/Application\ Support/Cursor/User + +# Linux +mkdir -p ~/.config/Cursor/User +---- + +=== Basic configuration + +Create or edit `settings.json` with the following structure: + +[,json] +---- +{ + "cursor.overrideOpenAIBaseUrl": "", + "cursor.overrideOpenAIApiKey": "YOUR_REDPANDA_API_KEY", + "cursor.cpp.defaultModel": "gpt-5.2", + "cursor.chat.defaultModel": "gpt-5.2" +} +---- + +Replace placeholder values: + +* `` - Your gateway endpoint URL from the AI Gateway UI +* `YOUR_REDPANDA_API_KEY` - Your Redpanda API key + +Configuration fields: + +* `cursor.overrideOpenAIBaseUrl` - Gateway endpoint URL (includes gateway ID in the path) +* `cursor.overrideOpenAIApiKey` - Your Redpanda API key (used for authentication) +* `cursor.cpp.defaultModel` - Model for autocomplete (c++ refers to copilot++) +* `cursor.chat.defaultModel` - Model for chat interactions + +=== Multiple environment configuration + +To switch between development and production gateways, use workspace-specific settings. + +Create `.vscode/settings.json` in your project root: + +[,json] +---- +{ + "cursor.overrideOpenAIBaseUrl": "", + "openai.additionalHeaders": { + "x-environment": "staging" + } +} +---- + +Workspace settings override global settings. Use this to: + +* Route different projects through different gateways +* Use cost-effective models for internal projects +* Use premium models for customer-facing projects +* Add project-specific tracking headers + +=== Configuration with environment variables + +For sensitive credentials, avoid hardcoding values in `settings.json`. + +IMPORTANT: VS Code `settings.json` does not support `${VAR}` interpolation - such placeholders will be treated as literal strings. To use environment variables, generate the settings file dynamically with a script. + +==== Option 1: Generate settings.json with a script + +Create a setup script that reads environment variables and writes the actual values to `settings.json`: + +[,bash] +---- +#!/bin/bash +# setup-cursor-config.sh + +# Set your credentials +export REDPANDA_GATEWAY_ENDPOINT="https://gw.ai.panda.com/v1/gateways/gateway-abc123" +export REDPANDA_API_KEY="your-api-key" + +# Generate settings.json +cat > ~/.cursor/settings.json <", + "cursor.overrideOpenAIApiKey": "YOUR_REDPANDA_API_KEY", + "cursor.mcp": { + "servers": { + "redpanda-ai-gateway": { + "command": "node", + "args": [ + "-e", + "require('https').request({hostname:'',path:'//mcp',method:'GET',headers:{'Authorization':'Bearer YOUR_REDPANDA_API_KEY'}}).end()" + ] + } + } + } +} +---- + +This configuration uses Node.js to make HTTPS requests to the gateway's MCP endpoint. The gateway returns tool definitions that Cursor can use. + +Replace placeholder values: + +* `` - Your gateway endpoint URL from the AI Gateway UI +* `` - The hostname portion of your gateway endpoint (for example, `gw.ai.panda.com`) +* `` - The path portion of your gateway endpoint (for example, `v1/gateways/gateway-abc123`) +* `YOUR_REDPANDA_API_KEY` - Your Redpanda API key + +=== Enable deferred tool loading + +To work within Cursor's 40-tool limit, configure deferred tool loading in your AI Gateway: + +. Navigate to your gateway configuration in the AI Gateway UI +. 
Under *MCP Settings*, enable *Deferred Tool Loading* +. Save the gateway configuration + +When deferred loading is enabled: + +* Cursor receives only the search tool and orchestrator tool initially (2 tools total) +* When you ask Cursor to perform a task requiring a specific tool, it queries the gateway +* The gateway returns only the relevant tool definitions +* Total tool count stays well under the 40-tool limit + +== Verify configuration + +After configuring Cursor IDE, verify it connects correctly to your AI Gateway. + +=== Test chat interface + +. Open Cursor IDE +. Press `Cmd+L` (macOS) or `Ctrl+L` (Windows/Linux) to open the chat panel +. Type a simple question: "What does this function do?" (with a file open) +. Wait for response + +Then verify in the AI Gateway dashboard: + +. Open the Redpanda Cloud Console +. Navigate to your gateway's observability dashboard +. Filter by gateway ID +. Verify: +** Request appears in logs +** Model shows correct format (for example, `gpt-5.2`) +** Token usage and cost are recorded +** Request succeeded (status 200) + +If the request doesn't appear, see <>. + +=== Test inline code completion + +. Open a code file in Cursor +. Start typing a function definition +. Wait for inline suggestions to appear + +Autocomplete requests appear in the gateway dashboard with: + +* Lower token counts than chat requests +* Higher request frequency +* The autocomplete model you configured + +=== Test MCP tool integration + +If you configured MCP servers: + +. Open Cursor chat (`Cmd+L` or `Ctrl+L`) +. Ask a question that requires a tool: "What's the current date?" +. Cursor should: +** Discover available tools from the gateway +** Invoke the appropriate tool +** Return the result + +Check the gateway dashboard for MCP tool invocation logs. 
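+
+If a test request fails or never reaches the dashboard, it is worth confirming the gateway and key outside Cursor first. A minimal sketch, with placeholder values for the endpoint and key, assuming an OpenAI-style model list response:
+
+[,bash]
+----
+# List the models the gateway exposes; the chat and autocomplete models set in
+# settings.json should appear here. Replace the endpoint and key with your own.
+curl -s -H "Authorization: Bearer YOUR_REDPANDA_API_KEY" \
+  "https://YOUR_GATEWAY_ENDPOINT/models" | jq -r '.data[].id'
+----
+
+An empty list or a `401 Unauthorized` response points to a gateway or key problem rather than a Cursor configuration issue.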
+ +== Advanced configuration + +=== Custom request tracking headers + +Add custom headers for request tracking, user attribution, or routing policies: + +[,json] +---- +{ + "openai.additionalHeaders": { + "x-user-id": "developer-123", + "x-team": "backend", + "x-project": "api-service" + } +} +---- + +Use these headers with gateway CEL routing to: + +* Track costs per developer or team +* Route based on project type +* Apply different rate limits per user +* Generate team-specific usage reports + +=== Model-specific settings + +Configure different settings for chat vs autocomplete: + +[,json] +---- +{ + "cursor.chat.defaultModel": "claude-sonnet-4.5", + "cursor.cpp.defaultModel": "gpt-5.2-mini", + "cursor.chat.temperature": 0.7, + "cursor.cpp.temperature": 0.2, + "cursor.chat.maxTokens": 4096, + "cursor.cpp.maxTokens": 512 +} +---- + +Settings explained: + +* Chat uses Claude Sonnet for reasoning depth +* Autocomplete uses GPT-5.2-mini for speed and cost efficiency +* Chat temperature (0.7) allows creative responses +* Autocomplete temperature (0.2) produces deterministic code +* Chat allows longer responses (4096 tokens) +* Autocomplete limits responses (512 tokens) for speed + +=== Multi-tenancy with team-specific gateways + +For organizations with multiple teams sharing Cursor but requiring separate cost tracking and policies: + +[,json] +---- +{ + "cursor.overrideOpenAIBaseUrl": "${TEAM_GATEWAY_ENDPOINT}", + "cursor.overrideOpenAIApiKey": "${TEAM_API_KEY}", + "openai.additionalHeaders": { + "x-team": "${TEAM_NAME}" + } +} +---- + +Each team configures their own: + +* `TEAM_GATEWAY_ENDPOINT` - Gateway endpoint URL with team-specific gateway ID in the path +* `TEAM_API_KEY` - Team-specific API key +* `TEAM_NAME` - Identifier for usage reports + +This approach enables: + +* Per-team cost attribution +* Separate budgets and rate limits +* Team-specific model access policies +* Independent observability dashboards + +=== Request timeout configuration + +Configure timeout for LLM and MCP requests: + +[,json] +---- +{ + "cursor.requestTimeout": 30000, + "cursor.mcp.requestTimeout": 15000 +} +---- + +Timeout values are in milliseconds. Defaults: + +* LLM requests: 30000ms (30 seconds) +* MCP requests: 15000ms (15 seconds) + +Increase timeouts for: + +* Long-running MCP tools (database queries, web searches) +* High-latency network environments +* Complex reasoning tasks requiring extended processing + +=== Debug mode + +Enable debug logging to troubleshoot connection issues: + +[,json] +---- +{ + "cursor.debug": true, + "cursor.logLevel": "debug" +} +---- + +Debug mode shows: + +* HTTP request and response headers +* Model selection decisions +* Token usage calculations +* Error details with stack traces + +View debug logs: + +. Open Command Palette (`Cmd+Shift+P` or `Ctrl+Shift+P`) +. Type "Developer: Show Logs" +. Select "Extension Host" +. Filter by "cursor" + +[[troubleshooting]] +== Troubleshooting + +=== Cursor shows connection error + +**Symptom**: Cursor displays "Failed to connect to AI provider" or requests return errors. + +**Causes and solutions**: + +. **Incorrect base URL format** ++ +Verify the URL matches your gateway endpoint from the AI Gateway UI: ++ +[,text] +---- +# Correct - includes gateway ID in the path +"cursor.overrideOpenAIBaseUrl": "" + +# Incorrect - missing gateway path +"cursor.overrideOpenAIBaseUrl": "https://gw.ai.panda.com" +---- + +. 
**Authentication failure** ++ +Verify your API key is valid: ++ +[,bash] +---- +curl -H "Authorization: Bearer YOUR_API_KEY" \ + /models +---- ++ +You should receive a list of available models. If you get `401 Unauthorized`, regenerate your API key in the Redpanda Cloud Console. + +. **Gateway endpoint URL mismatch** ++ +Verify that `cursor.overrideOpenAIBaseUrl` matches the gateway endpoint URL from the AI Gateway UI exactly. The URL includes the gateway ID in the path. + +. **Invalid JSON syntax** ++ +Validate your `settings.json` file: ++ +[,bash] +---- +# macOS/Linux +python3 -m json.tool ~/Library/Application\ Support/Cursor/User/settings.json + +# Or use jq +jq . ~/Library/Application\ Support/Cursor/User/settings.json +---- ++ +Fix any syntax errors reported. + +=== Autocomplete not working + +**Symptom**: Inline autocomplete suggestions don't appear or are very slow. + +**Causes and solutions**: + +. **No autocomplete model configured** ++ +Verify `cursor.cpp.defaultModel` is set in `settings.json`: ++ +[,json] +---- +{ + "cursor.cpp.defaultModel": "gpt-5.2-mini" +} +---- + +. **Model too slow** ++ +Use a faster, cost-effective model for autocomplete: ++ +[,json] +---- +{ + "cursor.cpp.defaultModel": "gpt-5.2-mini", + "cursor.cpp.maxTokens": 256 +} +---- ++ +Smaller models like GPT-5.2-mini or Claude Haiku provide faster responses ideal for autocomplete. + +. **Network latency** ++ +Check gateway latency in the observability dashboard. If p95 latency is over 500ms, autocomplete will feel slow. Consider: ++ +* Using a gateway in a closer geographic region +* Switching to a faster model +* Reducing `cursor.cpp.maxTokens` to 256 or lower + +. **Autocomplete disabled in settings** ++ +Verify autocomplete is enabled: ++ +. Open Settings (`Cmd+,` or `Ctrl+,`) +. Search for "cursor autocomplete" +. Ensure "Enable Autocomplete" is checked + +=== MCP tools not appearing + +**Symptom**: Cursor doesn't show tools from MCP servers, or shows error "Too many tools". + +**Causes and solutions**: + +. **40-tool limit exceeded** ++ +Cursor has a hard limit of 40 MCP tools. If your MCP servers expose more than 40 tools combined, enable deferred tool loading in your AI Gateway configuration. ++ +With deferred loading, only 2 tools (search + orchestrator) are sent to Cursor initially, staying well under the limit. + +. **MCP configuration missing** ++ +Verify the `cursor.mcp.servers` section exists in `settings.json`: ++ +[,json] +---- +{ + "cursor.mcp": { + "servers": { + "redpanda-ai-gateway": { + "command": "node", + "args": [/* ... */] + } + } + } +} +---- + +. **No MCP servers in gateway** ++ +Verify your gateway has at least one MCP server configured in the AI Gateway UI. + +. **MCP endpoint unreachable** ++ +Test connectivity to the MCP endpoint: ++ +[,bash] +---- +curl -H "Authorization: Bearer YOUR_API_KEY" \ + /mcp +---- ++ +You should receive a valid MCP protocol response. + +. **Cursor restart needed** ++ +MCP configuration changes require restarting Cursor: ++ +. Close all Cursor windows +. Relaunch Cursor +. Wait for MCP servers to initialize (may take 5-10 seconds) + +=== Requests not appearing in gateway dashboard + +**Symptom**: Cursor works, but requests don't appear in the AI Gateway observability dashboard. + +**Causes and solutions**: + +. **Wrong gateway endpoint** ++ +Verify that `cursor.overrideOpenAIBaseUrl` points to the correct gateway endpoint URL. The gateway ID is embedded in the URL path, so using the wrong endpoint routes requests to a different gateway. + +. 
**Using direct provider connection** ++ +If `cursor.overrideOpenAIBaseUrl` points directly to a provider (for example, `https://api.openai.com`), requests won't route through the gateway. Verify it points to your gateway endpoint. + +. **Log ingestion delay** ++ +Gateway logs can take 5-10 seconds to appear in the dashboard. Wait briefly and refresh. + +. **Workspace settings override** ++ +Check if `.vscode/settings.json` in your project root overrides global settings with different gateway configuration. + +=== High latency after gateway integration + +**Symptom**: Requests are slower after routing through the gateway. + +**Causes and solutions**: + +. **Gateway geographic distance** ++ +If your gateway is in a different region than you or the upstream provider, this adds network latency. Check gateway region in the Redpanda Cloud Console. + +. **Provider pool failover** ++ +If your gateway is configured with fallback providers, check the logs to see if requests are failing over. Failover adds latency. + +. **Model mismatch** ++ +Verify you're using fast models for autocomplete: ++ +[,json] +---- +{ + "cursor.cpp.defaultModel": "gpt-5.2-mini" // Fast model +} +---- + +. **MCP tool aggregation overhead** ++ +Aggregating tools from multiple MCP servers adds processing time. Use deferred tool loading to reduce this overhead (see xref:ai-agents:ai-gateway/mcp-aggregation-guide.adoc[]). + +=== Configuration changes not taking effect + +**Symptom**: Changes to `settings.json` don't apply. + +**Solutions**: + +. **Restart Cursor** ++ +Configuration changes require restarting Cursor: ++ +. Close all Cursor windows +. Relaunch Cursor + +. **Invalid JSON syntax** ++ +Validate JSON syntax: ++ +[,bash] +---- +python3 -m json.tool ~/Library/Application\ Support/Cursor/User/settings.json +---- + +. **Workspace settings overriding** ++ +Check if `.vscode/settings.json` in your project root overrides global settings. + +. **File permissions** ++ +Verify Cursor can read the configuration file: ++ +[,bash] +---- +# macOS +ls -la ~/Library/Application\ Support/Cursor/User/settings.json + +# Linux +ls -la ~/.config/Cursor/User/settings.json +---- ++ +Fix permissions if needed: ++ +[,bash] +---- +chmod 600 ~/Library/Application\ Support/Cursor/User/settings.json +---- + +== Cost optimization tips + +=== Use different models for chat and autocomplete + +Chat interactions benefit from reasoning depth, while autocomplete needs speed: + +[,json] +---- +{ + "cursor.chat.defaultModel": "claude-sonnet-4.5", + "cursor.cpp.defaultModel": "gpt-5.2-mini" +} +---- + +This can reduce costs by 5-10x for autocomplete while maintaining quality for chat. + +=== Limit token usage + +Reduce the maximum tokens for autocomplete to prevent runaway costs: + +[,json] +---- +{ + "cursor.cpp.maxTokens": 256, + "cursor.chat.maxTokens": 2048 +} +---- + +Autocomplete rarely needs more than 256 tokens, while chat responses can vary. + +=== Use MCP tools for documentation + +Instead of pasting large documentation into chat, create MCP tools that fetch relevant sections on-demand. This reduces token costs by including only needed information. + +=== Monitor usage patterns + +Use the AI Gateway dashboard to identify optimization opportunities: + +. Navigate to your gateway's observability dashboard +. Filter by Cursor requests (use custom header if configured) +. 
Analyze: +** Token usage per request type (chat vs autocomplete) +** Most expensive queries +** High-frequency low-value requests + +=== Team-based cost attribution + +Use custom headers to track costs per developer or team: + +[,json] +---- +{ + "openai.additionalHeaders": { + "x-user-id": "${USER_EMAIL}", + "x-team": "backend" + } +} +---- + +Generate team-specific cost reports from the gateway dashboard. + +=== Enable deferred MCP tool loading + +Configure deferred tool loading to reduce token costs by 80-90%: + +. Navigate to your gateway configuration +. Enable *Deferred Tool Loading* under MCP Settings +. Save configuration + +This sends only search + orchestrator tools initially, reducing token usage significantly. + +== Next steps + +* xref:ai-agents:ai-gateway/mcp-aggregation-guide.adoc[]: Configure deferred tool loading to work within Cursor's 40-tool limit +* xref:ai-agents:ai-gateway/cel-routing-cookbook.adoc[]: Use CEL expressions to route Cursor requests based on context + +== Related pages + +* xref:ai-agents:ai-gateway/gateway-quickstart.adoc[]: Create and configure your AI Gateway +* xref:ai-agents:ai-gateway/gateway-architecture.adoc[]: Learn about AI Gateway architecture and benefits +* xref:ai-agents:ai-gateway/integrations/claude-code-user.adoc[]: Configure Claude Code with AI Gateway +* xref:ai-agents:ai-gateway/integrations/continue-user.adoc[]: Configure Continue.dev with AI Gateway +* xref:ai-agents:ai-gateway/integrations/cline-user.adoc[]: Configure Cline with AI Gateway diff --git a/modules/ai-agents/partials/integrations/github-copilot-admin.adoc b/modules/ai-agents/partials/integrations/github-copilot-admin.adoc new file mode 100644 index 000000000..80508dc0d --- /dev/null +++ b/modules/ai-agents/partials/integrations/github-copilot-admin.adoc @@ -0,0 +1,824 @@ += Configure AI Gateway for GitHub Copilot +:description: Configure Redpanda AI Gateway to support GitHub Copilot clients. +:page-topic-type: how-to +:personas: platform_admin +:learning-objective-1: Configure AI Gateway endpoints for GitHub Copilot connectivity +:learning-objective-2: Deploy multi-tenant authentication strategies for Copilot clients +:learning-objective-3: Set up model aliasing and BYOK routing for GitHub Copilot + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] + +Configure Redpanda AI Gateway to support GitHub Copilot clients accessing multiple LLM providers through OpenAI-compatible endpoints with bring-your-own-key (BYOK) support. + +After reading this page, you will be able to: + +* [ ] Configure AI Gateway endpoints for GitHub Copilot connectivity. +* [ ] Deploy multi-tenant authentication strategies for Copilot clients. +* [ ] Set up model aliasing and BYOK routing for GitHub Copilot. + +== Prerequisites + +* AI Gateway deployed on a BYOC cluster running Redpanda version 25.3 or later +* Administrator access to the AI Gateway UI +* API keys for at least one LLM provider (OpenAI, Anthropic, or others) +* Understanding of xref:ai-agents:ai-gateway/gateway-architecture.adoc[AI Gateway concepts] +* GitHub Copilot Business or Enterprise subscription (for BYOK and custom endpoints) + +== About GitHub Copilot + +GitHub Copilot is an AI-powered code completion tool that integrates with popular IDEs including VS Code, Visual Studio, JetBrains IDEs, and Neovim. GitHub Copilot uses OpenAI models by default but supports BYOK (bring your own key) configurations for Business and Enterprise customers. 
+ +Key characteristics: + +* Sends all requests in OpenAI-compatible format to `/v1/chat/completions` +* Limited support for custom headers (similar to Cursor IDE) +* Supports BYOK for Business/Enterprise subscriptions +* Built-in code completion, chat, and inline editing modes +* Configuration via IDE settings or organization policies +* High request volume from code completion features + +== Architecture overview + +GitHub Copilot connects to AI Gateway through standardized endpoints: + +* LLM endpoint: `https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/v1/chat/completions` for all providers +* MCP endpoint support: Limited (GitHub Copilot does not natively support MCP protocol) + +The gateway handles: + +. Authentication via bearer tokens in the `Authorization` header +. Gateway selection via URL path routing or query parameters +. Model routing and aliasing for friendly names +. Format transforms from OpenAI format to provider-native formats +. Request logging and cost tracking per gateway +. BYOK routing for different teams or users + +== Enable LLM providers + +GitHub Copilot works with multiple providers through OpenAI-compatible transforms. Enable the providers your users will access. + +=== Configure OpenAI (default provider) + +GitHub Copilot uses OpenAI by default. To enable OpenAI through the gateway: + +. Navigate to *AI Gateway* > *Providers* in the Redpanda Cloud console +. Select *OpenAI* from the provider list +. Click *Add configuration* +. Enter your OpenAI API key +. Under *Format*, select *Native OpenAI* +. Click *Save* + +=== Configure Anthropic with OpenAI-compatible format + +For BYOK deployments, you can route GitHub Copilot to Anthropic models. Configure the gateway to transform requests: + +. Navigate to *AI Gateway* > *Providers* +. Select *Anthropic* from the provider list +. Click *Add configuration* +. Enter your Anthropic API key +. Under *Format*, select *OpenAI-compatible* (enables automatic transform) +. Click *Save* + +The gateway now transforms OpenAI-format requests to Anthropic's native `/v1/messages` format. + +=== Configure additional providers + +GitHub Copilot supports multiple providers through OpenAI-compatible transforms. For each provider: + +. Add the provider configuration in the gateway +. Set the format to *OpenAI-compatible* (the gateway handles format transformation) +. Enable the transform layer to convert OpenAI request format to the provider's native format + +Common additional providers: + +* Google Gemini (requires OpenAI-compatible transform) +* Mistral AI (already OpenAI-compatible format) +* Azure OpenAI (already OpenAI-compatible format) + +=== Enable models in the catalog + +After enabling providers, enable specific models: + +. Navigate to *AI Gateway* > *Models* +. Enable the models you want GitHub Copilot clients to access ++ +Common models for GitHub Copilot: ++ +* `gpt-5.2` (OpenAI) +* `gpt-5.2-mini` (OpenAI) +* `o1-mini` (OpenAI) +* `claude-sonnet-4.5` (Anthropic, requires alias) + +. Click *Save* + +GitHub Copilot typically uses model names without vendor prefixes. You'll configure model aliasing in the next section to map friendly names to provider-specific models. + +== Create a gateway for GitHub Copilot clients + +Create a dedicated gateway to isolate GitHub Copilot traffic and apply specific policies. + +=== Gateway configuration + +. Navigate to *AI Gateway* > *Gateways* +. Click *Create Gateway* +. 
Enter gateway details: ++ +[cols="1,2"] +|=== +|Field |Value + +|Name +|`github-copilot-gateway` (or your preferred name) + +|Workspace +|Select the workspace for access control grouping + +|Description +|Gateway for GitHub Copilot clients +|=== + +. Click *Create* +. Copy the gateway ID from the gateway details page + +The gateway ID is required for routing requests to this gateway. + +=== Configure model aliasing + +GitHub Copilot expects model names like `gpt-5.2` without vendor prefixes. Configure aliases to map these to provider-specific models: + +. Navigate to the gateway's *Models* tab +. Click *Add Model Alias* +. Configure aliases: ++ +[cols="1,2,1"] +|=== +|Alias Name |Target Model |Provider + +|`gpt-5.2` +|`openai/gpt-5.2` +|OpenAI + +|`gpt-5.2-mini` +|`openai/gpt-5.2-mini` +|OpenAI + +|`claude-sonnet` +|`anthropic/claude-sonnet-4.5` +|Anthropic + +|`o1-mini` +|`openai/o1-mini` +|OpenAI +|=== + +. Click *Save* + +When GitHub Copilot requests `gpt-5.2`, the gateway routes to OpenAI's `gpt-5.2` model. Users can optionally request `claude-sonnet` for Anthropic models if the IDE configuration supports model selection. + +=== Configure unified LLM routing + +GitHub Copilot sends all requests to a single endpoint (`/v1/chat/completions`). Configure the gateway to route based on the requested model name. + +==== Model-based routing + +Configure routing that inspects the model field to determine the target provider: + +. Navigate to the gateway's *LLM* tab +. Under *Routing*, click *Add route* +. Configure OpenAI routing: ++ +[source,cel] +---- +request.body.model.startsWith("gpt-") || request.body.model.startsWith("o1-") +---- + +. Add a *Primary provider pool*: ++ +* Provider: OpenAI +* Model: All enabled OpenAI models +* Transform: None (already OpenAI format) +* Load balancing: Round robin (if multiple OpenAI configurations exist) + +. Click *Save* +. Add another route for Anthropic models: ++ +[source,cel] +---- +request.body.model.startsWith("claude-") +---- + +. Add a *Primary provider pool*: ++ +* Provider: Anthropic +* Model: All enabled Anthropic models +* Transform: OpenAI to Anthropic + +. Click *Save* + +GitHub Copilot requests route to the appropriate provider based on the model alias. + +==== Default routing with fallback + +Configure a catch-all route for requests without specific model prefixes: + +[source,cel] +---- +true # Matches all requests not matched by previous routes +---- + +Add a primary provider (for example, OpenAI) with fallback to Anthropic: + +* Primary: OpenAI (for requests with no specific model) +* Fallback: Anthropic (if OpenAI is unavailable) +* Failover conditions: Rate limits, timeouts, 5xx errors + +=== Apply rate limits + +Prevent runaway usage from GitHub Copilot clients. Code completion features generate very high request volumes. + +. Navigate to the gateway's *LLM* tab +. Under *Rate Limit*, configure: ++ +[cols="1,2"] +|=== +|Setting |Recommended Value + +|Global rate limit +|300 requests per minute + +|Per-user rate limit +|30 requests per minute (if using user identification) +|=== + +. Click *Save* + +The gateway blocks requests exceeding these limits and returns HTTP 429 errors. + +==== Rate limit considerations for code completion + +GitHub Copilot's code completion feature generates extremely frequent requests (potentially dozens per minute per user). 
Consider: + +* Higher global rate limits than other AI coding assistants +* Separate rate limits for different request types if the gateway supports request classification +* Monitoring initial usage patterns to adjust limits appropriately + +=== Set spending limits + +Control LLM costs across all providers: + +. Under *Spend Limit*, configure: ++ +[cols="1,2"] +|=== +|Setting |Value + +|Monthly budget +|$10,000 (adjust based on expected usage) + +|Enforcement +|Block requests after budget exceeded + +|Alert threshold +|80% of budget (sends notification) +|=== + +. Click *Save* + +The gateway tracks estimated costs per request across all providers and blocks traffic when the monthly budget is exhausted. + +== Configure authentication + +GitHub Copilot clients authenticate using bearer tokens in the `Authorization` header. + +=== Generate API tokens + +. Navigate to *Security* > *API Tokens* in the Redpanda Cloud console +. Click *Create Token* +. Enter token details: ++ +[cols="1,2"] +|=== +|Field |Value + +|Name +|`copilot-access` + +|Scopes +|`ai-gateway:read`, `ai-gateway:write` + +|Expiration +|Set appropriate expiration based on security policies +|=== + +. Click *Create* +. Copy the token (it appears only once) + +Distribute this token to GitHub Copilot administrators through secure channels for organization-level configuration. + +=== Token rotation + +Implement token rotation for security: + +. Create a new token before the existing token expires +. Update organization-level GitHub Copilot configuration with the new token +. Monitor usage of the old token in (observability dashboard) +. Revoke the old token after the configuration update propagates + +== Multi-tenant deployment strategies + +GitHub Copilot has limited support for custom headers. The gateway ID is now embedded in the URL path, simplifying multi-tenancy. Use one of these strategies for BYOK deployments. + +=== Strategy 1: OAI Compatible Provider extension (recommended) + +For organizations using VS Code with GitHub Copilot, the OAI Compatible Provider extension enables custom headers for additional metadata. + +==== Install the extension + +. Navigate to VS Code Extensions Marketplace +. Search for "OAI Compatible Provider" +. Install the extension +. Restart VS Code + +==== Configure the extension + +. Open VS Code settings (JSON) +. Add gateway configuration: ++ +[source,json] +---- +{ + "oai-compatible-provider.providers": [ + { + "name": "Redpanda AI Gateway", + "baseUrl": "https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/v1", + "headers": { + "Authorization": "Bearer YOUR_API_TOKEN" + }, + "models": [ + "gpt-5.2", + "gpt-5.2-mini", + "claude-sonnet" + ] + } + ] +} +---- + +. Replace: ++ +* `{CLUSTER_ID}`: Your Redpanda cluster ID +* `YOUR_API_TOKEN`: Team-specific API token + +This approach allows true multi-tenancy with proper gateway isolation per team. + +**Benefits:** + +* Clean separation between tenants +* Standard authentication flow +* Works with any IDE supported by the extension + +**Limitations:** + +* Requires VS Code and extension installation +* Not available for all GitHub Copilot-supported IDEs +* Users must configure extension in addition to GitHub Copilot + +=== Strategy 2: Query parameter routing + +Embed tenant identity in query parameters for multi-tenant routing without custom headers. + +. Configure gateway routing to extract tenant from query parameters: ++ +[source,cel] +---- +request.url.query["tenant"][0] == "team-alpha" +---- + +. Distribute tenant-specific endpoints to each team +. 
Configure GitHub Copilot organization settings with the tenant-specific base URL + +**Configuration example for Team Alpha:** + +Organization-level GitHub Copilot settings: + +[source,json] +---- +{ + "copilot": { + "api_base_url": "https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/v1?tenant=team-alpha", + "api_key": "TEAM_ALPHA_TOKEN" + } +} +---- + +**Benefits:** + +* Works with standard GitHub Copilot configuration +* No additional extensions required +* Simple to implement + +**Limitations:** + +* Tenant identity exposed in URLs and logs +* Less clean than header-based routing +* URL parameters may be logged by intermediate proxies + +=== Strategy 3: Token-based gateway mapping + +Use different API tokens to identify which gateway to route to: + +. Generate separate API tokens for each tenant or team +. Tag tokens with metadata indicating the target gateway +. Configure gateway routing based on token identity: ++ +[source,cel] +---- +request.auth.metadata["gateway_id"] == "team-alpha-gateway" +---- + +. Apply tenant-specific routing, rate limits, and spending limits based on the token + +**Benefits:** + +* Transparent to users +* No URL modifications needed +* Centralized control through token management + +**Limitations:** + +* Requires gateway support for token metadata inspection +* Token management overhead increases with number of tenants +* All tenants use the same base URL + +=== Strategy 4: Single-tenant mode + +For simpler deployments, configure a single gateway with shared access: + +. Create one gateway for all GitHub Copilot users +. Generate a shared API token +. Configure GitHub Copilot at the organization level +. Use rate limits and spending limits to control overall usage + +**Benefits:** + +* Simplest configuration +* No tenant routing complexity +* Easy to manage + +**Limitations:** + +* No per-team cost tracking or limits +* Shared rate limits may impact individual teams +* All users have the same model access + +=== Choosing a multi-tenant strategy + +[cols="1,2,2,1"] +|=== +|Strategy |Pros |Cons |Best For + +|OAI Compatible Provider +|Clean tenant separation, custom headers +|Requires extension, VS Code only +|Organizations standardized on VS Code + +|Query parameters +|No extensions needed, simple setup +|Tenant exposed in URLs, less clean +|Quick deployments, small teams + +|Token-based +|Transparent to users, centralized control +|Requires advanced gateway features +|Large organizations with many teams + +|Single-tenant +|Simplest configuration and management +|No per-team isolation or limits +|Small organizations, proof of concept +|=== + +== Configure GitHub Copilot clients + +Provide these instructions based on your chosen multi-tenant strategy. + +=== Organization-level configuration (GitHub Enterprise) + +For GitHub Enterprise customers, configure Copilot at the organization level: + +. Navigate to your organization settings on GitHub +. Go to *Copilot* > *Policies* +. Enable *Allow use of Copilot with custom models* +. Configure the custom endpoint: ++ +[source,json] +---- +{ + "api_base_url": "https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/v1", + "api_key": "YOUR_API_TOKEN" +} +---- + +. If using query parameter routing, append the tenant identifier: ++ +[source,json] +---- +{ + "api_base_url": "https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/v1?tenant=YOUR_TEAM", + "api_key": "YOUR_API_TOKEN" +} +---- + +This configuration applies to all users in the organization. 
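
Before applying the configuration organization-wide, it can help to confirm that the endpoint accepts OpenAI-format requests with the token you plan to distribute. The following is a minimal sketch using curl; `{CLUSTER_ID}`, `YOUR_API_TOKEN`, and the `gpt-5.2` alias are placeholders taken from the examples above, so substitute your own values.

[source,bash]
----
# Minimal smoke test of the gateway endpoint before org-wide rollout.
# {CLUSTER_ID}, YOUR_API_TOKEN, and the gpt-5.2 alias are placeholders.
curl -s https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.2",
    "messages": [{"role": "user", "content": "Say hello"}],
    "max_tokens": 10
  }'
----

A successful response exercises authentication, model aliasing, and routing in one call; a `401` points to the token, while a model-not-found error points to the alias configuration.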
+ +=== IDE-specific configuration (individual users) + +For individual users or when organization-level configuration is not available: + +==== VS Code configuration + +. Open VS Code settings +. Search for "GitHub Copilot" +. Configure custom endpoint (if using OAI Compatible Provider): ++ +[source,json] +---- +{ + "github.copilot.advanced": { + "endpoint": "https://{CLUSTER_ID}.cloud.redpanda.com/ai-gateway/v1" + } +} +---- + +==== JetBrains IDEs + +. Open IDE Settings +. Navigate to *Tools* > *GitHub Copilot* +. Configure custom endpoint (support varies by IDE and Copilot version) + +==== Neovim + +. Edit Copilot configuration +. Add custom endpoint in the Copilot.vim or Copilot.lua configuration +. Refer to the Copilot.vim documentation for exact syntax + +=== Model selection + +Configure model preferences based on use case: + +[cols="1,2,1"] +|=== +|Use Case |Recommended Model |Reason + +|Code completion +|`gpt-5.2-mini` +|Fast, cost-effective for frequent requests + +|Code explanation +|`gpt-5.2` or `claude-sonnet` +|Higher quality for complex explanations + +|Code generation +|`gpt-5.2` or `claude-sonnet` +|Better at generating complete functions + +|Documentation +|`gpt-5.2-mini` +|Sufficient quality for docstrings and comments +|=== + +Model selection is typically configured at the organization level or through IDE settings. + +== Monitor GitHub Copilot usage + +Track GitHub Copilot activity through gateway observability features. + +=== View request logs + +. Navigate to *AI Gateway* > *Observability* > *Logs* +. Filter by gateway ID: `github-copilot-gateway` +. Review: ++ +* Request timestamps and duration +* Model used per request (including aliases) +* Token usage (prompt and completion tokens) +* Estimated cost per request +* HTTP status codes and errors +* Transform operations (OpenAI to provider-native format) + +GitHub Copilot generates distinct request patterns: + +* Code completion: Very high volume, short requests with low token counts +* Chat/explain: Medium volume, longer requests with code context +* Code generation: Lower volume, variable length requests + +=== Analyze metrics + +. Navigate to *AI Gateway* > *Observability* > *Metrics* +. Select the GitHub Copilot gateway +. Review: ++ +[cols="1,2"] +|=== +|Metric |Purpose + +|Request volume by model +|Identify most-used models via aliases + +|Token usage by model +|Track consumption patterns (completion vs chat) + +|Estimated spend by provider +|Monitor costs across providers with transforms + +|Latency (p50, p95, p99) +|Detect transform overhead and performance issues + +|Error rate by provider +|Identify failing providers or transform issues + +|Transform success rate +|Monitor OpenAI-to-provider format conversion success + +|Requests per user/tenant +|Track usage by team (if using multi-tenant strategies) +|=== + + +=== Query logs via API + +Programmatically access logs for integration with monitoring systems: + +[source,bash] +---- +curl https://{CLUSTER_ID}.cloud.redpanda.com/api/ai-gateway/logs \ + -H "Authorization: Bearer YOUR_API_TOKEN" \ + -H "Content-Type: application/json" \ + -d '{ + "gateway_id": "GATEWAY_ID", + "start_time": "2026-01-01T00:00:00Z", + "end_time": "2026-01-14T23:59:59Z", + "limit": 100 + }' +---- + +== Security considerations + +Apply these security best practices for GitHub Copilot deployments. 
+ +=== Limit token scope + +Create tokens with minimal required scopes: + +* `ai-gateway:read`: Required for model discovery +* `ai-gateway:write`: Required for LLM requests + +Avoid granting broader scopes like `admin` or `cluster:write`. + +=== Implement network restrictions + +If GitHub Copilot clients connect from known networks, configure network policies: + +. Use cloud provider security groups to restrict access to AI Gateway endpoints +. Allowlist only the IP ranges where GitHub Copilot clients operate +. Monitor for unauthorized access attempts in request logs + +=== Enforce token expiration + +Set short token lifetimes for high-security environments: + +* Development environments: 90 days +* Production environments: 30 days + +Automate token rotation to reduce manual overhead. Coordinate with GitHub organization administrators when rotating tokens. + +=== Monitor transform operations + +Because GitHub Copilot may route to non-OpenAI providers through transforms: + +. Review transform success rates in metrics +. Monitor for transform failures that may leak request details +. Test transforms thoroughly before production deployment +. Keep transform logic updated as provider APIs evolve + +=== Audit model access + +Review which models GitHub Copilot clients can access: + +. Periodically audit enabled models and aliases +. Remove deprecated or unused model configurations +. Monitor model usage logs for unexpected patterns +. Ensure cost-effective models are used for high-volume completion requests + +=== Code completion security + +GitHub Copilot sends code context to LLM providers. Ensure: + +* Users understand what code context is sent with requests +* Proprietary code may be included in prompts +* Configure organization policies to limit code sharing if needed +* Review provider data retention policies +* Monitor logs for sensitive information in prompts (if logging includes prompt content) + +=== Organization-level controls + +For GitHub Enterprise customers: + +. Use organization-level policies to enforce custom endpoint usage +. Restrict which users can configure custom endpoints +. Monitor organization audit logs for configuration changes +. Implement approval workflows for endpoint changes + +== Troubleshooting + +Common issues and solutions when configuring AI Gateway for GitHub Copilot. + +=== GitHub Copilot cannot connect to gateway + +Symptom: Connection errors when GitHub Copilot tries to send requests. + +Causes and solutions: + +* **Invalid base URL**: Verify the configured endpoint matches the gateway URL (including query parameters if using query-based routing) +* **Expired token**: Generate a new API token and update the GitHub Copilot configuration +* **Network connectivity**: Verify the cluster endpoint is accessible from the client network +* **Provider not enabled**: Ensure at least one provider is enabled and has models in the catalog +* **SSL/TLS issues**: Verify the cluster has valid SSL certificates +* **Organization policy blocking custom endpoints**: Check GitHub organization settings + +=== Model not found errors + +Symptom: GitHub Copilot shows "model not found" or similar errors. 
+ +Causes and solutions: + +* **Model not enabled in catalog**: Enable the model in the gateway's model catalog +* **Model alias missing**: Create an alias for the model name GitHub Copilot expects (for example, `gpt-5.2`) +* **Incorrect model name**: Verify GitHub Copilot is requesting a model name that exists in your aliases +* **Routing rule mismatch**: Check that routing rules correctly match the requested model name + +=== Transform errors or unexpected responses + +Symptom: Responses are malformed or GitHub Copilot reports format errors. + +Causes and solutions: + +* **Transform disabled**: Ensure OpenAI-compatible transform is enabled for non-OpenAI providers (for example, Anthropic) +* **Transform version mismatch**: Verify the transform is compatible with the current provider API version +* **Model-specific transform issues**: Some models may require specific transform configurations +* **Check transform logs**: Review logs for transform errors and stack traces +* **Response format incompatibility**: Verify the provider's response can be transformed to OpenAI format + +=== High costs or token usage + +Symptom: Token usage and costs exceed expectations. + +Causes and solutions: + +* **Code completion using expensive model**: Configure completion to use `gpt-5.2-mini` instead of larger models +* **No rate limits**: Apply per-minute rate limits to prevent runaway usage +* **Missing spending limits**: Set monthly budget limits with blocking enforcement +* **Chat using wrong model**: Ensure chat/explanation features use cost-effective models +* **Transform overhead**: Monitor if transforms add significant token overhead +* **High completion request volume**: Expected behavior, adjust budgets or implement stricter rate limits + +=== Requests failing with 429 errors + +Symptom: GitHub Copilot receives HTTP 429 Too Many Requests errors. + +Causes and solutions: + +* **Rate limit exceeded**: Review and increase rate limits if usage is legitimate (code completion needs very high limits) +* **Upstream provider rate limits**: Check if the upstream LLM provider is rate-limiting; configure failover to alternate providers +* **Budget exhausted**: Verify monthly spending limit has not been reached +* **Per-user limits too restrictive**: Adjust per-user rate limits if using multi-tenant strategies +* **Spike in usage**: Code completion can generate sudden usage spikes, consider burstable rate limits + +=== Multi-tenant routing failures + +Symptom: Requests route to wrong gateway or fail authorization. + +Causes and solutions: + +* **Query parameter missing**: Ensure query parameter is appended to all requests if using query-based routing +* **Token metadata incorrect**: Verify token is tagged with correct gateway metadata +* **Routing rule conflicts**: Check for overlapping routing rules that may cause unexpected routing +* **Organization policy override**: Verify GitHub organization settings aren't overriding user configurations +* **Extension not configured**: If using OAI Compatible Provider extension, verify proper installation and configuration + +=== Performance issues + +Symptom: Slow response times from GitHub Copilot. 
+ +Causes and solutions: + +* **Transform latency**: Monitor metrics for transform processing time overhead +* **Provider latency**: Check latency metrics by provider to identify slow backends +* **Network latency**: Verify cluster is in a region with good connectivity to users +* **Cold start delays**: Some providers may have cold start latency on first request +* **Rate limiting overhead**: Check if rate limit enforcement is adding latency + +== Next steps + +* xref:ai-agents:ai-gateway/cel-routing-cookbook.adoc[]: Implement advanced routing rules for model aliasing + diff --git a/modules/ai-agents/partials/integrations/github-copilot-user.adoc b/modules/ai-agents/partials/integrations/github-copilot-user.adoc new file mode 100644 index 000000000..81798998d --- /dev/null +++ b/modules/ai-agents/partials/integrations/github-copilot-user.adoc @@ -0,0 +1,916 @@ += Configure GitHub Copilot with AI Gateway +:description: Configure GitHub Copilot to use Redpanda AI Gateway for unified LLM access and custom provider management. +:page-topic-type: how-to +:personas: ai_agent_developer, app_developer +:learning-objective-1: Configure GitHub Copilot in VS Code and JetBrains IDEs to route requests through AI Gateway +:learning-objective-2: Set up multi-tenancy with gateway routing for cost tracking +:learning-objective-3: Configure enterprise BYOK deployments for team-wide Copilot access + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] + +After xref:ai-agents:ai-gateway/gateway-quickstart.adoc[configuring your AI Gateway], set up GitHub Copilot to route LLM requests through the gateway for centralized observability, cost management, and provider flexibility. + +After reading this page, you will be able to: + +* [ ] Configure GitHub Copilot in VS Code and JetBrains IDEs to route requests through AI Gateway. +* [ ] Set up multi-tenancy with gateway routing for cost tracking. +* [ ] Configure enterprise BYOK deployments for team-wide Copilot access. + +== Prerequisites + +Before configuring GitHub Copilot, ensure you have: + +* GitHub Copilot subscription (Individual, Business, or Enterprise) +* An active Redpanda AI Gateway with: +** At least one LLM provider enabled (see xref:ai-agents:ai-gateway/gateway-quickstart.adoc#step-1-enable-a-provider[Enable a provider]) +** A gateway created and configured (see xref:ai-agents:ai-gateway/gateway-quickstart.adoc#step-3-create-a-gateway[Create a gateway]) +* Your AI Gateway credentials: +** Gateway endpoint URL (for example, `https://gw.ai.panda.com`) +** Gateway ID (for example, `gateway-abc123`) +** API key with access to the gateway +* Your IDE: +** VS Code with GitHub Copilot extension installed +** Or JetBrains IDE (IntelliJ IDEA, PyCharm, etc.) with GitHub Copilot plugin + +== About GitHub Copilot and AI Gateway + +GitHub Copilot provides AI-powered code completion and chat within your IDE. By default, Copilot routes requests directly to GitHub's infrastructure, which uses OpenAI and other LLM providers. 
+ +When you route Copilot through AI Gateway, you gain: + +* Centralized observability across all Copilot usage +* Cost attribution per developer, team, or project +* Provider flexibility (use your own API keys or alternative models) +* Policy enforcement (rate limits, spend controls) +* Multi-tenancy support for enterprise deployments + +== Configuration approaches + +GitHub Copilot supports different configuration approaches depending on your IDE and subscription tier: + +[cols="1,2,2,1"] +|=== +|IDE |Method |Subscription Tier |Complexity + +|VS Code +|Custom OpenAI models +|Individual, Business, Enterprise +|Medium + +|VS Code +|OAI Compatible Provider extension +|Individual, Business, Enterprise +|Low + +|JetBrains +|Enterprise BYOK +|Enterprise +|Low +|=== + +Choose the approach that matches your environment. VS Code users have multiple options, while JetBrains users need GitHub Copilot Enterprise with BYOK support. + +== Configure in VS Code + +VS Code offers two approaches for routing Copilot through AI Gateway: + +. Custom OpenAI models (manual configuration) +. OAI Compatible Provider extension (simplified) + +=== Option 1: Custom OpenAI models + +This approach configures VS Code to recognize your AI Gateway as a custom OpenAI-compatible provider. + +==== Configure custom models + +. Open VS Code Settings: +** macOS: `Cmd+,` +** Windows/Linux: `Ctrl+,` +. Search for `github.copilot.chat.customOAIModels` +. Click *Edit in settings.json* +. Add the following configuration: + +[,json] +---- +{ + "github.copilot.chat.customOAIModels": [ + { + "id": "anthropic/claude-sonnet-4.5", + "name": "Claude Sonnet 4.5 (Gateway)", + "endpoint": "https://gw.ai.panda.com/v1", + "provider": "redpanda-gateway" + }, + { + "id": "openai/gpt-5.2", + "name": "GPT-5.2 (Gateway)", + "endpoint": "https://gw.ai.panda.com/v1", + "provider": "redpanda-gateway" + } + ] +} +---- + +Replace `https://gw.ai.panda.com/v1` with your gateway endpoint. + +IMPORTANT: This experimental feature requires configuring API keys and custom headers through the Copilot Chat UI, not in `settings.json`. + +==== Configure API key and headers via Copilot Chat UI + +. Open Copilot Chat in VS Code (`Cmd+I` or `Ctrl+I`) +. Click the model selector dropdown +. Click *Manage Models* at the bottom of the dropdown +. Click *Add Model* +. Select your configured provider ("redpanda-gateway") +. Enter the connection details: +** *Base URL*: `https://gw.ai.panda.com/v1` (should match your settings.json endpoint) +** *API Key*: Your Redpanda API key +. Click *Save* + +==== Select model + +. Open Copilot chat with `Cmd+I` (macOS) or `Ctrl+I` (Windows/Linux) +. Click the model selector dropdown +. Choose a model from the "redpanda-gateway" provider + +=== Option 2: OAI Compatible Provider extension + +The OAI Compatible Provider extension provides enhanced support for OpenAI-compatible endpoints with custom headers. + +==== Install extension + +. Open VS Code Extensions (`Cmd+Shift+X` or `Ctrl+Shift+X`) +. Search for "OAI Compatible Provider" +. Click *Install* + +==== Configure base URL in settings + +Add the base URL configuration in VS Code settings: + +. Open VS Code Settings (`Cmd+,` or `Ctrl+,`) +. Search for `oaicopilot` +. Click *Edit in settings.json* +. Add the following: + +[,json] +---- +{ + "oaicopilot.baseUrl": "https://gw.ai.panda.com/v1", + "oaicopilot.models": [ + "anthropic/claude-sonnet-4.5", + "openai/gpt-5.2", + "openai/gpt-5.2-mini" + ] +} +---- + +Replace `https://gw.ai.panda.com/v1` with your gateway endpoint. 
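
To confirm the base URL is correct before configuring the API key in the next step, you can list the models the gateway exposes. This is a sketch that assumes the example endpoint above and a placeholder API key; the returned model IDs should include the entries you put in `oaicopilot.models`.

[,bash]
----
# List models exposed by the gateway; endpoint and key are placeholders.
# The jq filter assumes an OpenAI-style model list response.
curl -s https://gw.ai.panda.com/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY" | jq '.data[].id'
----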
+ +==== Configure API key and headers via Copilot Chat UI + +IMPORTANT: Do not configure API keys or custom headers in `settings.json`. Use the Copilot Chat UI instead. + +. Open Copilot Chat in VS Code (`Cmd+I` or `Ctrl+I`) +. Click the model selector dropdown +. Click *Manage Models* +. Find the OAI Compatible Provider in the list +. Click *Configure* or *Edit* +. Enter the connection details: +** *API Key*: Your Redpanda API key +. Click *Save* + +==== Select model + +. Open Copilot chat with `Cmd+I` (macOS) or `Ctrl+I` (Windows/Linux) +. Click the model selector dropdown +. Choose a model from the OAI Compatible Provider + +== Configure in JetBrains IDEs + +JetBrains IDE integration requires GitHub Copilot Enterprise with Bring Your Own Key (BYOK) support. + +=== Prerequisites + +* GitHub Copilot Enterprise subscription +* BYOK enabled for your organization +* JetBrains IDE 2024.1 or later +* GitHub Copilot plugin version 1.4.0 or later + +=== Configure BYOK with AI Gateway + +. Open your JetBrains IDE (IntelliJ IDEA, PyCharm, etc.) +. Navigate to *Settings/Preferences*: +** macOS: `Cmd+,` +** Windows/Linux: `Ctrl+Alt+S` +. Go to *Tools* > *GitHub Copilot* +. Under *Advanced Settings*, find *Custom Model Configuration* +. Configure the OpenAI-compatible endpoint: + +[,text] +---- +Base URL: https://gw.ai.panda.com/v1 +API Key: your-redpanda-api-key +---- + +Replace placeholder values: + +* `https://gw.ai.panda.com/v1` - Your gateway endpoint +* `your-redpanda-api-key` - Your Redpanda API key + +=== Configure model selection + +In the GitHub Copilot settings: + +. Expand *Model Selection* +. Choose your preferred models from the AI Gateway: +** Chat model: `anthropic/claude-sonnet-4.5` or `openai/gpt-5.2` +** Code completion model: `openai/gpt-5.2-mini` (faster, cost-effective) + +Model format uses `vendor/model_id` pattern to route through the gateway to the appropriate provider. + +=== Test configuration + +. Open a code file +. Trigger code completion (start typing) +. Or open Copilot chat: +** Right-click > *Copilot* > *Open Chat* +** Or use shortcut: `Cmd+Shift+C` (macOS) or `Ctrl+Shift+C` (Windows/Linux) +. Verify suggestions appear + +Check the AI Gateway dashboard to confirm requests are logged. + +== Multi-tenancy configuration + +For organizations with multiple teams or projects sharing AI Gateway, use separate gateways to track usage per team. + +=== Approach 1: One gateway per team + +Create separate gateways for each team: + +* Team A Gateway: ID `team-a-gateway-123` +* Team B Gateway: ID `team-b-gateway-456` + +Each team configures their IDE with their team's gateway endpoint URL, which includes the gateway ID in the path. + +Benefits: + +* Isolated cost tracking per team +* Team-specific rate limits and budgets +* Separate observability dashboards + +=== Approach 2: Shared gateway with custom headers + +Use a single gateway with custom headers for attribution: + +[,json] +---- +{ + "oai.provider.headers": { + "x-team": "backend-team", + "x-project": "api-service" + } +} +---- + +Configure gateway CEL routing to read these headers: + +[,cel] +---- +request.headers["x-team"] == "backend-team" ? "openai/gpt-5.2" : "openai/gpt-5.2-mini" +---- + +Benefits: + +* Single gateway to manage +* Flexible cost attribution +* Header-based routing policies + +Filter observability dashboard by `x-team` or `x-project` headers to generate team-specific reports. 
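
Before relying on these headers for routing or reporting, one way to confirm they reach the gateway is to send a test request from the command line and then look for the header values in the request logs. This sketch reuses the example endpoint and header values above; the API key and model are placeholders.

[,bash]
----
# Send a test request carrying the attribution headers, then check the
# gateway request logs to confirm x-team and x-project are recorded.
curl -s https://gw.ai.panda.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -H "x-team: backend-team" \
  -H "x-project: api-service" \
  -d '{
    "model": "openai/gpt-5.2",
    "messages": [{"role": "user", "content": "header test"}],
    "max_tokens": 5
  }'
----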
+ +=== Approach 3: Environment-based gateways + +Separate development, staging, and production environments: + +[,json] +---- +{ + "oai.provider.headers": { + "x-environment": "${env:ENVIRONMENT}" + } +} +---- + +Set environment variables per workspace: + +[,bash] +---- +# Development workspace +export ENVIRONMENT="development" + +# Production workspace +export ENVIRONMENT="production" +---- + +Benefits: + +* Prevent development usage from affecting production metrics +* Different rate limits and budgets per environment +* Environment-specific model access policies + +== Enterprise BYOK at scale + +For large organizations deploying GitHub Copilot Enterprise with AI Gateway across hundreds or thousands of developers. + +=== Centralized configuration management + +Distribute IDE configuration files via: + +* **Git repository**: Store `settings.json` or IDE configuration in a shared repository +* **Configuration management tools**: Puppet, Chef, Ansible +* **Group Policy** (Windows environments) +* **MDM solutions** (macOS environments) + +Example centralized configuration: + +[,json] +---- +{ + "oai.provider.endpoint": "https://gw.company.com/v1", + "oai.provider.apiKey": "${env:COPILOT_GATEWAY_KEY}", + "oai.provider.headers": { + "x-user-email": "${env:USER_EMAIL}", + "x-department": "${env:DEPARTMENT}" + } +} +---- + +Developers set environment variables locally or receive them from identity management systems. + +=== API key management + +**Option 1: Individual API keys** + +Each developer gets their own Redpanda API key: + +* Tied to their identity (email, employee ID) +* Revocable when they leave the organization +* Enables per-developer cost attribution + +**Option 2: Team API keys** + +Teams share API keys: + +* Simpler key management +* Cost attribution by team, not individual +* Use custom headers for finer-grained tracking + +**Option 3: Service account keys** + +Single key for all developers: + +* Simplest to deploy +* No per-developer tracking +* Use custom headers for all attribution + +=== Automated provisioning workflow + +. Developer joins organization +. Identity system (Okta, Azure AD, etc.) triggers provisioning: +.. Create Redpanda API key +.. Assign to appropriate gateway +.. Generate IDE configuration file with embedded keys +.. Distribute to developer workstation +. Developer installs IDE and GitHub Copilot +. Configuration auto-applies (via MDM or configuration management) +. Developer starts using Copilot immediately + +=== Observability and governance + +Track usage across the organization: + +. Navigate to AI Gateway dashboard +. Filter by custom headers: +** `x-department`: View costs per department +** `x-user-email`: Track usage per developer +** `x-project`: Attribute costs to specific projects +. Generate reports: +** Top 10 users by token usage +** Departments exceeding budget +** Projects using deprecated models +. Set alerts: +** Individual developer exceeds threshold (potential misuse) +** Department budget approaching limit +** Unusual request patterns (security concern) + +=== Policy enforcement + +Use gateway CEL routing to enforce policies: + +[,cel] +---- +// Limit junior developers to cost-effective models +request.headers["x-user-level"] == "junior" + ? "openai/gpt-5.2-mini" + : "anthropic/claude-sonnet-4.5" + +// Block access for contractors to expensive models +request.headers["x-user-type"] == "contractor" && +request.model.contains("opus") + ? 
error("Contractors cannot use Opus models") + : request.model +---- + +== Verify configuration + +After configuring GitHub Copilot, verify it routes requests through your AI Gateway. + +=== Test code completion + +. Open a code file in your IDE +. Start typing a function definition +. Wait for code completion suggestions to appear + +Completion requests appear in the gateway dashboard with: + +* Low token counts (typically 50-200 tokens) +* High request frequency (as you type) +* The completion model you configured + +=== Test chat interface + +. Open Copilot chat: +** VS Code: `Cmd+I` (macOS) or `Ctrl+I` (Windows/Linux) +** JetBrains: Right-click > *Copilot* > *Open Chat* +. Ask a simple question: "Explain this function" +. Wait for response + +Chat requests appear in the gateway dashboard with: + +* Higher token counts (500-2000 tokens typical) +* The chat model you configured +* Response status (200 for success) + +=== Verify in dashboard + +. Open the Redpanda Cloud Console +. Navigate to your gateway's observability dashboard +. Filter by gateway ID +. Verify: +** Requests appear in logs +** Models show correct format (for example, `anthropic/claude-sonnet-4.5`) +** Token usage and cost are recorded +** Custom headers appear (if configured) + +If requests don't appear, see <>. + +== Advanced configuration + +=== Model-specific settings + +Configure different models for different tasks: + +[,json] +---- +{ + "oai.provider.models": [ + { + "id": "anthropic/claude-sonnet-4.5", + "name": "Claude Sonnet (chat)", + "type": "chat", + "temperature": 0.7, + "maxTokens": 4096 + }, + { + "id": "openai/gpt-5.2-mini", + "name": "GPT-5.2 Mini (completion)", + "type": "completion", + "temperature": 0.2, + "maxTokens": 512 + } + ] +} +---- + +Settings explained: + +* Chat uses Claude Sonnet with higher temperature for creative responses +* Completion uses GPT-5.2 Mini with lower temperature for deterministic code +* Chat allows longer responses, completion limits tokens for speed + +=== Workspace-specific configuration + +Override global settings for specific projects using workspace settings. + +In VS Code, create `.vscode/settings.json` in your project root: + +[,json] +---- +{ + "oai.provider.headers": { + "x-project": "customer-portal" + } +} +---- + +Benefits: + +* Route different projects through different gateways +* Track costs per project +* Use different models per project (cost-effective for internal, premium for customer-facing) + +=== Custom request timeouts + +Configure timeout for AI Gateway requests: + +[,json] +---- +{ + "oai.provider.timeout": 30000 +} +---- + +Timeout is in milliseconds. Default is typically 30000 (30 seconds). + +Increase timeouts for: + +* High-latency network environments +* Complex code generation tasks +* Large file context + +=== Debug mode + +Enable debug logging to troubleshoot issues: + +[,json] +---- +{ + "oai.provider.debug": true, + "github.copilot.advanced": { + "debug": true + } +} +---- + +View debug logs: + +* VS Code: Developer Console (`Help` > `Toggle Developer Tools` > `Console` tab) +* JetBrains: `Help` > `Diagnostic Tools` > `Debug Log Settings` > Add `github.copilot` + +Debug mode shows: + +* HTTP request and response headers +* Model selection decisions +* Token usage calculations +* Error details + +[[troubleshooting]] +== Troubleshooting + +=== Copilot shows no suggestions + +**Symptom**: Code completion doesn't work or Copilot shows "No suggestions available". + +**Causes and solutions**: + +. 
**Configuration not loaded** ++ +Reload your IDE to apply configuration changes: ++ +* VS Code: Command Palette > "Developer: Reload Window" +* JetBrains: File > Invalidate Caches / Restart + +. **Incorrect endpoint URL** ++ +Verify the URL format includes `/v1` at the end: ++ +[,text] +---- +# Correct +https://gw.ai.panda.com/v1 + +# Incorrect +https://gw.ai.panda.com +---- + +. **Authentication failure** ++ +Verify your API key is valid: ++ +[,bash] +---- +curl -H "Authorization: Bearer YOUR_API_KEY" \ + https://gw.ai.panda.com/v1/models +---- ++ +You should receive a list of available models. If you get `401 Unauthorized`, regenerate your API key in the Redpanda Cloud Console. + +. **Extension/plugin disabled** ++ +Verify GitHub Copilot is enabled: ++ +* VS Code: Extensions view > GitHub Copilot > Ensure "Enabled" +* JetBrains: Settings > Plugins > GitHub Copilot > Check "Enabled" + +. **Network connectivity issues** ++ +Test connectivity to the gateway: ++ +[,bash] +---- +curl -I https://gw.ai.panda.com/v1 +---- ++ +If this times out, check your network configuration, firewall rules, or VPN connection. + +=== Requests not appearing in gateway dashboard + +**Symptom**: Copilot works, but requests don't appear in the AI Gateway observability dashboard. + +**Causes and solutions**: + +. **Wrong gateway ID** ++ +Verify the gateway ID in your endpoint URL matches the gateway you're viewing in the dashboard (case-sensitive). + +. **Using direct GitHub connection** ++ +If the endpoint configuration is missing or incorrect, Copilot may route directly to GitHub instead of your gateway. Verify endpoint configuration. + +. **Log ingestion delay** ++ +Gateway logs can take 5-10 seconds to appear in the dashboard. Wait briefly and refresh. + +. **Environment variable not set** ++ +If using environment variables like `${env:REDPANDA_API_KEY}`, verify they're set before launching the IDE: ++ +[,bash] +---- +echo $REDPANDA_API_KEY # Should print your API key +---- + +=== High latency or slow suggestions + +**Symptom**: Code completion is slow or chat responses take a long time. + +**Causes and solutions**: + +. **Gateway geographic distance** ++ +If your gateway is in a different region than you or the upstream provider, this adds network latency. Check gateway region in the Redpanda Cloud Console. + +. **Slow model for completion** ++ +Use a faster model for code completion: ++ +[,json] +---- +{ + "oai.provider.models": [ + { + "id": "openai/gpt-5.2-mini", + "type": "completion" + } + ] +} +---- ++ +Models like GPT-5.2 Mini or Claude Haiku provide faster responses ideal for code completion. + +. **Provider pool failover** ++ +If your gateway is configured with fallback providers, check the logs to see if requests are failing over. Failover adds latency. + +. **Rate limiting** ++ +If you're hitting rate limits, the gateway may be queuing requests. Check the observability dashboard for rate limit metrics. + +. **Token limit too high** ++ +Reduce `maxTokens` for completion models to improve speed: ++ +[,json] +---- +{ + "oai.provider.models": [ + { + "id": "openai/gpt-5.2-mini", + "type": "completion", + "maxTokens": 256 + } + ] +} +---- + +=== Custom headers not being sent + +**Symptom**: Custom headers (like `x-team` or `x-project`) don't appear in gateway logs. + +**Causes and solutions**: + +. **Extension not installed (VS Code)** ++ +Custom headers require the OAI Compatible Provider extension in VS Code. Install it from the Extensions marketplace. + +. 
**Header configuration location** ++ +Ensure headers are in the correct configuration section: ++ +[,json] +---- +{ + "oai.provider.headers": { + "x-custom": "value" + } +} +---- ++ +Not: ++ +[,json] +---- +{ + "github.copilot.advanced": { + "headers": { // Wrong location + "x-custom": "value" + } + } +} +---- + +. **Environment variable not expanded** ++ +If using `${env:VAR_NAME}` syntax, verify the environment variable is set before launching the IDE. + +=== Model not recognized + +**Symptom**: Error message "Model not found" or "Invalid model ID". + +**Causes and solutions**: + +. **Incorrect model format** ++ +Ensure model names use the `vendor/model_id` format: ++ +[,text] +---- +# Correct +anthropic/claude-sonnet-4.5 +openai/gpt-5.2 + +# Incorrect +claude-sonnet-4.5 +gpt-5.2 +---- + +. **Model not enabled in gateway** ++ +Verify the model is enabled in your AI Gateway configuration: ++ +.. Open Redpanda Cloud Console +.. Navigate to your gateway +.. Check enabled providers and models + +. **Typo in model ID** ++ +Double-check the model ID matches exactly (case-sensitive). Copy from the AI Gateway UI rather than typing manually. + +=== Configuration changes not taking effect + +**Symptom**: Changes to settings don't apply. + +**Solutions**: + +. **Reload IDE** ++ +Configuration changes require reloading: ++ +* VS Code: Command Palette > "Developer: Reload Window" +* JetBrains: File > Invalidate Caches / Restart + +. **Invalid JSON syntax** ++ +Validate your `settings.json` file: ++ +[,bash] +---- +python3 -m json.tool ~/.config/Code/User/settings.json +---- ++ +Fix any syntax errors reported. + +. **Workspace settings override** ++ +Check if `.vscode/settings.json` in your project root overrides global settings. Workspace settings take precedence over global settings. + +. **File permissions** ++ +Verify the IDE can read the configuration file: ++ +[,bash] +---- +ls -la ~/.config/Code/User/settings.json +---- ++ +Fix permissions if needed: ++ +[,bash] +---- +chmod 600 ~/.config/Code/User/settings.json +---- + +== Cost optimization tips + +=== Use different models for chat and completion + +Code completion needs speed, while chat benefits from reasoning depth: + +[,json] +---- +{ + "oai.provider.models": [ + { + "id": "anthropic/claude-sonnet-4.5", + "type": "chat" + }, + { + "id": "openai/gpt-5.2-mini", + "type": "completion" + } + ] +} +---- + +This can reduce costs by 5-10x for code completion while maintaining chat quality. + +=== Limit token usage + +Reduce maximum tokens for completion to prevent runaway costs: + +[,json] +---- +{ + "oai.provider.models": [ + { + "id": "openai/gpt-5.2-mini", + "type": "completion", + "maxTokens": 256 + } + ] +} +---- + +Code completion rarely needs more than 256 tokens. + +=== Monitor usage patterns + +Use the AI Gateway dashboard to identify optimization opportunities: + +. Navigate to your gateway's observability dashboard +. Filter by custom headers (for example, `x-team`, `x-user-email`) +. Analyze: +** Token usage per developer or team +** Most expensive queries +** High-frequency low-value requests + +=== Set team-based budgets + +Use separate gateways or CEL routing to enforce team budgets: + +[,cel] +---- +// Limit team to 1 million tokens per month +request.headers["x-team"] == "frontend" && +monthly_tokens > 1000000 + ? error("Team budget exceeded") + : request.model +---- + +Configure alerts in the dashboard when teams approach their limits. 
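
If you want to audit team usage outside the dashboard, the gateway logs API described in the admin guide can be queried for a reporting window and aggregated per `x-team` header offline. The following is a sketch under that assumption; the endpoint, gateway ID, token, and time range are placeholders.

[,bash]
----
# Export a month of gateway logs for offline per-team usage aggregation.
# Assumes the logs API shown in the admin guide; all values are placeholders.
curl -s https://{CLUSTER_ID}.cloud.redpanda.com/api/ai-gateway/logs \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "gateway_id": "GATEWAY_ID",
    "start_time": "2026-01-01T00:00:00Z",
    "end_time": "2026-01-31T23:59:59Z",
    "limit": 1000
  }'
----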
+ +=== Track costs per project + +Use custom headers to attribute costs: + +[,json] +---- +{ + "oai.provider.headers": { + "x-project": "mobile-app" + } +} +---- + +Generate project-specific cost reports from the gateway dashboard. + +== Next steps + +* xref:ai-agents:ai-gateway/cel-routing-cookbook.adoc[]: Use CEL expressions to route Copilot requests based on context +* xref:ai-agents:ai-gateway/mcp-aggregation-guide.adoc[]: Learn about MCP tool integration (if using Copilot Workspace) + +== Related pages + +* xref:ai-agents:ai-gateway/gateway-quickstart.adoc[]: Create and configure your AI Gateway +* xref:ai-agents:ai-gateway/gateway-architecture.adoc[]: Learn about AI Gateway architecture and benefits +* xref:ai-agents:ai-gateway/integrations/claude-code-user.adoc[]: Configure Claude Code with AI Gateway +* xref:ai-agents:ai-gateway/integrations/continue-user.adoc[]: Configure Continue.dev with AI Gateway +* xref:ai-agents:ai-gateway/integrations/cursor-user.adoc[]: Configure Cursor IDE with AI Gateway diff --git a/modules/ai-agents/partials/integrations/index.adoc b/modules/ai-agents/partials/integrations/index.adoc new file mode 100644 index 000000000..bf8c6966c --- /dev/null +++ b/modules/ai-agents/partials/integrations/index.adoc @@ -0,0 +1,5 @@ += AI Gateway Integrations +:description: Configure AI development tools and IDEs to connect to Redpanda AI Gateway for centralized LLM routing and MCP tool aggregation. +:page-layout: index + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] diff --git a/modules/ai-agents/partials/migration-guide.adoc b/modules/ai-agents/partials/migration-guide.adoc new file mode 100644 index 000000000..6684d4275 --- /dev/null +++ b/modules/ai-agents/partials/migration-guide.adoc @@ -0,0 +1,879 @@ += Migrate to AI Gateway +:description: Step-by-step migration guide to transition existing applications from direct LLM provider integrations to Redpanda AI Gateway with minimal disruption. +:page-topic-type: how-to +:personas: app_developer, platform_admin +:learning-objective-1: Migrate LLM integrations to AI Gateway with zero downtime using feature flags +:learning-objective-2: Verify gateway connectivity and compare performance metrics +:learning-objective-3: Roll back to direct integration if issues arise during migration + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] + +This guide helps you migrate existing applications from direct LLM provider integrations (OpenAI, Anthropic, and others) to Redpanda AI Gateway. Design the migration to be incremental and reversible, allowing you to test thoroughly before fully committing. + +**Downtime required:** None (supports parallel operation) + +**Rollback difficulty:** Easy (feature flag or environment variable) + +== Prerequisites + +Before migrating, ensure you have: + +* AI Gateway configured in your Redpanda Cloud account +* Enabled providers and models in AI Gateway +* Created gateway with appropriate policies +* Your gateway endpoint URL (with the gateway ID embedded in the path) + +Verify your gateway is reachable: + +[source,bash] +---- +curl /models \ + -H "Authorization: Bearer {YOUR_TOKEN}" +---- + +Expected output: List of enabled models + +== Migration strategy + +=== Recommended approach: Parallel operation + +Run both direct and gateway-routed requests simultaneously to validate behavior before full cutover. 
+ +[source,text] +---- +┌─────────────────┐ +│ Application │ +└────────┬────────┘ + │ + ┌────▼─────┐ + │ Feature │ + │ Flag │ + └────┬─────┘ + │ + ┌────▼──────────────┐ + │ │ +┌───▼─────┐ ┌─────▼─────┐ +│ Direct │ │ Gateway │ +│Provider │ │ Route │ +└─────────┘ └───────────┘ +---- + + +Benefits: + +* No downtime +* Easy rollback +* Compare results side-by-side +* Gradual traffic shift + +== Step-by-step migration + +=== Add environment variables + +Add gateway configuration to your environment without removing existing provider keys (yet). + +*.env (or equivalent)* +[source,bash] +---- +# Existing (keep these for now) +OPENAI_API_KEY=sk-... +ANTHROPIC_API_KEY=sk-ant-... + +# New gateway configuration +REDPANDA_AI_GATEWAY_URL= +REDPANDA_AI_GATEWAY_TOKEN={YOUR_TOKEN} + +# Feature flag (start with gateway disabled) +USE_AI_GATEWAY=false +---- + + +=== Update your code + +==== Option A: OpenAI SDK (recommended for most use cases) + +Before (Direct OpenAI) + +[source,python] +---- +from openai import OpenAI + +client = OpenAI( + api_key=os.getenv("OPENAI_API_KEY") +) + +response = client.chat.completions.create( + model="gpt-5.2", + messages=[{"role": "user", "content": "Hello"}] +) +---- + + +After (Gateway-routed with feature flag) + +[source,python] +---- +from openai import OpenAI +import os + +# Feature flag determines which client to use +use_gateway = os.getenv("USE_AI_GATEWAY", "false").lower() == "true" + +if use_gateway: + client = OpenAI( + base_url=os.getenv("REDPANDA_AI_GATEWAY_URL"), + api_key=os.getenv("REDPANDA_AI_GATEWAY_TOKEN"), + ) + model = "openai/gpt-5.2" # Add vendor prefix +else: + client = OpenAI( + api_key=os.getenv("OPENAI_API_KEY") + ) + model = "gpt-5.2" # Original model name + +response = client.chat.completions.create( + model=model, + messages=[{"role": "user", "content": "Hello"}] +) +---- + + +Better: Abstraction function + +[source,python] +---- +from openai import OpenAI +import os + +def get_llm_client(): + """Returns configured OpenAI client (direct or gateway-routed)""" + use_gateway = os.getenv("USE_AI_GATEWAY", "false").lower() == "true" + + if use_gateway: + return OpenAI( + base_url=os.getenv("REDPANDA_AI_GATEWAY_URL"), + api_key=os.getenv("REDPANDA_AI_GATEWAY_TOKEN"), + ) + else: + return OpenAI(api_key=os.getenv("OPENAI_API_KEY")) + +def get_model_name(base_model: str) -> str: + """Returns model name with vendor prefix if using gateway""" + use_gateway = os.getenv("USE_AI_GATEWAY", "false").lower() == "true" + return f"openai/{base_model}" if use_gateway else base_model + +# Usage +client = get_llm_client() +response = client.chat.completions.create( + model=get_model_name("gpt-5.2"), + messages=[{"role": "user", "content": "Hello"}] +) +---- + + +==== Option B: Anthropic SDK + +Before (Direct Anthropic) + +[source,python] +---- +from anthropic import Anthropic + +client = Anthropic( + api_key=os.getenv("ANTHROPIC_API_KEY") +) + +response = client.messages.create( + model="claude-sonnet-4.5", + max_tokens=1024, + messages=[{"role": "user", "content": "Hello"}] +) +---- + + +After (Gateway via OpenAI-compatible wrapper) + +Because AI Gateway provides an OpenAI-compatible endpoint, we recommend migrating Anthropic SDK usage to OpenAI SDK for consistency: + +[source,python] +---- +from openai import OpenAI +import os + +use_gateway = os.getenv("USE_AI_GATEWAY", "false").lower() == "true" + +if use_gateway: + # Use OpenAI SDK with gateway + client = OpenAI( + base_url=os.getenv("REDPANDA_AI_GATEWAY_URL"), + api_key=os.getenv("REDPANDA_AI_GATEWAY_TOKEN"), 
+ ) + + response = client.chat.completions.create( + model="anthropic/claude-sonnet-4.5", + max_tokens=1024, + messages=[{"role": "user", "content": "Hello"}] + ) +else: + # Keep existing Anthropic SDK + from anthropic import Anthropic + client = Anthropic(api_key=os.getenv("ANTHROPIC_API_KEY")) + + response = client.messages.create( + model="claude-sonnet-4.5", + max_tokens=1024, + messages=[{"role": "user", "content": "Hello"}] + ) +---- + + +Alternative: Use OpenAI client for OpenAI-compatible gateway + +[source,python] +---- +from openai import OpenAI + +use_gateway = os.getenv("USE_AI_GATEWAY", "false").lower() == "true" + +if use_gateway: + client = OpenAI( + base_url=os.getenv("REDPANDA_AI_GATEWAY_URL"), + api_key=os.getenv("REDPANDA_AI_GATEWAY_TOKEN"), + ) +else: + from anthropic import Anthropic + client = Anthropic(api_key=os.getenv("ANTHROPIC_API_KEY")) +---- + + +==== Option C: Multiple providers + +Before (Separate SDKs) + +[source,python] +---- +from openai import OpenAI +from anthropic import Anthropic + +openai_client = OpenAI(api_key=os.getenv("OPENAI_API_KEY")) +anthropic_client = Anthropic(api_key=os.getenv("ANTHROPIC_API_KEY")) + +# Different code paths +if use_openai: + response = openai_client.chat.completions.create(...) +else: + response = anthropic_client.messages.create(...) +---- + + +After (Unified via Gateway) + +[source,python] +---- +from openai import OpenAI + +# Single client for all providers +client = OpenAI( + base_url=os.getenv("REDPANDA_AI_GATEWAY_URL"), + api_key=os.getenv("REDPANDA_AI_GATEWAY_TOKEN"), +) + +# Same code, different models +if use_openai: + response = client.chat.completions.create( + model="openai/gpt-5.2", + messages=[...] + ) +else: + response = client.chat.completions.create( + model="anthropic/claude-sonnet-4.5", + messages=[...] + ) +---- + + +=== Test gateway connection + +Before changing the feature flag, verify gateway connectivity: + +Python Test Script + +[source,python] +---- +from openai import OpenAI +import os + +def test_gateway_connection(): + client = OpenAI( + base_url=os.getenv("REDPANDA_AI_GATEWAY_URL"), + api_key=os.getenv("REDPANDA_AI_GATEWAY_TOKEN"), + ) + + try: + response = client.chat.completions.create( + model="openai/gpt-5.2-mini", # Use cheap model for testing + messages=[{"role": "user", "content": "Test"}], + max_tokens=10 + ) + print("✅ Gateway connection successful") + print(f"Response: {response.choices[0].message.content}") + return True + except Exception as e: + print(f"❌ Gateway connection failed: {e}") + return False + +if __name__ == "__main__": + test_gateway_connection() +---- + + +Expected output: + +[source,text] +---- +Gateway connection successful +Response: Hello +---- + + +Common issues: + +* `401 Unauthorized` → Check `REDPANDA_AI_GATEWAY_TOKEN` +* `404 Not Found` → Check `REDPANDA_AI_GATEWAY_URL` (should end with `/v1/chat/completions` or base path) +* `Model not found` → Ensure model is enabled in gateway configuration +* `Model not found` → Ensure model is enabled in gateway configuration + +=== Verify in observability dashboard + +After successful test: + +1. Open AI Gateway observability dashboard +2. In the sidebar, navigate to *Agentic AI > Gateways > {GATEWAY_NAME}*, then select the *Logs* tab. +3. Verify your test request appears +4. 
Check fields: + * Model: `openai/gpt-5.2-mini` + * Provider: OpenAI + * Status: 200 + * Token count: ~10 prompt + ~10 completion + * Cost: // PLACEHOLDER: expected cost + +*If request doesn't appear*: Verify gateway ID and authentication token are correct. + +=== Enable gateway for subset of traffic + +Gradually roll out gateway usage: + +Staged rollout strategy: + +1. *Week 1*: Internal testing only (dev team accounts) +2. *Week 2*: 10% of production traffic +3. *Week 3*: 50% of production traffic +4. *Week 4*: 100% of production traffic + +Implementation options: + +Option A: Environment-based + +[source,python] +---- +# Enable gateway in staging first +use_gateway = os.getenv("ENVIRONMENT") in ["staging", "production"] +---- + + +Option B: Percentage-based + +[source,python] +---- +import random + +# Route 10% of traffic through gateway +use_gateway = random.random() < 0.10 +---- + + +Option C: User-based + +[source,python] +---- +# Enable for internal users first +use_gateway = user.email.endswith("@yourcompany.com") +---- + + +Option D: Feature flag service (recommended) + +[source,python] +---- +# LaunchDarkly, Split.io, etc. +use_gateway = feature_flags.is_enabled("ai-gateway", user_context) +---- + + +=== Monitor and compare + +During parallel operation, compare metrics: + +Metrics to monitor: + +[cols="2,1,1,3"] +|=== +| Metric | Direct | Gateway | Notes + +| Success rate +| // track +| // track +| Should be identical + +| Latency p50 +| // track +| // track +| Gateway adds ~// PLACEHOLDER: Xms + +| Latency p99 +| // track +| // track +| Watch for outliers + +| Error rate +| // track +| // track +| Should be identical + +| Cost per 1K requests +| // track +| // track +| Compare estimated costs +|=== + +Monitoring code example: + +[source,python] +---- +import time + +def call_llm_with_metrics(use_gateway: bool, model: str, messages: list): + start_time = time.time() + + try: + client = get_llm_client(use_gateway) + response = client.chat.completions.create( + model=model, + messages=messages + ) + + latency = time.time() - start_time + + # Log metrics + metrics.record("llm.request.success", 1, tags={ + "routing": "gateway" if use_gateway else "direct", + "model": model + }) + metrics.record("llm.request.latency", latency, tags={ + "routing": "gateway" if use_gateway else "direct" + }) + + return response + + except Exception as e: + metrics.record("llm.request.error", 1, tags={ + "routing": "gateway" if use_gateway else "direct", + "error": str(e) + }) + raise +---- + + +=== Full cutover + +Once metrics confirm gateway reliability: + +1. Set feature flag to 100%: ++ +[source,bash] +---- +USE_AI_GATEWAY=true +---- + +2. Deploy updated configuration + +3. Monitor for 24-48 hours + +4. Remove direct provider credentials (optional, for security): ++ +[source,bash] +---- +# .env +# OPENAI_API_KEY=sk-... # Remove after confirming gateway stability +# ANTHROPIC_API_KEY=sk-ant-... # Remove after confirming gateway stability + +REDPANDA_AI_GATEWAY_URL= +REDPANDA_AI_GATEWAY_TOKEN={YOUR_TOKEN} +---- + +5. 
Remove direct integration code (optional, for cleanup): ++ +[source,python] +---- +# Remove feature flag logic, keep only gateway path +client = OpenAI( + base_url=os.getenv("REDPANDA_AI_GATEWAY_URL"), + api_key=os.getenv("REDPANDA_AI_GATEWAY_TOKEN"), +) +---- + +== Rollback procedure + +If issues arise, rollback is simple: + +Emergency rollback (< 1 minute): + +[source,bash] +---- +# Set feature flag back to false +USE_AI_GATEWAY=false + +# Restart application (if needed) +---- + + +Gradual rollback: + +[source,python] +---- +# Reduce gateway traffic percentage +use_gateway = random.random() < 0.50 # Back to 50% +use_gateway = random.random() < 0.10 # Back to 10% +use_gateway = False # Back to 0% +---- + + +*Keep direct provider credentials until you're confident in gateway stability.* + +== Framework-specific migration + +[tabs] +====== +LangChain:: ++ +-- +Before + +[source,python] +---- +from langchain_openai import ChatOpenAI + +llm = ChatOpenAI( + model="gpt-5.2", + api_key=os.getenv("OPENAI_API_KEY") +) +---- + +After + +[source,python] +---- +from langchain_openai import ChatOpenAI + +use_gateway = os.getenv("USE_AI_GATEWAY", "false").lower() == "true" + +if use_gateway: + llm = ChatOpenAI( + model="openai/gpt-5.2", + base_url=os.getenv("REDPANDA_AI_GATEWAY_URL"), + api_key=os.getenv("REDPANDA_AI_GATEWAY_TOKEN"), + ) +else: + llm = ChatOpenAI( + model="gpt-5.2", + api_key=os.getenv("OPENAI_API_KEY") + ) +---- +-- + +LlamaIndex:: ++ +-- +Before + +[source,python] +---- +from llama_index.llms.openai import OpenAI + +llm = OpenAI(model="gpt-5.2") +---- + +After + +[source,python] +---- +from llama_index.llms.openai import OpenAI + +use_gateway = os.getenv("USE_AI_GATEWAY", "false").lower() == "true" + +if use_gateway: + llm = OpenAI( + model="openai/gpt-5.2", + api_base=os.getenv("REDPANDA_AI_GATEWAY_URL"), + api_key=os.getenv("REDPANDA_AI_GATEWAY_TOKEN"), + ) +else: + llm = OpenAI(model="gpt-5.2") +---- +-- + +Vercel AI SDK:: ++ +-- +Before + +[source,typescript] +---- +import { openai } from '@ai-sdk/openai'; + +const model = openai('gpt-5.2'); +---- + +After + +[source,typescript] +---- +import { createOpenAI } from '@ai-sdk/openai'; +import { openai } from '@ai-sdk/openai'; + +const useGateway = process.env.USE_AI_GATEWAY === 'true'; + +const model = useGateway + ? 
createOpenAI({ + baseURL: process.env.REDPANDA_AI_GATEWAY_URL, + apiKey: process.env.REDPANDA_AI_GATEWAY_TOKEN, + })('openai/gpt-5.2') + : openai('gpt-5.2'); +---- +-- +====== + +== Migration checklist + +Use this checklist to track your migration: + +*Prerequisites* + + * [ ] Gateway configured and tested + * [ ] Providers enabled + * [ ] Models enabled + * [ ] Gateway ID and endpoint URL obtained + +*Code Changes* + + * [ ] Environment variables added + * [ ] Feature flag implemented + * [ ] Client initialization updated + * [ ] Model name prefix added (vendor/model_id) + * [ ] Authentication configured (API token) + +*Testing* + + * [ ] Gateway connection test passes + * [ ] Test request visible in observability dashboard + * [ ] Integration tests pass with gateway + * [ ] End-to-end tests pass with gateway + +*Staged rollout* + + * [ ] Week 1: Internal testing (dev team only) + * [ ] Week 2: 10% production traffic + * [ ] Week 3: 50% production traffic + * [ ] Week 4: 100% production traffic + +*Monitoring* + + * [ ] Success rate comparison (direct vs gateway) + * [ ] Latency comparison (direct vs gateway) + * [ ] Error rate comparison (direct vs gateway) + * [ ] Cost comparison (direct vs gateway) + +*Cleanup* (optional, after 30 days stable) + + * [ ] Remove direct provider credentials + * [ ] Remove feature flag logic + * [ ] Update documentation + * [ ] Archive direct integration code + +== Common migration issues + +=== "Model not found" error + +Symptom: +[source,text] +---- +Error: Model 'openai/gpt-5.2' not found +---- + + +Causes: + +1. Model not enabled in gateway configuration +2. Wrong model name format (missing vendor prefix) +3. Typo in model name + +Solution: + +1. Verify model is enabled: In the sidebar, navigate to *Agentic AI > Models* and confirm the model is enabled. +2. Confirm format: `vendor/model_id` (for example, `openai/gpt-5.2`, not `gpt-5.2` without prefix) +3. Check supported models: // PLACEHOLDER: link to model catalog + +=== Higher latency than expected + +Expected gateway overhead: // PLACEHOLDER: Xms p50, Yms p99 + +If latency is significantly higher: + +1. Check geographic routing (gateway → provider region) +2. Verify provider pool configuration (no unnecessary fallbacks) +3. Review CEL routing complexity +4. Check for rate limiting (adds retry latency) + +Solution: Review geographic routing and provider pool configuration. + +=== Requests not appearing in dashboard + +Causes: + +1. Wrong gateway ID +2. Request failed before reaching gateway +3. UI delay (logs may take // PLACEHOLDER: Xs to appear) + +Solution: Verify gateway ID and check for UI delay (logs may take a few seconds to appear). + +=== Different response format + +Symptom: Response structure differs between direct and gateway + +AI Gateway returns OpenAI-compatible responses for all providers. Anthropic responses are automatically transformed to match the OpenAI response format. If you encounter differences, file a support ticket with the request ID from logs. 
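To confirm the normalized format in your own environment, run the same parsing code against models from different providers. This is a minimal sketch that assumes both models are enabled in your gateway:

[source,python]
----
from openai import OpenAI
import os

# The same parsing code should work for any provider routed through the
# gateway, because responses are normalized to the OpenAI format.
client = OpenAI(
    base_url=os.getenv("REDPANDA_AI_GATEWAY_URL"),
    api_key=os.getenv("REDPANDA_AI_GATEWAY_TOKEN"),
)

for model in ["openai/gpt-5.2-mini", "anthropic/claude-sonnet-4.5"]:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Reply with one word."}],
        max_tokens=10,
    )
    # Identical field access regardless of the upstream provider
    print(model, response.choices[0].message.content, response.usage.total_tokens)
----

If the printed structure differs between the two models, capture the request IDs from the gateway logs before filing the support ticket.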
+ +== Advanced migration scenarios + +=== Custom request timeouts + +Before + +[source,python] +---- +client = OpenAI(api_key=..., timeout=30.0) +---- + + +After + +[source,python] +---- +client = OpenAI( + base_url=os.getenv("REDPANDA_AI_GATEWAY_URL"), + api_key=os.getenv("REDPANDA_AI_GATEWAY_TOKEN"), + timeout=30.0 # Still supported +) +---- + + +=== Streaming responses + +Before + +[source,python] +---- +stream = client.chat.completions.create( + model="gpt-5.2", + messages=[...], + stream=True +) + +for chunk in stream: + print(chunk.choices[0].delta.content, end="") +---- + + +After + +[source,python] +---- +stream = client.chat.completions.create( + model="openai/gpt-5.2", # Add vendor prefix + messages=[...], + stream=True +) + +for chunk in stream: + print(chunk.choices[0].delta.content, end="") +---- + + +=== Custom headers (for example, user tracking) + +Before + +[source,python] +---- +response = client.chat.completions.create( + model="gpt-5.2", + messages=[...], + extra_headers={"X-User-ID": user.id} +) +---- + + +After + +[source,python] +---- +response = client.chat.completions.create( + model="openai/gpt-5.2", + messages=[...], + extra_headers={ + "X-User-ID": user.id, # Custom headers still supported + } +) +---- + + +NOTE: Gateway may use custom headers for routing (for example, CEL expressions can reference `request.headers["X-User-ID"]`) + +== Post-migration benefits + +After successful migration, you gain: + +Simplified provider management + +[source,python] +---- +# Switch providers with one config change (no code changes) +model = "anthropic/claude-sonnet-4.5" # Was openai/gpt-5.2 +---- + +Unified observability + +* All requests in one dashboard +* Cross-provider cost comparison +* Session reconstruction across models + +Automatic failover + +* Configure once, benefit everywhere +* No application-level retry logic needed + +Cost controls + +* Enforce budgets centrally +* Rate limit per team/customer +* No surprises in cloud bills + +A/B testing + +* Test new models without code changes +* Compare quality/cost/latency +* Gradual rollout via routing policies + +== Next steps + +* xref:ai-agents:ai-gateway/cel-routing-cookbook.adoc[]: Configure advanced routing policies. +* xref:ai-agents:ai-gateway/mcp-aggregation-guide.adoc[]: Explore MCP aggregation. diff --git a/modules/ai-agents/partials/observability-logs.adoc b/modules/ai-agents/partials/observability-logs.adoc new file mode 100644 index 000000000..09702bc7b --- /dev/null +++ b/modules/ai-agents/partials/observability-logs.adoc @@ -0,0 +1,772 @@ += Observability: Logs +:description: Guide to AI Gateway request logs, including where to find logs, log fields, filtering, searching, inspecting requests, common analysis tasks, log retention, export options, privacy/security, and troubleshooting. +:page-topic-type: reference +:personas: platform_admin, app_developer +:learning-objective-1: Locate and filter request logs to debug failures or reconstruct conversations +:learning-objective-2: Interpret log fields to diagnose performance and cost issues +:learning-objective-3: Export logs for compliance auditing or long-term analysis + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] + +AI Gateway logs every LLM request that passes through it, capturing the full request/response history, token usage, cost, latency, and routing decisions. This page explains how to find, filter, and interpret request logs. + +== Before you begin + +* You have an active AI Gateway with at least one request processed. 
+* You have access to the Redpanda Cloud Console. +* You have the appropriate permissions to view gateway logs. + +Use logs for: + +* Debugging specific failed requests +* Reconstructing user conversation sessions +* Auditing what prompts were sent and responses received +* Understanding which provider handled a request +* Investigating latency spikes or errors for specific users + +Use metrics for: Aggregate analytics, trends, cost tracking across time. See xref:ai-agents:ai-gateway/observability-metrics.adoc[]. + +== Where to find logs + +1. Navigate to logs view: + * In the sidebar, navigate to *Agentic AI > Gateways > {gateway-name}*, then select the *Logs* tab. + * Or: Gateway detail page -> Logs tab + +2. Select gateway: + * Filter by specific gateway, or view all gateways + * // PLACEHOLDER: screenshot of gateway selector + +3. Set time range: + * Default: Last 1 hour + * Options: Last 5 minutes, 1 hour, 24 hours, 7 days, 30 days, Custom + * // PLACEHOLDER: screenshot of time range picker + +== Request log fields + +Each log entry contains: + +=== Core request info + +[cols="1,2,2"] +|=== +| Field | Description | Example + +| *Request ID* +| Unique identifier for this request +| `req_abc123...` + +| *Timestamp* +| When request was received (UTC) +| `2025-01-11T14:32:10.123Z` + +| *Gateway ID* +| Which gateway handled this request +| `gw_abc123...` + +| *Gateway Name* +| Human-readable gateway name +| `production-gateway` + +| *Status* +| HTTP status code +| `200`, `400`, `429`, `500` + +| *Latency* +| Total request duration (ms) +| `1250ms` +|=== + +=== Model and provider info + +[cols="1,2,2"] +|=== +| Field | Description | Example + +| *Requested Model* +| Model specified in request +| `openai/gpt-5.2` + +| *Actual Model* +| Model that handled request (may differ due to routing) +| `anthropic/claude-sonnet-4.5` + +| *Provider* +| Which provider handled the request +| `OpenAI`, `Anthropic` + +| *Provider Pool* +| Pool used (primary/fallback) +| `primary`, `fallback` + +| *Fallback Triggered* +| Whether fallback was used +| `true`/`false` + +| *Fallback Reason* +| Why fallback occurred +| `rate_limit`, `timeout`, `5xx_error` +|=== + +=== Token and cost info + +[cols="1,2,2"] +|=== +| Field | Description | Example + +| *Prompt Tokens* +| Input tokens consumed +| `523` + +| *Completion Tokens* +| Output tokens generated +| `187` + +| *Total Tokens* +| Prompt + completion +| `710` + +| *Estimated Cost* +| Calculated cost for this request +| `$0.0142` + +| *Cost Breakdown* +| Per-token costs +| `Prompt: $0.005, Completion: $0.0092` +|=== + +=== Request content (expandable) + +[cols="1,2,2"] +|=== +| Field | Description | Notes + +| *Request Headers* +| All headers sent +| Includes authorization and custom headers + +| *Request Body* +| Full request payload +| Includes messages, parameters + +| *Response Headers* +| Headers returned +| // PLACEHOLDER: Any gateway-specific headers? + +| *Response Body* +| Full response payload +| Includes message content, metadata +|=== + +=== Routing and policy info + +[cols="1,2,2"] +|=== +| Field | Description | Example + +| *CEL Expression* +| Routing rule applied (if any) +| `request.headers["tier"] == "premium" ? 
...` + +| *CEL Result* +| Model selected by CEL +| `openai/gpt-5.2` + +| *Rate Limit Status* +| Whether rate limited +| `allowed`, `throttled`, `blocked` + +| *Spend Limit Status* +| Whether budget exceeded +| `allowed`, `blocked` + +| *Policy Stage* +| Where request was processed/blocked +| `rate_limit`, `routing`, `execution` +|=== + +=== Error info (if applicable) + +[cols="1,2,2"] +|=== +| Field | Description | Example + +| *Error Code* +| Gateway or provider error code +| `RATE_LIMIT_EXCEEDED`, `MODEL_NOT_FOUND` + +| *Error Message* +| Human-readable error +| `Request rate limit exceeded for gateway` + +| *Provider Error* +| Upstream provider error +| `OpenAI API returned 429: Rate limit exceeded` +|=== + +== Filter logs + +=== By gateway + +// PLACEHOLDER: Screenshot of gateway filter dropdown + +[source,text] +---- +Filter: Gateway = "production-gateway" +---- + + +Shows only requests for the selected gateway. + +Use case: Isolate production traffic from staging + +=== By model + +// PLACEHOLDER: Screenshot of model filter + +[source,text] +---- +Filter: Model = "openai/gpt-5.2" +---- + + +Shows only requests for specific model. + +Use case: Compare quality/cost between models + +=== By provider + +[source,text] +---- +Filter: Provider = "OpenAI" +---- + + +Shows only requests handled by specific provider. + +Use case: Investigate provider-specific issues + +=== By status + +[source,text] +---- +Filter: Status = "429" +---- + + +Shows only requests with specific HTTP status. + +Common filters: + +* `200`: Successful requests +* `400`: Bad requests (client errors) +* `401`: Authentication errors +* `429`: Rate limited requests +* `500`: Server errors +* `5xx`: All server errors + +Use case: Find all failed requests + +=== By time range + +[source,text] +---- +Filter: Timestamp >= "2025-01-11T14:00:00Z" AND Timestamp <= "2025-01-11T15:00:00Z" +---- + + +Use case: Investigate incident during specific time window + +=== By custom header + +[source,text] +---- +Filter: request.headers["x-user-id"] = "user_123" +---- + + +Shows only requests for specific user. + +Use case: Debug user-reported issue + +=== By token range + +[source,text] +---- +Filter: Total Tokens > 10000 +---- + + +Shows only high-token requests. + +Use case: Find expensive requests + +=== By latency + +[source,text] +---- +Filter: Latency > 5000ms +---- + + +Shows only slow requests. + +Use case: Investigate performance issues + +=== Combined filters + +[source,text] +---- +Gateway = "production-gateway" +AND Status >= 500 +AND Timestamp >= "last 24 hours" +---- + + +Shows production server errors in last 24 hours. + +// PLACEHOLDER: Screenshot of multiple filters applied + +== Search logs + +=== Full-text search (if supported) + +// PLACEHOLDER: Confirm if full-text search is available + +[source,text] +---- +Search: "specific error message" +---- + + +Searches across all text fields (error messages, request/response content). + +=== Search by request content + +[source,text] +---- +Search in Request Body: "user's actual question" +---- + + +Find requests containing specific prompt text. + +Use case: "A user said the AI gave a wrong answer about X" → Search for "X" in prompts + +=== Search by response content + +[source,text] +---- +Search in Response Body: "specific AI response phrase" +---- + + +Find responses containing specific text. + +Use case: Find all requests where AI mentioned a competitor name + +== Inspect individual requests + +Click any log entry to expand full details. 
+ +// PLACEHOLDER: Screenshot of expanded log entry + +=== Request details tab + +Shows: + +* Full request headers +* Full request body (formatted JSON) +* All parameters (temperature, max_tokens, etc.) +* Custom headers used for routing + +Example: + +[source,json] +---- +{ + "model": "openai/gpt-5.2", + "messages": [ + { + "role": "system", + "content": "You are a helpful assistant." + }, + { + "role": "user", + "content": "What is Redpanda?" + } + ], + "temperature": 0.7, + "max_tokens": 500 +} +---- + + +=== Response details tab + +Shows: + +* Full response headers +* Full response body (formatted JSON) +* Finish reason (`stop`, `length`, `content_filter`) +* Response metadata + +Example: + +[source,json] +---- +{ + "id": "chatcmpl-...", + "choices": [ + { + "message": { + "role": "assistant", + "content": "Redpanda is a streaming data platform..." + }, + "finish_reason": "stop" + } + ], + "usage": { + "prompt_tokens": 24, + "completion_tokens": 87, + "total_tokens": 111 + } +} +---- + + +=== Routing details tab + +Shows: + +* CEL expression evaluated (if any) +* CEL result (which model was selected) +* Provider pool used (primary/fallback) +* Fallback trigger reason (if applicable) +* Rate limit evaluation (allowed/blocked) +* Spend limit evaluation (allowed/blocked) + +Example: + +[source,yaml] +---- +CEL Expression: | + request.headers["x-user-tier"] == "premium" + ? "openai/gpt-5.2" + : "openai/gpt-5.2-mini" + +CEL Result: "openai/gpt-5.2" + +Provider Pool: primary +Fallback Triggered: false + +Rate Limit: allowed (45/100 requests used) +Spend Limit: allowed ($1,234 / $50,000 budget used) +---- + + +=== Performance details tab + +Shows: + +* Total latency breakdown + * Gateway processing time: // PLACEHOLDER: Xms + * Provider API call time: // PLACEHOLDER: Xms + * Network time: // PLACEHOLDER: Xms +* Token generation rate (tokens/second) +* Time to first token (for streaming, if supported) + +Example: + +[source,text] +---- +Total Latency: 1,250ms +├─ Gateway Processing: 12ms +├─ Provider API Call: 1,215ms +└─ Network Overhead: 23ms + +Token Generation Rate: 71 tokens/second +---- + + +== Common log analysis tasks + +=== Task 1: "Why did this request fail?" + +1. Find the request: + + * Filter by timestamp (when user reported issue) + * Or search by request content + * Or filter by custom header (user ID) + +2. Check status: + + * `400` → Client error (bad request format, invalid parameters) + * `401` → Authentication issue + * `404` → Model not found + * `429` → Rate limited + * `500`/`5xx` → Provider or gateway error + +3. Check error message: + + * Gateway error: Issue with configuration, rate limits, etc. + * Provider error: Issue with upstream API (OpenAI, Anthropic, etc.) + +4. Check routing: + * Was fallback triggered? (May indicate primary provider issue) + * Was CEL rule applied correctly? + +Common causes: + +* Model not enabled in gateway +* Rate limit exceeded +* Monthly budget exceeded +* Invalid API key for provider +* Provider outage/rate limit +* Malformed request + +=== Task 2: "Reconstruct a user's conversation" + +1. *Filter by user*: ++ +[source,text] +---- +Filter: request.headers["x-user-id"] = "user_123" +---- + +2. *Sort by timestamp* (ascending) + +3. *Review conversation flow*: + + * Each request shows prompt + * Each response shows AI reply + * Reconstruct full conversation thread + +Use case: User says "the AI contradicted itself" → View full conversation history + +=== Task 3: "Why is latency high for this user?" + +1. 
*Find user's requests*: ++ +[source,text] +---- +Filter: request.headers["x-user-id"] = "user_123" +AND Latency > 3000ms +---- + +2. *Check Performance Details*: + + * Is gateway processing slow? (Likely CEL complexity) + * Is provider API slow? (Upstream latency) + * Is token generation rate normal? (Tokens/second) + +3. *Compare to other requests*: + + * Filter for same model + * Compare latency percentiles + * Identify if issue is user-specific or model-wide + +Common causes: + +* Complex CEL routing rules +* Provider performance degradation +* Large context windows (high token count) +* Network issues + +=== Task 4: "Which requests used the fallback provider?" + +1. *Filter by fallback*: ++ +[source,text] +---- +Filter: Fallback Triggered = true +---- + +2. *Group by Fallback Reason*: + + * Rate limit exceeded (primary provider throttled) + * Timeout (primary provider slow) + * 5xx error (primary provider error) + +3. *Analyze pattern*: + + * Is fallback happening frequently? (May indicate primary provider issue) + * Is fallback successful? (Check status of fallback requests) + +Use case: Verify failover is working as expected + +=== Task 5: "What did we spend on this customer today?" + +1. *Filter by customer*: ++ +[source,text] +---- +Filter: request.headers["x-customer-id"] = "customer_abc" +AND Timestamp >= "today" +---- + +2. *Sum estimated costs* (if UI supports): + + // PLACEHOLDER: Does UI have cost aggregation for filtered results? + * Total: $X.XX + * Breakdown by model + +3. *Export to CSV* (if supported): + + // PLACEHOLDER: Is CSV export available? + * For detailed billing analysis + +Use case: Chargeback/showback to customers + +== Log retention + +// PLACEHOLDER: Confirm log retention policy + +Retention period: // PLACEHOLDER: e.g., 30 days, 90 days, configurable + +After retention period: + +* Logs are deleted automatically +* Aggregate metrics retained longer (see xref:ai-agents:ai-gateway/observability-metrics.adoc[]) + +Export logs (if needed for longer retention): + +// PLACEHOLDER: Is log export available? Via API? CSV? + +== Log export + +// PLACEHOLDER: Confirm export capabilities + +=== Export to CSV + +// PLACEHOLDER: Add UI path for export, or indicate not available + +1. Apply filters for desired logs +2. Click "Export to CSV" +3. Download includes all filtered logs with full fields + +=== Export via API + +// PLACEHOLDER: If API is available for log export + +[source,bash] +---- +curl https://{CLUSTER_ID}.cloud.redpanda.com/api/ai-gateway/logs \ + -H "Authorization: Bearer ${REDPANDA_CLOUD_TOKEN}" \ + -G \ + --data-urlencode "gateway_id=gw_abc123" \ + --data-urlencode "start_time=2025-01-11T00:00:00Z" \ + --data-urlencode "end_time=2025-01-11T23:59:59Z" +---- + + +=== Integration with observability platforms + +// PLACEHOLDER: Are there integrations with external platforms? + +Supported integrations (if any): + +* OpenTelemetry export → Send logs to Jaeger, Datadog, New Relic +* CloudWatch Logs → For AWS deployments +* // PLACEHOLDER: Others? + + +== Privacy and security + +=== What is logged + +// PLACEHOLDER: Confirm what is logged by default + +AI Gateway logs by default: + +* Request headers (including custom headers) +* Request body (full prompt content) +* Response body (full AI response) +* Token usage, cost, latency +* Routing decisions, policy evaluations + +AI Gateway does not log (if applicable): + +* // PLACEHOLDER: Anything redacted? API keys? Specific headers? 
+ +=== Redaction options + +// PLACEHOLDER: Are there options to redact PII or sensitive data? + +If redaction is supported: + +* Configure redaction rules for specific fields +* Mask PII (email addresses, phone numbers, etc.) +* Redact custom header values + +Example: + +[source,yaml] +---- +# PLACEHOLDER: Actual configuration format +redaction: + - field: request.headers.x-api-key + action: mask + - field: request.body.messages[].content + pattern: "\\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\\.[A-Z]{2,}\\b" # Email regex + action: replace + replacement: "[REDACTED_EMAIL]" +---- + + +=== Access control + +// PLACEHOLDER: Who can view logs? RBAC? + +Permissions required: + +* View logs: // PLACEHOLDER: role/permission name +* Export logs: // PLACEHOLDER: role/permission name + +Audit trail: + +* Log access is audited (who viewed which logs, when) +* // PLACEHOLDER: Where to find audit trail? + +== Troubleshoot log issues + +=== Issue: "Logs not appearing for my request" + +Possible causes: + +1. Log ingestion delay (wait // PLACEHOLDER: Xs) +2. Wrong gateway ID filter +3. Request failed before reaching gateway (authentication error) +4. Time range filter too narrow + +Solution: + +1. Wait a moment and refresh +2. Remove all filters, search by timestamp +3. Check client-side error logs +4. Expand time range to "Last 1 hour" + +=== Issue: "Missing request/response content" + +Possible causes: + +1. Payload too large (// PLACEHOLDER: size limit?) +2. Redaction rules applied +3. // PLACEHOLDER: Other reasons? + +Solution: + +// PLACEHOLDER: How to retrieve full content if truncated? + +=== Issue: "Cost estimate incorrect" + +Possible causes: + +1. Cost estimate based on public pricing (may differ from your contract) +2. Provider changed pricing +3. // PLACEHOLDER: Other reasons? + +Note: Cost estimates are approximate. Use provider invoices for billing. + +== Next steps + +* xref:ai-agents:ai-gateway/observability-metrics.adoc[]: Aggregate analytics and cost tracking. \ No newline at end of file diff --git a/modules/ai-agents/partials/observability-metrics.adoc b/modules/ai-agents/partials/observability-metrics.adoc new file mode 100644 index 000000000..42d7c514f --- /dev/null +++ b/modules/ai-agents/partials/observability-metrics.adoc @@ -0,0 +1,860 @@ += Observability: Metrics and Analytics +:description: Guide to AI Gateway metrics and analytics, including where to find metrics, key metrics explained, dashboard views, filtering/grouping, alerting, exporting, common analysis tasks, retention, API access, best practices, and troubleshooting. +:page-topic-type: reference +:personas: platform_admin, app_developer +:learning-objective-1: Monitor aggregate metrics to track usage patterns and budget adherence +:learning-objective-2: Compare model and provider performance using latency and cost metrics +:learning-objective-3: Configure alerts for budget thresholds and performance degradation + +include::ai-agents:partial$ai-gateway-byoc-note.adoc[] + +AI Gateway provides aggregate metrics and analytics dashboards to help you understand usage patterns, costs, performance, and errors across all your LLM traffic. + +== Before you begin + +* You have an active AI Gateway with at least one request processed. +* You have access to the Redpanda Cloud Console. +* You have the appropriate permissions to view gateway metrics. 
+ +Use metrics for: + +* Cost tracking and budget management +* Usage trends over time +* Performance monitoring (latency, error rates) +* Capacity planning +* Model/provider comparison + +Use logs for: Debugging specific requests, viewing full prompts/responses. See xref:ai-agents:ai-gateway/observability-logs.adoc[]. + +== Where to find metrics + +1. Navigate to analytics dashboard: + * In the sidebar, navigate to *Agentic AI > Gateways > {gateway-name}*, then select the *Analytics* tab. + * Or: Gateway detail page -> Analytics tab + +2. Select gateway (optional): + * View all gateways (org-wide metrics) + * Or filter to specific gateway + +3. Set time range: + * Default: Last 7 days + * Options: Last 24 hours, 7 days, 30 days, 90 days, Custom + * // PLACEHOLDER: screenshot of time range picker + +== Key metrics + +=== Request volume + +What it shows: Total number of requests over time + +// PLACEHOLDER: Screenshot of request volume graph + +Graph type: Time series line chart + +Filters: + +* By gateway +* By model +* By provider +* By status (success/error) + +Use cases: + +* Identify usage patterns (peak hours, days of week) +* Detect traffic spikes or drops +* Capacity planning + +Example insights: + +* "Traffic doubles every Monday morning at 9am" → Scale infrastructure +* "Staging gateway has more traffic than prod" → Investigate runaway testing + +=== Token usage + +What it shows: Prompt, completion, and total tokens consumed + +// PLACEHOLDER: Screenshot of token usage graph + +Graph type: Stacked area chart (prompt vs completion tokens) + +Metrics: + +* Total tokens +* Prompt tokens (input) +* Completion tokens (output) +* Tokens per request (average) + +Breakdowns: + +* By gateway +* By model +* By provider + +Use cases: + +* Understand cost drivers (prompt vs completion tokens) +* Identify verbose prompts or responses +* Optimize token usage + +Example insights: + +* "90% of tokens are completion tokens" → Responses are verbose, optimize max_tokens +* "Staging uses 10x more tokens than prod" → Investigate test suite + +=== Estimated spend + +What it shows: Calculated cost based on token usage and public pricing + +// PLACEHOLDER: Screenshot of cost tracking dashboard + +Graph type: Time series line chart with cost breakdown + +Metrics: + +* Total estimated spend +* Spend by model +* Spend by provider +* Spend by gateway +* Cost per 1K requests +* Cost per 1M tokens + +Breakdowns: + +* By gateway (for chargeback/showback) +* By model (for cost optimization) +* By provider (for negotiation leverage) +* By custom header (if configured, e.g., `x-customer-id`) + +Use cases: + +* Budget tracking ("Are we staying under $50K/month?") +* Cost attribution ("Which team spent the most?") +* Model comparison ("Is Claude cheaper than GPT-4 for our use case?") +* Forecasting ("At this rate, we'll spend $X next month") + +Important notes: + +* *Estimates based on public pricing* (may differ from your contract) +* *Not a substitute for provider invoices* (use for approximation only) +* Update frequency: // PLACEHOLDER: Real-time? Hourly? Daily? 
+ +Example insights: + +* "Customer A accounts for 60% of spend" → Consider rate limits or tiered pricing +* "GPT-5.2 is 3x more expensive than Claude Sonnet for similar quality" → Optimize routing + +=== Latency + +What it shows: Request duration from gateway to provider and back + +// PLACEHOLDER: Screenshot of latency histogram + +Metrics: + +* p50 (median) latency +* p95 latency +* p99 latency +* Min/max latency +* Average latency + +Breakdowns: + +* By gateway +* By model +* By provider +* By token range (longer responses = higher latency) + +Use cases: + +* Identify slow models or providers +* Set SLO targets (e.g., "p95 < 2 seconds") +* Detect performance regressions + +Example insights: + +* "GPT-5.2 p99 latency spiked to 10 seconds yesterday" → Investigate provider issue +* "Claude Sonnet is 30% faster than GPT-5.2 for same prompts" → Optimize for latency + +Latency components (if available): + +// PLACEHOLDER: Does gateway show latency breakdown? +* Gateway processing time +* Provider API time +* Network time + +=== Error rate + +What it shows: Percentage of failed requests over time + +// PLACEHOLDER: Screenshot of error rate graph + +Metrics: + +* Total error rate (%) +* Errors by status code (400, 401, 429, 500, etc.) +* Errors by model +* Errors by provider + +Graph type: Time series line chart with error percentage + +Breakdowns: + +* By error type: + * Client errors (4xx) + * Rate limits (429) + * Server errors (5xx) + * Provider errors + * Gateway errors + +Use cases: + +* Detect provider outages +* Identify configuration issues (e.g., model not enabled) +* Monitor rate limit breaches + +Example insights: + +* "Error rate spiked to 15% at 2pm" → OpenAI outage, fallback to Anthropic worked +* "10% of requests fail with 'model not found'" → Model not enabled in gateway + +=== Success rate + +What it shows: Percentage of successful (200) requests over time + +Metric: `Success Rate = (Successful Requests / Total Requests) × 100` + +Target: Typically 99%+ for production workloads + +Use cases: + +* Monitor overall health +* Set up alerts (e.g., "Alert if success rate < 95%") + +=== Fallback rate + +What it shows: Percentage of requests that used fallback provider + +// PLACEHOLDER: Screenshot of fallback rate graph + +Metric: `Fallback Rate = (Fallback Requests / Total Requests) × 100` + +Breakdowns: + +* By fallback reason: + * Rate limit exceeded + * Timeout + * 5xx error + +Use cases: + +* Monitor primary provider reliability +* Verify fallback is working +* Identify when to renegotiate rate limits + +Example insights: + +* "Fallback rate increased to 20% yesterday" → OpenAI hit rate limits, time to increase quota +* "Zero fallbacks in 30 days" → Fallback config may not be working, or primary provider is very reliable + +== Dashboard views + +=== Overview dashboard + +Shows: High-level metrics across all gateways + +// PLACEHOLDER: Screenshot of overview dashboard + +Widgets: + +* Total requests (last 24h, 7d, 30d) +* Total spend (last 24h, 7d, 30d) +* Success rate (current) +* Average latency (current) +* Top 5 models by request volume +* Top 5 gateways by spend + +Use case: Executive view, health at a glance + +=== Gateway dashboard + +Shows: Metrics for a specific gateway + +// PLACEHOLDER: Screenshot of gateway dashboard + +Widgets: + +* Request volume (time series) +* Token usage (time series) +* Estimated spend (time series) +* Latency percentiles (histogram) +* Error rate (time series) +* Model breakdown (pie chart) +* Provider breakdown (pie chart) + +Use case: 
Team-specific monitoring, gateway optimization + +=== Model comparison dashboard + +Shows: Side-by-side comparison of models + +// PLACEHOLDER: Screenshot of model comparison + +Metrics per model: + +* Request count +* Total tokens +* Estimated cost +* Cost per 1K requests +* Average latency +* Error rate + +Use case: Evaluate whether to switch models (cost vs performance) + +Example: + +[cols="2,1,1,1,1"] +|=== +| Model | Requests | Avg Latency | Cost per 1K | Error Rate + +| openai/gpt-5.2 +| 10,000 +| 1.2s +| $5.00 +| 0.5% + +| anthropic/claude-sonnet-4.5 +| 5,000 +| 0.9s +| $3.50 +| 0.3% + +| openai/gpt-5.2-mini +| 20,000 +| 0.7s +| $0.50 +| 1.0% +|=== + +Insight: Claude Sonnet is 25% faster and 30% cheaper than GPT-5.2 with better reliability + +=== Provider comparison dashboard + +Shows: Side-by-side comparison of providers + +Metrics per provider: + +* Request count +* Total spend +* Average latency +* Error rate +* Fallback trigger rate + +Use case: Evaluate provider reliability, negotiate contracts + +=== Cost breakdown dashboard + +Shows: Detailed cost analysis + +// PLACEHOLDER: Screenshot of cost breakdown + +Widgets: + +* Spend by gateway (stacked bar chart) +* Spend by model (pie chart) +* Spend by provider (pie chart) +* Spend by custom dimension (if configured, e.g., customer ID) +* Spend trend (time series with forecast) +* Budget utilization (progress bar: $X / $Y monthly limit) + +Use case: FinOps, budget management, chargeback/showback + +== Filter and group + +=== Filter by gateway + +[source,text] +---- +Filter: Gateway = "production-gateway" +---- + + +Shows metrics for specific gateway only. + +Use case: Isolate prod from staging metrics + +=== Filter by model + +[source,text] +---- +Filter: Model = "openai/gpt-5.2" +---- + + +Shows metrics for specific model only. + +Use case: Evaluate model performance in isolation + +=== Filter by provider + +[source,text] +---- +Filter: Provider = "OpenAI" +---- + + +Shows metrics for specific provider only. + +Use case: Evaluate provider reliability + +=== Filter by status + +[source,text] +---- +Filter: Status = "200" // Only successful requests +Filter: Status >= "500" // Only server errors +---- + + +Use case: Focus on errors, or calculate success rate + +=== Filter by custom dimension + +// PLACEHOLDER: Confirm if custom dimensions are supported for filtering + +[source,text] +---- +Filter: request.headers["x-customer-id"] = "customer_abc" +---- + + +Shows metrics for specific customer. + +Use case: Customer-specific cost tracking for chargeback + +=== Group by dimension + +Common groupings: + +* Group by Gateway +* Group by Model +* Group by Provider +* Group by Status +* Group by Hour/Day/Week/Month (time aggregation) + +Example: "Show me spend grouped by model, for production gateway, over last 30 days" + +== Alerting + +// PLACEHOLDER: Confirm if alerting is supported + +If alerting is supported: + +=== Alert types + +Budget alerts: + +* Alert when spend exceeds X% of monthly budget +* Alert when spend grows Y% week-over-week + +Performance alerts: + +* Alert when error rate > X% +* Alert when p99 latency > Xms +* Alert when success rate < X% + +Usage alerts: + +* Alert when request volume drops (potential outage) +* Alert when fallback rate > X% (primary provider issue) + +=== Alert channels + +// PLACEHOLDER: Supported notification channels +* Email +* Slack +* PagerDuty +* Webhook +* // PLACEHOLDER: Others? 
+ +=== Example alert configuration + +[source,yaml] +---- +# PLACEHOLDER: Actual alert configuration format +alerts: + - name: "High Error Rate" + condition: error_rate > 5% + duration: 5 minutes + channels: [slack, email] + + - name: "Budget Threshold" + condition: monthly_spend > 80% of budget + channels: [email] + + - name: "Latency Spike" + condition: p99_latency > 5000ms + duration: 10 minutes + channels: [pagerduty] +---- + +== Export metrics + +// PLACEHOLDER: Confirm export capabilities + +=== Export to CSV + +1. Apply filters for desired metrics +2. Click "Export to CSV" +3. Download includes time series data + +Use case: Import into spreadsheet for analysis, reporting + +=== Export via API + +// PLACEHOLDER: If API is available for metrics + +[source,bash] +---- +curl https://{CLUSTER_ID}.cloud.redpanda.com/api/ai-gateway/metrics \ + -H "Authorization: Bearer ${REDPANDA_CLOUD_TOKEN}" \ + -G \ + --data-urlencode "gateway_id=gw_abc123" \ + --data-urlencode "start_time=2025-01-01T00:00:00Z" \ + --data-urlencode "end_time=2025-01-31T23:59:59Z" \ + --data-urlencode "metric=requests,tokens,cost" +---- + + +Response: + +[source,json] +---- +{ + "gateway_id": "gw_abc123", + "start_time": "2025-01-01T00:00:00Z", + "end_time": "2025-01-31T23:59:59Z", + "metrics": { + "requests": 1000000, + "tokens": 500000000, + "estimated_cost": 2500.00 + } +} +---- + + +=== Integration with observability platforms + +Supported integrations: + +* *Prometheus*: Native metrics endpoint on port 9090 at `/metrics` +* *OpenTelemetry*: Traces exported to Redpanda topics via the OpenTelemetry exporter + +== Common analysis tasks + +=== "Are we staying within budget?" + +1. View cost breakdown dashboard +2. Check budget utilization widget: + * Current spend: $X + * Monthly budget: $Y + * Utilization: X% + * Days remaining in month: Z +3. Forecast: + * At current rate: $X × (30 / days_elapsed) + * On track to exceed budget? Yes/No + +Action: + +* If approaching limit: Adjust rate limits, optimize models, pause non-prod usage +* If well under budget: Opportunity to test more expensive models + +=== "Which team is using the most resources?" + +1. Filter by gateway (assuming one gateway per team) +2. *Sort by Spend* (descending) +3. View table: + +[cols="2,1,1,1,1"] +|=== +| Gateway | Requests | Tokens | Spend | % of Total + +| team-ml +| 500K +| 250M +| $1,250 +| 50% + +| team-product +| 300K +| 150M +| $750 +| 30% + +| team-eng +| 200K +| 100M +| $500 +| 20% +|=== + +Action: Chargeback costs to teams, or investigate high-usage teams + +=== "Is this model worth the extra cost?" + +1. *Open Model Comparison Dashboard* +2. Select models to compare: + * Expensive model: `openai/gpt-5.2` + * Cheap model: `openai/gpt-5.2-mini` +3. Compare metrics: + +[cols="2,1,1,2"] +|=== +| Metric | GPT-5.2 | GPT-5.2-mini | Difference + +| Cost per 1K requests +| $5.00 +| $0.50 +| *10x* + +| Avg Latency +| 1.2s +| 0.7s +| 58% *faster* (mini) + +| Error Rate +| 0.5% +| 1.0% +| 2x errors (mini) +|=== + +Decision: If mini's error rate is acceptable, save 10x on costs + +=== "Why did costs spike yesterday?" + +1. View cost trend graph +2. Identify spike (e.g., Jan 10th: $500 vs usual $100) +3. Drill down: + * By gateway: Which gateway caused the spike? + * By model: Did someone switch to expensive model? + * By hour: What time did spike occur? +4. 
Cross-reference with logs: + * Filter logs to spike timeframe + * Check for unusual request patterns + * Identify custom header (user ID, customer ID) if present + +Common causes: + +* Test suite running against prod gateway +* A/B test routing all traffic to expensive model +* User error (wrong model in config) +* Runaway loop in application code + +=== "Is provider X more reliable than provider Y?" + +1. Open provider comparison dashboard +2. Compare error rates: + +[cols="2,1,1,2"] +|=== +| Provider | Requests | Error Rate | Fallback Triggers + +| OpenAI +| 500K +| 0.8% +| 50 (rate limits) + +| Anthropic +| 300K +| 0.3% +| 5 (timeouts) +|=== + +Insight: Anthropic has 62% lower error rate + +3. Compare latencies: + +[cols="2,1,1"] +|=== +| Provider | p50 Latency | p99 Latency + +| OpenAI +| 1.0s +| 3.5s + +| Anthropic +| 0.8s +| 2.5s +|=== + +Insight: Anthropic is 20% faster at p50, 28% faster at p99 + +Decision: Prioritize Anthropic in routing pools + +== Metrics retention + +// PLACEHOLDER: Confirm metrics retention policy + +Retention period: + +* *High-resolution* (1-minute granularity): // PLACEHOLDER: for example, 7 days +* *Medium-resolution* (1-hour granularity): // PLACEHOLDER: for example, 30 days +* *Low-resolution* (1-day granularity): // PLACEHOLDER: for example, 1 year + +Note: Aggregate metrics retained longer than individual request logs + +== API access to metrics + +// PLACEHOLDER: Document metrics API if available + +=== List available metrics + +[source,bash] +---- +curl https://{CLUSTER_ID}.cloud.redpanda.com/api/ai-gateway/metrics/list \ + -H "Authorization: Bearer ${REDPANDA_CLOUD_TOKEN}" +---- + + +Response: + +[source,json] +---- +{ + "metrics": [ + "requests", + "tokens.prompt", + "tokens.completion", + "tokens.total", + "cost.estimated", + "latency.p50", + "latency.p95", + "latency.p99", + "errors.rate", + "success.rate", + "fallback.rate" + ] +} +---- + + +=== Query specific metric + +[source,bash] +---- +curl https://{CLUSTER_ID}.cloud.redpanda.com/api/ai-gateway/metrics/query \ + -H "Authorization: Bearer ${REDPANDA_CLOUD_TOKEN}" \ + -H "Content-Type: application/json" \ + -d '{ + "metric": "requests", + "gateway_id": "gw_abc123", + "start_time": "2025-01-01T00:00:00Z", + "end_time": "2025-01-31T23:59:59Z", + "granularity": "1d", + "group_by": ["model"] + }' +---- + + +Response: + +[source,json] +---- +{ + "metric": "requests", + "granularity": "1d", + "data": [ + { + "timestamp": "2025-01-01T00:00:00Z", + "model": "openai/gpt-5.2", + "value": 10000 + }, + { + "timestamp": "2025-01-01T00:00:00Z", + "model": "anthropic/claude-sonnet-4.5", + "value": 5000 + }, + ... 
+ ] +} +---- + + +== Best practices + +Set up budget alerts early + +* Don't wait for surprise bills +* Alert at 50%, 80%, 90% of budget +* Include multiple stakeholders (eng, finance) + +Create team dashboards + +* One dashboard per team showing their gateway(s) +* Empowers teams to self-optimize +* Reduces central ops burden + +Monitor fallback rate + +* Low fallback rate (0-5%): Normal, failover working +* High fallback rate (>20%): Investigate primary provider issues +* Zero fallback rate: Verify fallback config is correct + +Compare models regularly + +* Run A/B tests with metrics +* Reassess as pricing and models change +* Don't assume expensive = better quality for your use case + +Track trends, not point-in-time + +* Day-to-day variance is normal +* Look for week-over-week and month-over-month trends +* Seasonal patterns (e.g., more usage on weekdays) + +== Troubleshoot metrics issues + +=== Issue: "Metrics don't match my provider invoice" + +Possible causes: + +1. Metrics are estimates based on public pricing +2. Your contract has custom pricing +3. Provider changed pricing mid-month + +Solution: + +* Use metrics for trends and optimization decisions +* Use provider invoices for actual billing +* // PLACEHOLDER: Can users configure custom pricing in gateway? + +=== Issue: "Metrics are delayed or missing" + +Possible causes: + +1. Metrics aggregation has delay (// PLACEHOLDER: typical delay?) +2. Time range outside retention period +3. No requests in selected time range (empty data) + +Solution: + +1. Wait and refresh (// PLACEHOLDER: Xminutes typical delay) +2. Check retention policy +3. Verify requests were sent (check logs) + +=== Issue: "Dashboard shows 'no data'" + +Possible causes: + +1. Filters too restrictive (no matching requests) +2. Gateway has no traffic yet +3. Permissions issue (can't access this gateway's metrics) + +Solution: + +1. Remove filters, widen time range +2. Send test request (see xref:ai-agents:ai-gateway/gateway-quickstart.adoc[]) +3. Check permissions with admin + +== Next steps + +* xref:ai-agents:ai-gateway/observability-logs.adoc[]: View individual requests and debug issues.