Monitoring

AI Bridge records the last user prompt, token usage, and every tool invocation for each intercepted request. Each capture is tied to a single "interception" that maps back to the authenticated Coder identity, making it easy to attribute spend and behaviour.

User Prompt logging

User Leaderboard

We provide an example Grafana dashboard that you can import as a starting point for your metrics. See the Grafana dashboard README.

These logs and metrics can be used to determine usage patterns, track costs, and evaluate tooling adoption.

Exporting Data

AI Bridge interception data can be exported for external analysis, compliance reporting, or integration with log aggregation systems.

REST API

You can retrieve AI Bridge interceptions via the Coder API with filtering and pagination support.

curl -X GET "https://coder.example.com/api/v2/aibridge/interceptions?q=initiator:me" \
  -H "Coder-Session-Token: $CODER_SESSION_TOKEN"

Available query filters:

  • initiator - Filter by user ID or username
  • provider - Filter by AI provider (e.g., openai, anthropic)
  • model - Filter by model name
  • started_after - Filter interceptions after a timestamp
  • started_before - Filter interceptions before a timestamp

See the API documentation for full details.
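Multiple filters can be combined inside the single q parameter, separated by spaces, following the same key:value syntax as the initiator:me example above. The sketch below builds and encodes such a query string; the filter values are illustrative:

```shell
# Combine several filters in one q parameter. URL-encode the filter
# string first so its spaces and colons survive in the query string.
query="provider:anthropic started_after:2025-01-01T00:00:00Z"
encoded=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1]))' "$query")
echo "https://coder.example.com/api/v2/aibridge/interceptions?q=$encoded"
```

Alternatively, curl -G --data-urlencode "q=$query" performs the same encoding for you.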

CLI

Export interceptions as JSON using the CLI:

coder aibridge interceptions list --initiator me --limit 1000

You can filter by time range, provider, model, and user:

coder aibridge interceptions list \
  --started-after "2025-01-01T00:00:00Z" \
  --started-before "2025-02-01T00:00:00Z" \
  --provider anthropic

See coder aibridge interceptions list --help for all options.
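The JSON output pairs well with standard tools such as jq. The sketch below counts interceptions per model; the inline sample stands in for the CLI's output, and the model field name is an assumption rather than a documented schema:

```shell
# Count interceptions per model. The sample array stands in for the
# output of `coder aibridge interceptions list`; the "model" field
# name is assumed, not taken from a documented schema.
sample='[
  {"provider": "anthropic", "model": "claude-sonnet-4"},
  {"provider": "openai",    "model": "gpt-4o"},
  {"provider": "anthropic", "model": "claude-sonnet-4"}
]'
printf '%s' "$sample" | jq -r 'group_by(.model)[] | "\(.[0].model): \(length)"'
```

In practice you would pipe the output of coder aibridge interceptions list directly into the jq filter instead of using an inline sample.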

Data Retention

AI Bridge data is retained for 60 days by default. Configure the retention period to balance storage costs with your organization's compliance and analysis needs.

For configuration options and details, see Data Retention in the AI Bridge setup guide.

Tracing

AI Bridge supports tracing via OpenTelemetry, providing visibility into request processing, upstream API calls, and MCP server interactions.

Enabling Tracing

AI Bridge tracing is enabled when tracing is enabled for the Coder server. To enable tracing, set the CODER_TRACE_ENABLE environment variable or pass the --trace CLI flag:

export CODER_TRACE_ENABLE=true
coder server --trace
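By default, OTLP exporters target a collector on localhost:4317. If your collector runs elsewhere, the standard OpenTelemetry environment variables can redirect it. This is a sketch that assumes Coder's exporter honors the SDK's standard OTEL_EXPORTER_OTLP_* variables; the endpoint value is an example:

```shell
# Point the trace exporter at a remote OpenTelemetry collector
# (example endpoint; assumes the standard OTEL_EXPORTER_OTLP_*
# variables are honored by Coder's exporter).
export OTEL_EXPORTER_OTLP_ENDPOINT="http://otel-collector.internal:4317"
export CODER_TRACE_ENABLE=true
coder server --trace
```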

What is Traced

AI Bridge creates spans for the following operations:

  • CachedBridgePool.Acquire - Acquiring a request bridge instance from the pool
  • Intercept - Top-level span for processing an intercepted request
  • Intercept.CreateInterceptor - Creating the request interceptor
  • Intercept.ProcessRequest - Processing the request through the bridge
  • Intercept.ProcessRequest.Upstream - Forwarding the request to the upstream AI provider
  • Intercept.ProcessRequest.ToolCall - Executing a tool call requested by the AI model
  • Intercept.RecordInterception - Recording the interception record
  • Intercept.RecordPromptUsage - Recording prompt/message data
  • Intercept.RecordTokenUsage - Recording token consumption
  • Intercept.RecordToolUsage - Recording tool/function calls
  • Intercept.RecordInterceptionEnded - Recording the interception as completed
  • ServerProxyManager.Init - Initializing MCP server proxy connections
  • StreamableHTTPServerProxy.Init - Setting up HTTP-based MCP server proxies
  • StreamableHTTPServerProxy.Init.fetchTools - Fetching available tools from MCP servers

Example trace of an interception, viewed in the Jaeger backend:

Trace of interception

Capturing Logs in Traces

Note: Enabling log capture may generate a large volume of trace events.

To include log messages as trace events, enable trace log capture by setting the CODER_TRACE_LOGS environment variable or passing the --trace-logs flag:

export CODER_TRACE_ENABLE=true
export CODER_TRACE_LOGS=true
coder server --trace --trace-logs