Refactor API proxy to use supportsLLMGateway engine flag #15557

pelikhan merged 3 commits into fix/enable-api-proxy-codex-claude
Conversation
Use supportsLLMGateway flag to control api-proxy image and --enable-api-proxy flag

- Set Claude engine supportsLLMGateway: true
- Set Codex engine supportsLLMGateway: false
- Refactor docker.go to check engine's supportsLLMGateway flag instead of hardcoded engine name
- Refactor claude_engine.go to conditionally add --enable-api-proxy based on supportsLLMGateway
- Add same conditional logic to codex_engine.go for future LLM gateway support
- Update tests to verify behavior for all engines

Co-authored-by: pelikhan <[email protected]>
Just as a heads up, I was blocked by some firewall rules while working on your feedback. I tried to connect to the following addresses, but was blocked by firewall rules:

If you need me to access, download, or install something from one of these locations, you can either:
Pull request overview
This PR refactors the API proxy feature to use the supportsLLMGateway engine capability flag instead of hardcoded engine name checks. The api-proxy is a sidecar container that securely holds LLM API keys and proxies requests through the firewall.
Changes:
- Set `supportsLLMGateway: true` for Claude engine (enabling api-proxy), and `false` for Codex engine (disabling api-proxy)
- Refactored `docker.go` to query the engine registry and check `SupportsLLMGateway()` instead of hardcoding `workflowData.AI == "claude"`
- Updated engine execution steps to conditionally add the `--enable-api-proxy` flag based on the `SupportsLLMGateway()` capability
- Added test coverage for all three engines (Claude, Copilot, Codex); a minimal sketch of the capability check follows this list
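For orientation, here is a small, self-contained sketch of the capability-flag pattern this PR adopts. The registry shape, type, and field names below are assumptions for illustration only; the real engine types live in `pkg/workflow` and look different.

```go
package main

import "fmt"

// engine is a stand-in for the real agentic engine type in pkg/workflow;
// its shape here is assumed for illustration.
type engine struct {
	id                 string
	supportsLLMGateway bool
}

// SupportsLLMGateway reports whether the engine can route LLM traffic
// through the api-proxy sidecar.
func (e engine) SupportsLLMGateway() bool { return e.supportsLLMGateway }

func main() {
	registry := map[string]engine{
		"claude":  {id: "claude", supportsLLMGateway: true},
		"codex":   {id: "codex", supportsLLMGateway: false},
		"copilot": {id: "copilot", supportsLLMGateway: false},
	}
	for _, ai := range []string{"claude", "codex", "copilot"} {
		// The capability check replaces the old hardcoded `ai == "claude"` comparison.
		if eng, ok := registry[ai]; ok && eng.SupportsLLMGateway() {
			fmt.Printf("%s: pre-pull api-proxy image, pass --enable-api-proxy\n", ai)
		} else {
			fmt.Printf("%s: api-proxy disabled\n", ai)
		}
	}
}
```

The advantage of this shape is that adding LLM gateway support to a new engine becomes a one-line flag change rather than another string comparison scattered through `docker.go`.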
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| pkg/workflow/enable_api_proxy_test.go | Updated test descriptions and added Codex test case to verify engines without LLM gateway support don't include the --enable-api-proxy flag |
| pkg/workflow/docker_api_proxy_test.go | Updated test name and descriptions, added Codex test case to verify api-proxy image is not collected for engines without LLM gateway support |
| pkg/workflow/docker.go | Refactored to use engine.SupportsLLMGateway() instead of hardcoded engine check, updated comments to reflect capability-based approach |
| pkg/workflow/codex_engine.go | Set supportsLLMGateway: false and added conditional logic to check flag before adding --enable-api-proxy |
| pkg/workflow/claude_engine.go | Set supportsLLMGateway: true and wrapped existing --enable-api-proxy logic in conditional check |
Comments suppressed due to low confidence (1)
pkg/workflow/codex_engine.go:44
- The test file `pkg/workflow/strict_mode_llm_gateway_test.go` also needs to be updated to reflect the new LLM gateway configuration. Currently, that test expects Codex to support LLM gateway (line 306: `expectedLLMGateway: true`) and Claude to not support it (line 316: `expectedLLMGateway: false`), which is the opposite of the changes in this PR. The test descriptions throughout that file also reference "codex engine with LLM gateway support" and "claude engine without LLM gateway support", which will now be incorrect.

```go
supportsLLMGateway: false, // Codex does not support LLM gateway
```
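For concreteness, a runnable sketch of the flipped expectations this comment describes. The `expectedLLMGateway` field name comes from the review text; everything else is assumed and will differ from the actual table in `strict_mode_llm_gateway_test.go`.

```go
package main

import "fmt"

// Sketch of the expectation table the suppressed comment refers to.
func main() {
	cases := []struct {
		engineID           string
		expectedLLMGateway bool
	}{
		{engineID: "codex", expectedLLMGateway: false}, // was true before this PR (line 306)
		{engineID: "claude", expectedLLMGateway: true}, // was false before this PR (line 316)
	}
	for _, c := range cases {
		fmt.Printf("engine %s: expectedLLMGateway=%v\n", c.engineID, c.expectedLLMGateway)
	}
}
```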
```go
// - Port 10000: OpenAI API proxy (for Codex)
// - Port 10001: Anthropic API proxy (for Claude)
```
The comment mentions "Port 10000: OpenAI API proxy (for Codex)" but Codex has been configured with supportsLLMGateway: false in this PR, meaning it won't use the api-proxy. This comment should be updated to reflect that only Claude currently uses the api-proxy, or the comment should clarify that Port 10000 is available for future use but not currently utilized by Codex.
Suggested change:

```diff
- // - Port 10000: OpenAI API proxy (for Codex)
- // - Port 10001: Anthropic API proxy (for Claude)
+ // - Port 10000: OpenAI API proxy (reserved for future use; Codex does not currently use the LLM gateway)
+ // - Port 10001: Anthropic API proxy (for Claude; currently the only engine using the LLM gateway)
```
```go
// Enable API proxy sidecar if this engine supports LLM gateway
// The api-proxy container holds the LLM API keys and proxies requests through the firewall
if e.SupportsLLMGateway() {
	awfArgs = append(awfArgs, "--enable-api-proxy")
	codexEngineLog.Print("Added --enable-api-proxy for LLM API proxying")
}
```
The conditional check if e.SupportsLLMGateway() will always be false for Codex since supportsLLMGateway is set to false at line 44. This means the --enable-api-proxy flag will never be added for Codex. While this appears intentional based on the configuration change, this entire conditional block (lines 250-255) is now unreachable dead code for the Codex engine and could be removed for clarity. Alternatively, if Codex is intended to support LLM gateway in the future, the configuration at line 44 should be changed to true.
This issue also appears on line 44 of the same file.
Suggested change (remove the unreachable block):

```diff
- // Enable API proxy sidecar if this engine supports LLM gateway
- // The api-proxy container holds the LLM API keys and proxies requests through the firewall
- if e.SupportsLLMGateway() {
- 	awfArgs = append(awfArgs, "--enable-api-proxy")
- 	codexEngineLog.Print("Added --enable-api-proxy for LLM API proxying")
- }
```
* Enable --enable-api-proxy for Claude and Codex engines, pre-pull api-proxy image

  The AWF api-proxy sidecar securely holds LLM API keys and proxies requests through the firewall. It exposes two endpoints:
  - Port 10000: OpenAI API proxy (for Codex)
  - Port 10001: Anthropic API proxy (for Claude)

  Changes:
  - claude_engine.go: Add --enable-api-proxy to AWF args when firewall is enabled
  - codex_engine.go: Add --enable-api-proxy to AWF args when firewall is enabled
  - docker.go: Pre-pull ghcr.io/github/gh-aw-firewall/api-proxy image for Claude and Codex engines (required because --skip-pull is used)
  - Add unit tests for docker image collection and engine flag generation
  - Recompile all workflow lock files

  Fixes the smoke-claude failure where AWF tried to start the api-proxy container but the image wasn't pre-pulled:

      Container awf-api-proxy  Error response from daemon: No such image: ghcr.io/github/gh-aw-firewall/api-proxy:0.16.5

  Note: The api-proxy Docker image must also be published to GHCR via the gh-aw-firewall release workflow before smoke tests will pass.

  Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>

* Bump DefaultFirewallVersion to v0.17.0

  v0.17.0 includes the api-proxy container image in the release pipeline, which is required for --enable-api-proxy to work with --skip-pull. Recompiled all 150 workflow lock files to reference v0.17.0 images.

  Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>

* Update frontmatter hash and correct GitHub domain in allowed domains list

* [WIP] Enable --enable-api-proxy for Claude and Codex engines (#15550)

  * Initial plan
  * Revert Codex API proxy changes
    - Remove --enable-api-proxy flag from codex_engine.go
    - Update docker.go to exclude Codex from API proxy image collection
    - Remove Codex test case from enable_api_proxy_test.go
    - Remove Codex test case from docker_api_proxy_test.go
    - Recompile workflows: smoke-codex, codex-github-remote-mcp-test, changeset

    Co-authored-by: pelikhan <[email protected]>

  Co-authored-by: copilot-swe-agent[bot] <[email protected]>
  Co-authored-by: pelikhan <[email protected]>

* Refactor download container images command to remove api-proxy from multiple workflows

* Refactor API proxy to use supportsLLMGateway engine flag (#15557)

  * Initial plan
  * Use supportsLLMGateway flag to control api-proxy image and --enable-api-proxy flag
    - Set Claude engine supportsLLMGateway: true
    - Set Codex engine supportsLLMGateway: false
    - Refactor docker.go to check engine's supportsLLMGateway flag instead of hardcoded engine name
    - Refactor claude_engine.go to conditionally add --enable-api-proxy based on supportsLLMGateway
    - Add same conditional logic to codex_engine.go for future LLM gateway support
    - Update tests to verify behavior for all engines

    Co-authored-by: pelikhan <[email protected]>
  * Apply formatting to claude_engine.go

  Co-authored-by: copilot-swe-agent[bot] <[email protected]>
  Co-authored-by: pelikhan <[email protected]>

Co-authored-by: Claude Opus 4.6 (1M context) <[email protected]>
Co-authored-by: Peli de Halleux <[email protected]>
Co-authored-by: Copilot <[email protected]>
Co-authored-by: pelikhan <[email protected]>
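The first commit explains why pre-pulling matters: AWF starts its containers with --skip-pull, so any sidecar image must be pulled ahead of time or container start fails with "No such image". A minimal sketch of that collection logic, reusing the assumed `engine` type from the earlier sketch (the function name and signature are also assumptions, not the actual docker.go code):

```go
package main

import "fmt"

// engine mirrors the assumed type from the earlier sketch.
type engine struct{ supportsLLMGateway bool }

func (e engine) SupportsLLMGateway() bool { return e.supportsLLMGateway }

// collectAPIProxyImages returns the api-proxy image to pre-pull, if any.
// Only engines whose capability flag is set need the sidecar image.
func collectAPIProxyImages(ai, firewallVersion string, registry map[string]engine) []string {
	var images []string
	if eng, ok := registry[ai]; ok && eng.SupportsLLMGateway() {
		images = append(images, "ghcr.io/github/gh-aw-firewall/api-proxy:"+firewallVersion)
	}
	return images
}

func main() {
	registry := map[string]engine{"claude": {true}, "codex": {false}}
	fmt.Println(collectAPIProxyImages("claude", "0.17.0", registry)) // one api-proxy image
	fmt.Println(collectAPIProxyImages("codex", "0.17.0", registry))  // empty: no pre-pull needed
}
```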
The API proxy feature was using hardcoded engine checks (`workflowData.AI == "claude"`) instead of leveraging the existing `supportsLLMGateway` capability flag in the agentic engine architecture.

Changes

- `supportsLLMGateway: true` for Claude, `false` for Codex/Copilot
- `docker.go`: Query engine registry and check `engine.SupportsLLMGateway()` instead of hardcoded string comparison
- `claude_engine.go`, `codex_engine.go`: Conditionally add `--enable-api-proxy` based on `e.SupportsLLMGateway()`

Before/After
The api-proxy sidecar proxies LLM API calls through the firewall (port 10000 for OpenAI, port 10001 for Anthropic) and is now conditionally enabled based on engine capabilities rather than engine identity.
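As a rough illustration of what the sidecar provides, here is a sketch using the port assignments quoted in the description; the localhost base URLs and the routing shown are assumptions, not the actual proxy configuration.

```go
package main

import "fmt"

// Illustration only: with the api-proxy sidecar enabled, an engine's LLM
// traffic targets local proxy ports instead of the public API hosts, and
// the sidecar (not the workload container) holds the real API keys.
func main() {
	proxyBase := map[string]string{
		"openai":    "http://localhost:10000", // port 10000: OpenAI API proxy
		"anthropic": "http://localhost:10001", // port 10001: Anthropic API proxy
	}
	for provider, base := range proxyBase {
		fmt.Printf("%s traffic would be routed via %s\n", provider, base)
	}
}
```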