
Fixed integration with OpenAI Codex (v0.114.0) with gpt-5.4 #41

Open
dabogee wants to merge 2 commits into CaddyGlow:main from dabogee:codex__v_0_114_0__gpt_5_4__fix

Conversation


@dabogee dabogee commented Mar 13, 2026

Fixes Codex proxy compatibility for HTTP and WebSocket traffic. Integration with Microsoft Agent Framework, CrewAI, and the OpenAI Agents SDK has been validated.

Summary

This PR extends the Codex proxy compatibility work beyond the original HTTP and WebSocket fixes. In addition to making native Codex Desktop / codex exec traffic work end-to-end, it adds support for OpenAI-compatible multi-agent workflows, improves bypass-mode behavior, and makes mock responses format-aware across chat, responses, and Anthropic routes.

After these changes:

  • POST /codex/v1/responses succeeds through the proxy
  • POST /codex/v1/chat/completions succeeds through the proxy
  • codex exec works through OPENAI_BASE_URL=http://127.0.0.1:8000/codex/v1
  • Codex Desktop no longer falls back because of missing WebSocket support
  • Codex model discovery no longer fails on missing response fields from /codex/v1/models
  • OpenAI-compatible MSAF-style clients can send Codex requests without inheriting captured Codex CLI payloads
  • Mock/bypass mode returns the right response envelope for chat, responses, and Anthropic endpoints
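As a minimal sketch of the first three points, an OpenAI-compatible client only needs to address the proxy's Codex base URL (the default address `http://127.0.0.1:8000/codex/v1` is taken from this PR; the `codex_request` helper is hypothetical, not code from the repository):

```python
import json
import urllib.request

# Base URL the proxy exposes in this PR; adjust to your deployment.
CODEX_BASE_URL = "http://127.0.0.1:8000/codex/v1"


def codex_request(path: str, payload: dict,
                  base_url: str = CODEX_BASE_URL) -> urllib.request.Request:
    """Build a POST request against a Codex proxy route (illustrative helper)."""
    url = f"{base_url.rstrip('/')}/{path.lstrip('/')}"
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# The two routes exercised in this PR:
chat_req = codex_request(
    "chat/completions",
    {"model": "gpt-5.4", "messages": [{"role": "user", "content": "hi"}]},
)
resp_req = codex_request("responses", {"model": "gpt-5.4", "input": "hi"})
```

`codex exec` itself needs no helper at all; pointing `OPENAI_BASE_URL` at the same base URL is sufficient, as described above.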

What Was Broken

  • Plain JSON requests to Codex upstream were missing required Codex-specific request fields and were rejected with 400 Bad Request
  • Encoded request bodies from native Codex clients could trigger decode failures in the adapter
  • Partial detection cache state could leave required instructions or forwarded CLI headers missing
  • /codex/v1/models did not expose the richer model metadata expected by Codex Desktop
  • The proxy had no WebSocket support for Codex responses
  • Empty WebSocket warmup requests were incorrectly forwarded upstream and failed
  • Synthetic warmup response IDs could be reused as previous_response_id, which upstream rejected
  • The WebSocket route processed only one request per connection, while Codex clients reuse the same socket
  • OpenAI-compatible agent clients could be polluted by injected Codex CLI detection payloads even when they were sending their own instructions and reasoning settings
  • OpenAI thinking blocks were always serialized with XML wrappers, even when the runtime setting disabled them
  • Bypass mode always routed provider plugins through their real adapters, instead of using the mock pipeline
  • Mock responses were only loosely OpenAI-aware and could return the wrong response shape for /chat/completions vs /responses
  • Mock generation had no prompt-aware deterministic path for the login-form workshop scenario used by the new agent tests
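The format-aware mock behavior fixed in the last two points can be sketched as a resolver that prefers an explicit entry in the format chain and falls back to the endpoint path (all identifiers here are illustrative assumptions, not the PR's actual code):

```python
# Envelope families the mock pipeline can emit (assumed names).
KNOWN_FORMATS = {"chat", "responses", "anthropic"}


def resolve_mock_format(format_chain, endpoint: str) -> str:
    """Pick the response envelope for a mock reply.

    Prefer an explicit entry in format_chain; otherwise infer from the
    endpoint path so /chat/completions and /responses each get the
    right response shape.
    """
    for fmt in format_chain or ():
        if fmt in KNOWN_FORMATS:
            return fmt
    if endpoint.endswith("/chat/completions"):
        return "chat"
    if endpoint.endswith("/responses"):
        return "responses"
    if "/anthropic/" in endpoint or endpoint.endswith("/messages"):
        return "anthropic"
    return "chat"  # conservative default for unknown routes
```

With a resolver like this, `/codex/v1/chat/completions` and `/codex/v1/responses` can no longer receive each other's envelope, which is the failure mode described above.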

Tests

Added or updated coverage for:

  • Codex adapter behavior with detection payload injection disabled
  • preservation of user-supplied reasoning for OpenAI-compatible Codex requests
  • propagation and task isolation of openai_thinking_xml
  • bypass-mode provider factory behavior
  • mock adapter format resolution from format_chain and endpoint fallback
  • prompt extraction and prompt-aware mock responses
  • MSAF-style OpenAI chat requests reaching Codex without Codex CLI prompt pollution
  • real Agent Framework client flows running through the Codex proxy
  • sequential agent-style calls keeping reasoning hidden and message output clean
  • existing WebSocket, warmup, and /models compatibility paths
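The warmup behavior covered by the last item, dropping empty warmup requests and never echoing synthetic warmup IDs back as previous_response_id, can be illustrated with two small predicate helpers (the `warmup-` ID prefix and function names are assumptions for the sketch, not the PR's code):

```python
SYNTHETIC_PREFIX = "warmup-"  # assumed marker for proxy-generated warmup IDs


def is_warmup(request: dict) -> bool:
    """Empty warmup requests must not be forwarded upstream."""
    return not request.get("input") and not request.get("messages")


def sanitize_previous_response_id(request: dict) -> dict:
    """Drop synthetic warmup IDs that the upstream API would reject."""
    prev = request.get("previous_response_id")
    if prev and prev.startswith(SYNTHETIC_PREFIX):
        return {k: v for k, v in request.items()
                if k != "previous_response_id"}
    return request
```

In the per-connection WebSocket loop, each incoming request would pass through both checks before being forwarded, so a reused socket can carry many requests without tripping either failure.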

Validation

Previously validated flows remain:

  • uv build
  • POST /codex/v1/responses returned 200 with OK
  • POST /codex/v1/chat/completions returned 200 with OK
  • codex exec through the proxy returned OK

Powered by gpt-5.4, reviewed by claude-4.5-sonnet, Junie

