
fix(openai): handled text_format from responses api analogously to response_format from chat completions#1573

Open
D-Joey-G wants to merge 1 commit into langfuse:main from D-Joey-G:fix/text-format

Conversation


@D-Joey-G D-Joey-G commented Mar 22, 2026

Fixes issue where arguments passed to the text_format parameter of the OpenAI responses API are dropped. Now such arguments are handled akin to the response_format arguments in the chat.completions API. See relevant issue report here: langfuse/langfuse#10143

Disclaimer: Experimental PR review

Greptile Summary

This PR fixes a bug where arguments passed to text_format (the structured-output parameter in the OpenAI Responses API) were silently dropped from Langfuse metadata. The fix treats text_format analogously to how response_format is handled for chat completions.

Key changes in langfuse/openai.py:

  • Introduces _resolve_format_metadata(key, kwargs) — a small helper that either serialises a BaseModel subclass to its JSON schema or returns the value verbatim. This refactors the previously inline response_format logic and reuses it for text_format.
  • Updates the OpenAiArgsExtractor.__init__ guard condition from checking only response_format to checking either response_format or text_format, then spreading both resolved dicts into the metadata.
  • Extends the model-distillation cleanup path in get_openai_args to also pop text_format from kwargs["metadata"], consistent with how response_format was already handled.

The logic is correct: the _resolve_format_metadata helper safely returns {} when the key is absent, so both keys are always unpacked without risk of a KeyError. The boolean semantics of the guard condition are also right: the else branch handles every case where at least one format key is present.

The only gap is that no automated tests were added for the text_format path, leaving the new behaviour untested.

Confidence Score: 4/5

  • PR is safe to merge; the logic is correct and the fix is well-scoped, but no tests cover the new text_format path.
  • The implementation is logically sound: the helper correctly handles the absent-key, plain-value, and BaseModel-subclass cases, the guard condition uses the right boolean operator, and the distillation cleanup is consistent. The only reason to hold back from a 5 is the absence of automated tests for text_format — the existing response_format tests give some indirect confidence but do not exercise the new code paths.
  • No files require special attention for correctness; langfuse/openai.py would benefit from follow-up tests.

Important Files Changed

| Filename | Overview |
| --- | --- |
| langfuse/openai.py | Adds _resolve_format_metadata helper to handle text_format from the OpenAI Responses API analogously to response_format in chat completions. Logic is correct but no tests were added for the new behaviour. |

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[OpenAiArgsExtractor.__init__] --> B{response_format or\ntext_format present?}
    B -- Neither present --> C[Use metadata as-is]
    B -- At least one present --> D[Build merged metadata dict]
    D --> E[_resolve_format_metadata called\nfor each format key]
    E --> F{Value is a BaseModel\nsubclass?}
    F -- Yes --> G[Serialize via model_json_schema]
    F -- No --> H[Use value directly]
    F -- Key absent --> I[Return empty dict]
    G & H & I --> J[Spread-merge into metadata]
    C & J --> K[self.args metadata assigned]
    K --> L[get_openai_args called]
    L --> M{Model distillation\nenabled?}
    M -- Yes --> N[Pop response_format and\ntext_format from kwargs metadata]
    M -- No --> O[Return kwargs unchanged]
    N --> O
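The distillation cleanup path (the "Pop" node in the flowchart) can be sketched roughly as follows. The function name and the shape of kwargs are assumptions for illustration, not the actual get_openai_args code:

```python
from typing import Any, Dict


def cleanup_distillation_metadata(kwargs: Dict[str, Any]) -> Dict[str, Any]:
    """Sketch of the cleanup applied when model distillation is enabled."""
    metadata = kwargs.get("metadata") or {}
    # pop(..., None) is a no-op when the key is absent, mirroring the
    # safe-absence behaviour of _resolve_format_metadata upstream.
    metadata.pop("response_format", None)
    metadata.pop("text_format", None)  # new in this PR, consistent with response_format
    kwargs["metadata"] = metadata
    return kwargs
```

Both keys are stripped unconditionally within this branch, so metadata that never contained them passes through unchanged.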

