fix(openai): handled text_format from responses api analogously to response_format from chat completions #1573
Open
D-Joey-G wants to merge 1 commit into langfuse:main from
Conversation
Fixes issue where arguments passed to the `text_format` parameter of the OpenAI responses API are dropped. Now such arguments are handled akin to the `response_format` arguments in the chat.completions API. See relevant issue report here: langfuse/langfuse#10143

Disclaimer: Experimental PR review
Greptile Summary
This PR fixes a bug where arguments passed to `text_format` (the structured-output parameter in the OpenAI Responses API) were silently dropped from Langfuse metadata. It treats `text_format` analogously to how `response_format` is handled for chat completions.

Key changes in `langfuse/openai.py`:

- `_resolve_format_metadata(key, kwargs)`: a small helper that either serialises a `BaseModel` subclass to its JSON schema or returns the value verbatim. This refactors the previously inline `response_format` logic and reuses it for `text_format`.
- The `OpenAiArgsExtractor.__init__` guard condition is widened from checking only `response_format` to checking either `response_format` or `text_format`, then spreading both resolved dicts into the metadata.
- `get_openai_args` now also pops `text_format` from `kwargs["metadata"]`, consistent with how `response_format` was already handled.

The logic is correct: the `_resolve_format_metadata` helper safely returns `{}` when the key is absent, so both keys are always unpacked without risk of a `KeyError`. The and/or semantics of the guard condition are right: the else branch handles any case where at least one format key is present.

The only gap is that no automated tests were added for the `text_format` path, leaving the new behaviour untested.

Confidence Score: 4/5
- Logic is correct, but the score is held back by the lack of tests for the new `text_format` path.
- No tests exercise `text_format`; the existing `response_format` tests give some indirect confidence but do not exercise the new code paths.
- The change in `langfuse/openai.py` would benefit from follow-up tests.

Important Files Changed
`langfuse/openai.py`: Added a `_resolve_format_metadata` helper to handle `text_format` from the OpenAI Responses API analogously to `response_format` in chat completions. Logic is correct but no tests were added for the new behaviour.

Flowchart
```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[OpenAiArgsExtractor.__init__] --> B{response_format or\ntext_format present?}
    B -- Neither present --> C[Use metadata as-is]
    B -- At least one present --> D[Build merged metadata dict]
    D --> E[_resolve_format_metadata called\nfor each format key]
    E --> F{Value is a BaseModel\nsubclass?}
    F -- Yes --> G[Serialize via model_json_schema]
    F -- No --> H[Use value directly]
    F -- Key absent --> I[Return empty dict]
    G & H & I --> J[Spread-merge into metadata]
    C & J --> K[self.args metadata assigned]
    K --> L[get_openai_args called]
    L --> M{Model distillation\nenabled?}
    M -- Yes --> N[Pop response_format and\ntext_format from kwargs metadata]
    M -- No --> O[Return kwargs unchanged]
    N --> O
```

Reviews (1): Last reviewed commit: "handle text format analogously to respon..."