fix: preserve reasoning in assistant tool-call conversion #393
skulldogged wants to merge 3 commits into spacedriveapp:main from
Conversation
Walkthrough
Adds extraction and preservation of assistant reasoning content into OpenAI-style message conversions by collecting textual reasoning parts and attaching them as a reasoning_content field; updates conversion logic to handle reasoning-only messages and tool-call combinations; refactors a signal-target flow in a messaging helper.

Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Pull request overview
This PR updates the OpenAI (Chat Completions + Responses API) request conversion to preserve assistant “reasoning” parts when the assistant message includes tool calls, and adds regression tests around that behavior.
Changes:
- Extend OpenAI Chat Completions message conversion to extract AssistantContent::Reasoning into a new reasoning_content field.
- Extend OpenAI Responses API input conversion to carry assistant reasoning alongside tool calls (and refactor tool calls to be appended after the assistant message).
- Add tests ensuring reasoning is preserved for assistant tool-call turns in both conversion paths.
Comments suppressed due to low confidence (2)
src/llm/model.rs:1396
reasoning_content is not part of the standard OpenAI Chat Completions request message schema (the usual fields are role, content, tool_calls, etc.). Sending this extra key can cause strict OpenAI-compatible providers to reject the request with a 4xx. Consider either (a) omitting assistant reasoning from outbound requests, or (b) folding the collected reasoning text into content when you need to preserve it for tool-call turns (e.g., when content would otherwise be absent).
"arguments": args_string,
}
}));
src/llm/model.rs:1510
- For the Responses API input format, message items in this function are otherwise modeled with a content array (e.g., input_text/output_text). Emitting an assistant message that only contains reasoning_content (and no content) is likely to be invalid for OpenAI/strict OpenAI-compatible Responses endpoints. A safer representation would be to either drop reasoning on outbound requests, or encode it as an output_text content item (or another supported content block type) rather than adding a non-standard top-level field.
.unwrap_or(&tool_call.id);
function_calls.push(serde_json::json!({
"type": "function_call",
"name": tool_call.function.name,
"arguments": arguments,
"call_id": call_id,
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/llm/model.rs`:
- Around line 1408-1424: The current collect_reasoning_text_parts(reasoning:
&rig::message::Reasoning) flattens reasoning into plain text; instead preserve
reasoning as a distinct InputContent::Reasoning item so Responses API roundtrips
correctly: update code paths that call collect_reasoning_text_parts to construct
and emit an InputContent::Reasoning (including the original reasoning.id, a
summary array of Summary items with type "summary_text" populated from non-empty
summary/text parts, plus any encrypted_content and status) rather than
concatenating strings; ensure encrypted and redacted variants are preserved on
the InputContent::Reasoning (and adjust callers that expect Vec<String>
accordingly).
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/llm/model.rs (1)
1367-1412: ⚠️ Potential issue | 🟠 Major
The fix still misses parsed reasoning+tool-call turns.
These serializers only emit reasoning_content when history already contains AssistantContent::Reasoning, but the non-stream parsers later in this file still strip that distinction on tool-call responses: parse_openai_response() drops reasoning once tool_calls make assistant_content non-empty, and parse_openai_responses_response() demotes reasoning output into AssistantContent::Text. That means an assistant tool-call turn parsed from a provider response can still round-trip back out without reasoning_content, so the provider rejection this PR targets can reappear on the next tool submission. Please add a parse→convert regression for reasoning + tool_call payloads and preserve reasoning as AssistantContent::Reasoning through those parser paths.

Regression test sketch
```diff
+    #[test]
+    fn parse_openai_responses_roundtrips_reasoning_content_with_tool_calls() {
+        let response = parse_openai_responses_response(
+            serde_json::json!({
+                "output": [
+                    {
+                        "type": "reasoning",
+                        "summary": [{"text": "inspect identity files"}]
+                    },
+                    {
+                        "type": "function_call",
+                        "call_id": "call_1",
+                        "name": "file",
+                        "arguments": "{\"operation\":\"list\",\"path\":\".\"}"
+                    }
+                ],
+                "usage": {
+                    "input_tokens": 1,
+                    "output_tokens": 1,
+                    "input_tokens_details": {"cached_tokens": 0}
+                }
+            }),
+            "OpenAI",
+        )
+        .expect("valid response");
+
+        let history = OneOrMany::one(Message::Assistant {
+            id: None,
+            content: response.choice,
+        });
+
+        let converted = convert_messages_to_openai_responses(&history);
+        assert_eq!(converted[0]["reasoning_content"], "inspect identity files");
+    }
```

Based on learnings, flattening AssistantContent::Reasoning into reasoning_content is the intended scoped fix here, so the missing piece is retaining AssistantContent::Reasoning through the parser path.

Also applies to: 1486-1539, 3155-3240
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/llm/model.rs` around lines 1367 - 1412, The serializers currently emit reasoning_content only when AssistantContent::Reasoning exists, but parse_openai_response and parse_openai_responses_response demote reasoning into AssistantContent::Text when tool_calls are present, losing the reasoning marker; update those parser paths so that when a parsed response contains both reasoning text and tool_call metadata you convert/retain that turn as AssistantContent::Reasoning (not Text), preserving reasoning parts (the same shape consumed by collect_reasoning_text_parts) and still emitting tool_calls and reasoning_content in the outgoing serializer; adjust parse_openai_response(), parse_openai_responses_response(), and any helper that demotes reasoning to ensure parsed "reasoning + tool_call" payloads produce AssistantContent::Reasoning, and add a regression test that parses a provider response with reasoning + tool_call and round-trips it through the serializer to assert reasoning_content is present.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: ce12a8b1-f7b9-431f-9a63-f50cfdd782d1
📒 Files selected for processing (2)
- src/llm/model.rs
- src/tools/send_message_to_another_channel.rs
Summary
Preserve assistant reasoning content when converting assistant messages that include tool calls. This fixes provider request failures when reasoning-enabled models expect reasoning_content on assistant tool-call messages.

What Changed
src/llm/model.rs
- ReasoningContent import from Rig
- convert_messages_to_openai() to collect AssistantContent::Reasoning text during message conversion
- convert_messages_to_openai_responses() with the same reasoning preservation logic
- collect_reasoning_text_parts() to safely extract reasoning text from various content types
- reasoning_content on assistant messages when reasoning is present, including in tool-call flows

src/tools/send_message_to_another_channel.rs

Why
Reasoning-enabled models (like those using Moonshot AI) emit reasoning content and expect it to be carried through serialization. Without reasoning_content in the OpenAI request format, these models fail with errors like: `thinking is enabled but reasoning_content is missing in assistant tool call message at index 4`. This ensures reasoning context is preserved end-to-end, preventing both context loss and outright request failures.

Testing
Added unit tests in src/llm/model.rs:
- convert_messages_to_openai_preserves_reasoning_for_tool_calls — verifies reasoning + tool calls are serialized correctly
- convert_messages_to_openai_responses_preserves_reasoning_for_tool_calls — same for the Responses API path
- convert_messages_to_openai_preserves_empty_reasoning_content_for_redacted_reasoning — edge case with redacted reasoning
- convert_messages_to_openai_responses_preserves_empty_reasoning_content_for_redacted_reasoning — edge case for the Responses API

Note
This PR preserves assistant reasoning content in message serialization for both OpenAI request and response API conversion paths. Four new unit tests verify correct handling of reasoning in tool-call messages and edge cases with redacted/encrypted reasoning content.
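A helper like the collect_reasoning_text_parts() mentioned in the change list could look roughly like the sketch below. This is a guess at its shape, assuming reasoning arrives as a list of optional text parts; the real Rig reasoning types in the PR differ.

```rust
// Hypothetical sketch: join the non-empty textual reasoning parts into
// one string suitable for a `reasoning_content` field, skipping absent
// (e.g., redacted or encrypted-only) parts.
fn collect_reasoning_text_parts(parts: &[Option<String>]) -> String {
    parts
        .iter()
        .flatten()
        .map(String::as_str)
        .filter(|s| !s.is_empty())
        .collect::<Vec<_>>()
        .join("\n")
}
```

With this shape, an all-redacted turn yields an empty string, matching the "preserves empty reasoning_content for redacted reasoning" test names above.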
Written by Tembo for commit 1e89e96.