
fix: preserve reasoning in assistant tool-call conversion #393

Open
skulldogged wants to merge 3 commits into spacedriveapp:main from skulldogged:fix/reasoning-tool-call-serialization

Conversation


@skulldogged skulldogged commented Mar 11, 2026

Summary

Preserve assistant reasoning content when converting assistant messages that include tool calls. This fixes provider request failures when reasoning-enabled models expect reasoning_content on assistant tool-call messages.

What Changed

src/llm/model.rs

  • Added ReasoningContent import from Rig
  • Updated convert_messages_to_openai() to collect AssistantContent::Reasoning text during message conversion
  • Updated convert_messages_to_openai_responses() with the same reasoning preservation logic
  • Added helper function collect_reasoning_text_parts() to safely extract reasoning text from various content types
  • Included reasoning_content on assistant messages when reasoning is present, including in tool-call flows
  • Added 4 new unit tests covering reasoning-bearing assistant tool-calls in both conversion paths, plus edge cases with redacted/encrypted reasoning
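The helper's exact signature isn't shown in this description; as a minimal sketch of the filtering idea, with stand-in types in place of Rig's actual reasoning enum (names here are illustrative, not the crate's API):

```rust
// Hypothetical stand-ins for Rig's reasoning content variants.
enum ReasoningPart {
    Text(String),
    Redacted,          // provider withheld the content
    Encrypted(String), // opaque ciphertext, unusable as plain text
}

/// Collect only the non-empty plain-text parts of a reasoning block,
/// skipping redacted/encrypted variants that cannot be serialized as text.
fn collect_reasoning_text_parts(parts: &[ReasoningPart]) -> Vec<String> {
    parts
        .iter()
        .filter_map(|p| match p {
            ReasoningPart::Text(t) if !t.is_empty() => Some(t.clone()),
            _ => None,
        })
        .collect()
}
```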

src/tools/send_message_to_another_channel.rs

  • Minor refactoring: replaced nested if blocks with Rust's let-else pattern for cleaner conditionals
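The refactored code itself isn't reproduced in this description; as a generic sketch of the let-else pattern it refers to (function and prefix names are hypothetical):

```rust
// Extract a channel name from a "signal:" shorthand, returning None otherwise.
fn target_channel(raw: &str) -> Option<&str> {
    // Before: a nested `if let` / `if` pair. After: `let else` exits the
    // function early on the mismatch, flattening the happy path.
    let Some(rest) = raw.strip_prefix("signal:") else {
        return None;
    };
    if rest.is_empty() {
        return None;
    }
    Some(rest)
}
```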

Why

Reasoning-enabled models (like those using Moonshot AI) emit reasoning content and expect it to be carried through serialization. Without reasoning_content in the OpenAI request format, these models fail with errors like: "thinking is enabled but reasoning_content is missing in assistant tool call message at index 4". This ensures reasoning context is preserved end-to-end, preventing both context loss and outright request failures.
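For illustration, an assistant tool-call message in the converted request might look like the following (field values are hypothetical; reasoning_content is the non-standard extension that some OpenAI-compatible reasoning providers expect on assistant tool-call turns):

```json
{
  "role": "assistant",
  "content": null,
  "reasoning_content": "The user wants the directory listed; call the file tool.",
  "tool_calls": [
    {
      "id": "call_1",
      "type": "function",
      "function": {
        "name": "file",
        "arguments": "{\"operation\":\"list\",\"path\":\".\"}"
      }
    }
  ]
}
```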

Testing

Added unit tests in src/llm/model.rs:

  • convert_messages_to_openai_preserves_reasoning_for_tool_calls — verifies reasoning + tool calls are serialized correctly
  • convert_messages_to_openai_responses_preserves_reasoning_for_tool_calls — same for responses API path
  • convert_messages_to_openai_preserves_empty_reasoning_content_for_redacted_reasoning — edge case with redacted reasoning
  • convert_messages_to_openai_responses_preserves_empty_reasoning_content_for_redacted_reasoning — edge case for responses API

Note

This PR preserves assistant reasoning content in message serialization for both OpenAI request and response API conversion paths. Four new unit tests verify correct handling of reasoning in tool-call messages and edge cases with redacted/encrypted reasoning content.

Written by Tembo for commit 1e89e96.

Copilot AI review requested due to automatic review settings March 11, 2026 00:49

coderabbitai bot commented Mar 11, 2026

Walkthrough

Adds extraction and preservation of Assistant reasoning content into OpenAI-style message conversions by collecting textual reasoning parts and attaching them as a reasoning_content field; updates conversion logic to handle reasoning-only messages and tool-call combinations; refactors a signal-target flow in a messaging helper.

Changes

Cohort / File(s) Summary
Reasoning content & conversions
src/llm/model.rs
Introduce collect_reasoning_text_parts; accumulate AssistantContent::Reasoning parts into reasoning_parts; set reasoning_content on produced OpenAI-style messages when present; handle cases where content is null but reasoning or tool calls exist; preserve function_call/tool interactions in responses.
Tests for reasoning behavior
src/llm/model.rs (tests)
Added tests covering reasoning preservation and attachment across standard and tool-call scenarios, including streaming vs non-streaming paths and filtering of encrypted/redacted reasoning blocks.
Signal prefix handling refactor
src/tools/send_message_to_another_channel.rs
Consolidated explicit Signal prefix guard and target.adapter assignment; simplified implicit Signal shorthand parsing and broadcast flow by merging parsing and broadcasting branches.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes


🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning — Docstring coverage is 60.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Title check ✅ Passed — The title "fix: preserve reasoning in assistant tool-call conversion" clearly and concisely describes the main change: preserving reasoning content in assistant tool-call message conversion.
  • Description check ✅ Passed — The description is comprehensive and directly related to the changeset, detailing the specific changes made, their purpose, and the testing approach for reasoning preservation in assistant tool-call conversion.




Copilot AI left a comment


Pull request overview

This PR updates the OpenAI (Chat Completions + Responses API) request conversion to preserve assistant “reasoning” parts when the assistant message includes tool calls, and adds regression tests around that behavior.

Changes:

  • Extend OpenAI Chat Completions message conversion to extract AssistantContent::Reasoning into a new reasoning_content field.
  • Extend OpenAI Responses API input conversion to carry assistant reasoning alongside tool calls (and refactor tool calls to be appended after the assistant message).
  • Add tests ensuring reasoning is preserved for assistant tool-call turns in both conversion paths.
Comments suppressed due to low confidence (2)

src/llm/model.rs:1396

  • reasoning_content is not part of the standard OpenAI Chat Completions request message schema (the usual fields are role, content, tool_calls, etc.). Sending this extra key can cause strict OpenAI-compatible providers to reject the request with a 4xx. Consider either (a) omitting assistant reasoning from outbound requests, or (b) folding the collected reasoning text into content when you need to preserve it for tool-call turns (e.g., when content would otherwise be absent).

```rust
                                    "arguments": args_string,
                                }
                            }));
```

src/llm/model.rs:1510

  • For the Responses API input format, message items in this function are otherwise modeled with a content array (e.g., input_text/output_text). Emitting an assistant message that only contains reasoning_content (and no content) is likely to be invalid for OpenAI/strict OpenAI-compatible Responses endpoints. A safer representation would be to either drop reasoning on outbound requests, or encode it as an output_text content item (or another supported content block type) rather than adding a non-standard top-level field.

```rust
                                .unwrap_or(&tool_call.id);
                            function_calls.push(serde_json::json!({
                                "type": "function_call",
                                "name": tool_call.function.name,
                                "arguments": arguments,
                                "call_id": call_id,
```


@skulldogged skulldogged marked this pull request as draft March 11, 2026 00:56

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/llm/model.rs`:
- Around lines 1408-1424: The current collect_reasoning_text_parts(reasoning: &rig::message::Reasoning) flattens reasoning into plain text; instead, preserve reasoning as a distinct InputContent::Reasoning item so Responses API roundtrips correctly: update code paths that call collect_reasoning_text_parts to construct and emit an InputContent::Reasoning (including the original reasoning.id, a summary array of Summary items with type "summary_text" populated from non-empty summary/text parts, plus any encrypted_content and status) rather than concatenating strings; ensure encrypted and redacted variants are preserved on the InputContent::Reasoning (and adjust callers that expect Vec<String> accordingly).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 0de8167e-7574-49ac-9201-2b4f61839113

📥 Commits

Reviewing files that changed from the base of the PR and between 0a97964 and dfe45f6.

📒 Files selected for processing (1)
  • src/llm/model.rs

@skulldogged skulldogged marked this pull request as ready for review March 11, 2026 01:30

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/llm/model.rs (1)

1367-1412: ⚠️ Potential issue | 🟠 Major

The fix still misses parsed reasoning+tool-call turns.

These serializers only emit reasoning_content when history already contains AssistantContent::Reasoning, but the non-stream parsers later in this file still strip that distinction on tool-call responses: parse_openai_response() drops reasoning once tool_calls make assistant_content non-empty, and parse_openai_responses_response() demotes reasoning output into AssistantContent::Text. That means an assistant tool-call turn parsed from a provider response can still round-trip back out without reasoning_content, so the provider rejection this PR targets can reappear on the next tool submission. Please add a parse→convert regression for reasoning + tool_call payloads and preserve reasoning as AssistantContent::Reasoning through those parser paths.

Regression test sketch
```diff
+    #[test]
+    fn parse_openai_responses_roundtrips_reasoning_content_with_tool_calls() {
+        let response = parse_openai_responses_response(
+            serde_json::json!({
+                "output": [
+                    {
+                        "type": "reasoning",
+                        "summary": [{"text": "inspect identity files"}]
+                    },
+                    {
+                        "type": "function_call",
+                        "call_id": "call_1",
+                        "name": "file",
+                        "arguments": "{\"operation\":\"list\",\"path\":\".\"}"
+                    }
+                ],
+                "usage": {
+                    "input_tokens": 1,
+                    "output_tokens": 1,
+                    "input_tokens_details": {"cached_tokens": 0}
+                }
+            }),
+            "OpenAI",
+        )
+        .expect("valid response");
+
+        let history = OneOrMany::one(Message::Assistant {
+            id: None,
+            content: response.choice,
+        });
+
+        let converted = convert_messages_to_openai_responses(&history);
+        assert_eq!(converted[0]["reasoning_content"], "inspect identity files");
+    }
```

Based on learnings, flattening AssistantContent::Reasoning into reasoning_content is the intended scoped fix here, so the missing piece is retaining AssistantContent::Reasoning through the parser path.

Also applies to: 1486-1539, 3155-3240

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/llm/model.rs` around lines 1367-1412: The serializers currently emit reasoning_content only when AssistantContent::Reasoning exists, but parse_openai_response and parse_openai_responses_response demote reasoning into AssistantContent::Text when tool_calls are present, losing the reasoning marker; update those parser paths so that when a parsed response contains both reasoning text and tool_call metadata you convert/retain that turn as AssistantContent::Reasoning (not Text), preserving reasoning parts (the same shape consumed by collect_reasoning_text_parts) and still emitting tool_calls and reasoning_content in the outgoing serializer; adjust parse_openai_response(), parse_openai_responses_response(), and any helper that demotes reasoning to ensure parsed "reasoning + tool_call" payloads produce AssistantContent::Reasoning, and add a regression test that parses a provider response with reasoning + tool_call and round-trips it through the serializer to assert reasoning_content is present.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: ce12a8b1-f7b9-431f-9a63-f50cfdd782d1

📥 Commits

Reviewing files that changed from the base of the PR and between dfe45f6 and 1e89e96.

📒 Files selected for processing (2)
  • src/llm/model.rs
  • src/tools/send_message_to_another_channel.rs
