
Fix(openai) #3747

Merged
nina-kollman merged 10 commits into traceloop:main from elinacse:OpenAiFix
Mar 8, 2026

Conversation

@elinacse
Contributor

@elinacse elinacse commented Mar 1, 2026

  • I have added tests that cover my changes.
  • If adding a new instrumentation or changing an existing one, I've added screenshots from some observability platform showing the change.
  • PR name follows conventional commits format: feat(instrumentation): ... or fix(instrumentation): ....
  • (If applicable) I have updated the documentation accordingly.

Summary by CodeRabbit

  • Bug Fixes
    • Fixed handling of missing or null tool call indices in stream processing to prevent potential errors when appending tool calls.
    • Improved tools field consistency by ensuring null values are treated as empty lists, providing uniform list-based representation across operations.

@ellipsis-dev
Contributor

ellipsis-dev bot commented Mar 1, 2026

⚠️ This PR is too big for Ellipsis, but support for larger PRs is coming soon. If you want us to prioritize this feature, let us know at help@ellipsis.dev


Generated with ❤️ by ellipsis.dev

@coderabbitai

coderabbitai bot commented Mar 1, 2026

Important

Review skipped

Review was skipped due to path filters

⛔ Files ignored due to path filters (1)
  • packages/opentelemetry-instrumentation-openai/uv.lock is excluded by !**/*.lock

CodeRabbit blocks several paths by default. You can override this behavior by explicitly including those paths in the path filters. For example, including **/dist/** will override the default block on the dist directory by removing the pattern from both lists.

⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 595f1ff5-25b9-4420-9326-381cb4d4f4fe

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

📝 Walkthrough

Walkthrough

The changes improve null-safety in OpenAI instrumentation by addressing missing or null values. Tool call indices now default to 0 when absent, and the tools field is always represented as a list rather than occasionally being None.

Changes

Tool Call Index Handling
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/chat_wrappers.py
Added null-coalescing to tool_call index extraction, defaulting to 0 when the index is missing, preventing a potential IndexError when appending to the tool_calls array.

Tools Field Normalization
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py
Modified the tools computation to treat None as an empty list and to always populate the tools field with a list instead of conditionally setting it to None, ensuring a consistent list-based representation.
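
The two fixes can be sketched as small standalone helpers (a sketch only — the helper names are illustrative, and the actual wrappers inline equivalent logic):

```python
# Sketch of the two null-safety fixes described above.
# Helper names are hypothetical; the real wrappers inline this logic.

def coalesce_tool_call_index(tool_call: dict) -> int:
    # A missing or None "index" (seen in some provider streams, e.g. Gemini)
    # previously crashed int(); default it to 0 instead.
    return int(tool_call.get("index") or 0)

def normalize_tools(tools):
    # tools=None becomes [], so callers always receive an iterable list.
    return list(tools) if tools is not None else []
```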

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Poem

A rabbit hops through data streams so fine,
Where null and void once caused a grand design,
Now indices default when none appear,
And lists stay true—no None to fear! 🐰✨

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 0.00%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
  • Title check — ❓ Inconclusive: the title "Fix(openai)" is vague and does not describe the actual changes (tool_call index handling and tools field representation). Resolution: revise the title to something like "Fix(openai): Handle null tool_call indices and normalize tools field representation".

✅ Passed checks (1 passed)

  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@elinacse elinacse changed the title from "Open ai fix" to "Fix(openai)" on Mar 1, 2026
@elinacse
Contributor Author

elinacse commented Mar 1, 2026

Fix for Issues #3523 and #3697.

@galkleinman @nina-kollman, please help review this PR; it covers the two issues tagged above. Thanks!

@elinacse
Contributor Author

elinacse commented Mar 1, 2026

  1. TypeError when using responses.retrieve() with tools=None
  2. TypeError: int() argument must be a string ... not 'NoneType' in _accumulate_stream_items when streaming tool calls from Gemini

This PR fixes both issues.
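
Both crashes stem from assuming a value is present. A minimal, illustrative reproduction of each failure mode alongside the null-coalescing fix (this is a sketch, not the library code):

```python
# 1. responses.retrieve() may hand back tools=None:
tools = None
try:
    count = len(tools)          # TypeError: 'NoneType' has no len()
except TypeError:
    count = len(tools or [])    # fix: treat None as an empty list

# 2. A streamed tool-call delta (e.g. from Gemini) may carry index=None:
raw_index = None
try:
    i = int(raw_index)          # TypeError: int() argument must be ...
except TypeError:
    i = int(raw_index or 0)     # fix: coalesce None to 0
```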


@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/chat_wrappers.py (1)

1176-1182: ⚠️ Potential issue | 🔴 Critical

Guard tool-call index parsing and list growth to avoid stream crashes.

At Line 1176, tool_call["index"] can still throw on missing keys or malformed values, and the single append at Line 1177 can still leave tool_calls[i] out of range when i > len(list). This can raise during streaming and bubble up to callers.

💡 Proposed fix
             for tool_call in tool_calls:
-                i = int(tool_call["index"] or 0)
-                if len(complete_choice["message"]["tool_calls"]) <= i:
-                    complete_choice["message"]["tool_calls"].append(
-                        {"id": "", "function": {"name": "", "arguments": ""}}
-                    )
+                raw_index = tool_call.get("index", 0)
+                try:
+                    i = int(raw_index) if raw_index is not None else 0
+                except (TypeError, ValueError):
+                    i = 0
+                if i < 0:
+                    i = 0
+
+                while len(complete_choice["message"]["tool_calls"]) <= i:
+                    complete_choice["message"]["tool_calls"].append(
+                        {"id": "", "function": {"name": "", "arguments": ""}}
+                    )
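
The reviewer's proposal above can be expressed as a standalone helper — a sketch assuming the index may be missing, None, non-numeric, or ahead of the current list length (the function name is hypothetical):

```python
# Standalone sketch of the hardened index handling proposed above:
# parse the index defensively, then grow the list until slot i exists.

def ensure_tool_call_slot(tool_calls: list, raw_index) -> int:
    try:
        i = int(raw_index) if raw_index is not None else 0
    except (TypeError, ValueError):
        i = 0
    if i < 0:
        i = 0
    # Grow in a loop rather than a single append, so slot i is always
    # addressable even when deltas arrive out of order (e.g. index 2
    # before indices 0 and 1 have been seen).
    while len(tool_calls) <= i:
        tool_calls.append({"id": "", "function": {"name": "", "arguments": ""}})
    return i
```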
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/chat_wrappers.py`
around lines 1176 - 1182, The code that computes i = int(tool_call["index"] or
0) and then appends a single default entry can raise if "index" is missing, not
an int, or larger than the current list length; update the logic around
tool_call, complete_choice, and span_tool_call to (1) safely read and parse
index from tool_call by using tool_call.get("index") and catching
ValueError/TypeError, defaulting to 0 and clamping negatives to 0, and (2) grow
complete_choice["message"]["tool_calls"] in a loop (e.g., while len(...) <= i:
append the default {"id":"", "function":{"name":"", "arguments":""}}) so that
accessing span_tool_call = complete_choice["message"]["tool_calls"][i] cannot go
out of range.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 37a7e8e and 1c06949.

⛔ Files ignored due to path filters (1)
  • packages/opentelemetry-instrumentation-openai/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (2)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/chat_wrappers.py
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py

@elinacse
Contributor Author

elinacse commented Mar 2, 2026

@galkleinman Can you review the PR for Issues #3523 and #3697?

@elinacse
Contributor Author

elinacse commented Mar 4, 2026

@nirga @galkleinman @nina-kollman Can you please review this PR for Issues #3523 and #3697?

@nina-kollman nina-kollman merged commit 6735fdc into traceloop:main Mar 8, 2026
10 checks passed
