Conversation
⛔ Review skipped
Review was skipped due to path filters.
Files ignored due to path filters (1): CodeRabbit blocks several paths by default. You can override this behavior by explicitly including those paths in the path filters.
⚙️ Run configuration
Configuration used: defaults | Review profile: CHILL | Plan: Pro
📝 Walkthrough
The changes improve null-safety in the OpenAI instrumentation by addressing missing or null values. Tool call indices now default to 0 when absent, and the tools field is always represented as a list rather than occasionally being None.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes
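To make the walkthrough concrete, here is a minimal, hypothetical sketch of the "tools is always a list" normalization it describes. The helper name `normalize_tools` and its argument shape are illustrative assumptions, not the instrumentation's actual code.

```python
def normalize_tools(kwargs):
    """Return the tools argument as a list, never None.

    Hypothetical helper: illustrates the behavior the walkthrough
    describes (tools represented as a list rather than occasionally None).
    """
    tools = kwargs.get("tools")
    # Treat None, a missing key, or a non-list value uniformly as "no tools".
    return tools if isinstance(tools, list) else []
```

With this shape, downstream span-attribute code can iterate over the result unconditionally instead of guarding against None at every call site.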
🚥 Pre-merge checks (✅ 1 passed, ❌ 2 failed)
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Fix for Issues #3523 and #3697. @galkleinman @nina-kollman, please help review this PR; it covers the two issues tagged above. Thanks!
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/chat_wrappers.py (1)
Lines 1176-1182: ⚠️ Potential issue | 🔴 Critical
Guard tool-call index parsing and list growth to avoid stream crashes.
At line 1176, `tool_call["index"]` can still throw on missing keys or malformed values, and the single append at line 1177 can still leave `tool_calls[i]` out of range when `i > len(list)`. This can raise during streaming and bubble up to callers.
💡 Proposed fix

```diff
 for tool_call in tool_calls:
-    i = int(tool_call["index"] or 0)
-    if len(complete_choice["message"]["tool_calls"]) <= i:
-        complete_choice["message"]["tool_calls"].append(
-            {"id": "", "function": {"name": "", "arguments": ""}}
-        )
+    raw_index = tool_call.get("index", 0)
+    try:
+        i = int(raw_index) if raw_index is not None else 0
+    except (TypeError, ValueError):
+        i = 0
+    if i < 0:
+        i = 0
+
+    while len(complete_choice["message"]["tool_calls"]) <= i:
+        complete_choice["message"]["tool_calls"].append(
+            {"id": "", "function": {"name": "", "arguments": ""}}
+        )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/chat_wrappers.py` around lines 1176 - 1182, The code that computes i = int(tool_call["index"] or 0) and then appends a single default entry can raise if "index" is missing, not an int, or larger than the current list length; update the logic around tool_call, complete_choice, and span_tool_call to (1) safely read and parse index from tool_call by using tool_call.get("index") and catching ValueError/TypeError, defaulting to 0 and clamping negatives to 0, and (2) grow complete_choice["message"]["tool_calls"] in a loop (e.g., while len(...) <= i: append the default {"id":"", "function":{"name":"", "arguments":""}}) so that accessing span_tool_call = complete_choice["message"]["tool_calls"][i] cannot go out of range.
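The guarded parsing suggested in the review can be exercised as a standalone sketch. The function name `merge_tool_call_indices` and the surrounding dict shapes are assumptions mirroring the snippet above, not the instrumented module's real API.

```python
def merge_tool_call_indices(tool_calls, complete_choice):
    """Grow the accumulated tool_calls list so every streamed chunk's
    index fits, defaulting missing or malformed indices to 0.

    Hypothetical helper illustrating the review's proposed guard.
    """
    for tool_call in tool_calls:
        raw_index = tool_call.get("index", 0)
        try:
            i = int(raw_index) if raw_index is not None else 0
        except (TypeError, ValueError):
            i = 0
        if i < 0:
            i = 0
        # Grow in a loop so sparse or out-of-order indices never leave
        # tool_calls[i] out of range.
        while len(complete_choice["message"]["tool_calls"]) <= i:
            complete_choice["message"]["tool_calls"].append(
                {"id": "", "function": {"name": "", "arguments": ""}}
            )
        span_tool_call = complete_choice["message"]["tool_calls"][i]
        if tool_call.get("id"):
            span_tool_call["id"] = tool_call["id"]
    return complete_choice
```

Feeding it chunks with a missing, None, negative, or non-numeric index exercises every fallback path without raising, which is the property the review asks for during streaming.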
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
`packages/opentelemetry-instrumentation-openai/uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (2)
- `packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/chat_wrappers.py`
- `packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/v1/responses_wrappers.py`
@galkleinman Can you review the PR for Issues #3523 and #3697?
@nirga @galkleinman @nina-kollman Can you please review this PR for Issues #3523 and #3697?
`feat(instrumentation): ...` or `fix(instrumentation): ...`