Conversation

Fix for issue #3701
Important

Looks good to me! 👍

Reviewed everything up to 98a66a9 in 16 seconds. Click for details.
- Reviewed 96 lines of code in 3 files
- Skipped 0 files when reviewing
- Skipped posting 0 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.

Workflow ID: wflow_CjfEUkT3K4EEVDMi

You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
📝 Walkthrough

Relocated Anthropic async stream method wrappers for beta and bedrock from the async-wrapper list into the sync-wrapper list, so methods returning AsyncMessageStreamManager use the synchronous wrapper path; refined metrics attribute builders to omit null model values; added an event-stream cassette and async tests exercising the legacy stream-manager flow.

Sequence Diagram(s): omitted — changes are configuration and small control-flow adjustments that don't introduce a new multi-component sequential feature.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
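For context on why the relocation is needed: in the anthropic SDK, `AsyncMessages.stream()` is a regular (non-`async`) method that synchronously returns an async context manager, so the instrumentation must intercept it with a synchronous wrapper; an async wrapper would turn the returned manager into a coroutine and break `async with`. A toy sketch of this shape, with simplified stand-in classes rather than the real SDK:

```python
import asyncio

class AsyncMessageStreamManager:
    """Toy stand-in: an async context manager returned *synchronously*."""
    async def __aenter__(self):
        return self
    async def __aexit__(self, exc_type, exc, tb):
        return False

class AsyncMessages:
    def stream(self):  # plain `def`, not `async def`: returns the manager directly
        return AsyncMessageStreamManager()

def sync_wrapper(wrapped):
    # A synchronous wrapper can start a span and pass the manager through
    # unchanged; wrapping with an async wrapper would require awaiting a
    # value that is not awaitable.
    def wrapper(*args, **kwargs):
        return wrapped(*args, **kwargs)
    return wrapper

async def main():
    client = AsyncMessages()
    client.stream = sync_wrapper(client.stream)
    async with client.stream() as stream:  # still works as `async with`
        print(type(stream).__name__)  # → AsyncMessageStreamManager

asyncio.run(main())
```

This mirrors the walkthrough's point: the wrapper choice depends on how the method returns its stream manager, not on whether the surrounding code is async.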
🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/utils.py`:
- Around line 171-178: The sync metrics builder shared_metrics_attributes
currently always sets GenAIAttributes.GEN_AI_RESPONSE_MODEL even when model is
None, which OTLP rejects; update shared_metrics_attributes to match
ashared_metrics_attributes by only adding GenAIAttributes.GEN_AI_RESPONSE_MODEL
to attrs when model is not None (i.e., after computing model from
common_attributes/response), leaving the rest of attrs merging with
common_attributes and GenAIAttributes.GEN_AI_SYSTEM = GEN_AI_SYSTEM_ANTHROPIC
unchanged.
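A minimal sketch of the guard this comment suggests, with simplified stand-in constants in place of the real `GenAIAttributes` members and a trimmed-down signature (the actual function also derives the model from `common_attributes`/`response` and merges more fields):

```python
# Simplified stand-ins for the real GenAIAttributes constants.
GEN_AI_SYSTEM = "gen_ai.system"
GEN_AI_RESPONSE_MODEL = "gen_ai.response.model"
GEN_AI_SYSTEM_ANTHROPIC = "anthropic"

def shared_metrics_attributes(response, common_attributes=None):
    attrs = {
        **(common_attributes or {}),
        GEN_AI_SYSTEM: GEN_AI_SYSTEM_ANTHROPIC,
    }
    model = getattr(response, "model", None)
    if model is not None:  # OTLP rejects None attribute values, so omit the key
        attrs[GEN_AI_RESPONSE_MODEL] = model
    return attrs

class FakeResponse:
    model = "claude-3-5-haiku-20241022"

print(GEN_AI_RESPONSE_MODEL in shared_metrics_attributes(FakeResponse()))  # → True
print(GEN_AI_RESPONSE_MODEL in shared_metrics_attributes(object()))        # → False
```

The key point is that the model attribute is added conditionally rather than always, matching the behavior of the async `ashared_metrics_attributes`.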
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
packages/opentelemetry-instrumentation-anthropic/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (2)
packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/__init__.py
packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/utils.py
...ges/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/utils.py
🧹 Nitpick comments (1)
packages/opentelemetry-instrumentation-anthropic/tests/test_messages.py (1)
2917-2953: Test coverage is incomplete compared to similar streaming tests.

This test is missing several key assertions that test_async_anthropic_message_stream_manager_legacy (lines 2645-2708) includes:
- Completion content verification (GEN_AI_COMPLETION.0.content == response_content)
- Completion role verification
- Token usage assertions (GEN_AI_USAGE_INPUT_TOKENS, total tokens calculation)
- Response ID assertion
- Metrics verification via verify_metrics()

These assertions validate that the instrumentation correctly captures the streamed response attributes for the beta API path.
♻️ Suggested additions to improve test coverage

```diff
  assert anthropic_span.attributes[f"{GenAIAttributes.GEN_AI_PROMPT}.0.role"] == "user"
+ assert (
+     anthropic_span.attributes.get(f"{GenAIAttributes.GEN_AI_COMPLETION}.0.content")
+     == response_content
+ )
+ assert (
+     anthropic_span.attributes.get(f"{GenAIAttributes.GEN_AI_COMPLETION}.0.role")
+     == "assistant"
+ )
+ assert anthropic_span.attributes[GenAIAttributes.GEN_AI_USAGE_INPUT_TOKENS] == 17
+ assert (
+     anthropic_span.attributes[GenAIAttributes.GEN_AI_USAGE_OUTPUT_TOKENS]
+     + anthropic_span.attributes[GenAIAttributes.GEN_AI_USAGE_INPUT_TOKENS]
+     == anthropic_span.attributes[SpanAttributes.LLM_USAGE_TOTAL_TOKENS]
+ )
+
+ metrics_data = reader.get_metrics_data()
+ resource_metrics = metrics_data.resource_metrics
+ verify_metrics(resource_metrics, "claude-3-5-haiku-20241022")
  logs = log_exporter.get_finished_logs()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/opentelemetry-instrumentation-anthropic/tests/test_messages.py` around lines 2917 - 2953, The beta-streaming test is missing the same assertions as the non-beta streaming test: after collecting response_content and extracting anthropic_span, add assertions that anthropic_span.attributes[f"{GenAIAttributes.GEN_AI_COMPLETION}.0.content"] == response_content and that GEN_AI_COMPLETION role equals the expected role (e.g., "assistant"); assert token usage attributes like GEN_AI_USAGE_INPUT_TOKENS and total tokens calculation (input + output) match expected values from the mocked reader/response; assert the response id attribute (e.g., GenAIAttributes.GEN_AI_RESPONSE_ID) equals the mocked response id; and call verify_metrics() at the end to validate metrics—use the same attribute keys and helper names (response_content, anthropic_span, GenAIAttributes, GEN_AI_COMPLETION, GEN_AI_USAGE_INPUT_TOKENS, verify_metrics) to mirror test_async_anthropic_message_stream_manager_legacy.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
packages/opentelemetry-instrumentation-anthropic/tests/cassettes/test_messages/test_async_anthropic_beta_message_stream_manager_legacy.yaml
packages/opentelemetry-instrumentation-anthropic/tests/test_messages.py
✅ Files skipped from review due to trivial changes (1)
- packages/opentelemetry-instrumentation-anthropic/tests/cassettes/test_messages/test_async_anthropic_beta_message_stream_manager_legacy.yaml
@galkleinman @nina-kollman please help review this PR for issue #3701. Thanks!
@galkleinman please help review this PR for issue #3701. Thanks!
@nirga @galkleinman @nina-kollman Can you please review this PR for issue #3701?
PR title should follow `feat(instrumentation): ...` or `fix(instrumentation): ...`.

Important
Fixes async stream handling in Anthropic instrumentation by using a sync wrapper and updates response attribute handling to avoid None values.
- Wraps async stream methods in `__init__.py` using a sync wrapper to maintain compatibility with async context managers.
- Updates `ashared_metrics_attributes()` in `utils.py` to avoid None values for model attributes.
- Updates `uv.lock` from 0.52.4 to 0.52.6.

This description was created by for 98a66a9. You can customize this summary. It will automatically update as commits are pushed.
Summary by CodeRabbit
Bug Fixes
Improvements
Tests