[WIP] LCORE-1392: Include LS providers in LCS image#1360

Open
asimurka wants to merge 1 commit into lightspeed-core:main from asimurka:include_ls_providers_in_lcs_image

Conversation

@asimurka
Contributor

@asimurka asimurka commented Mar 19, 2026

Description

Include LS providers in LCS image

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement
  • Benchmarks improvement

Tools used to create PR

Identify any AI code assistants used in this PR (for transparency and review context)

  • Assisted-by: Cursor

Related Tickets & Documents

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  • Please provide detailed steps to perform tests related to this code change.
  • How were the fix/results from this change verified? Please provide relevant screenshots or results.

Summary by CodeRabbit

  • New Features

    • Added a question-validity safety provider and matching shield to validate queries and return a configurable invalid-question response.
    • Enabled provisioning and runtime loading of external providers via image build and an environment-configured providers directory.
  • Dependencies

    • Updated and tightened HTTP-related dependency constraints and added packages required for the providers runtime; refreshed dependency pins.
  • Chores

    • Added an artifacts lock and tooling to regenerate its checksum; updated CI and compose configs to source the external providers directory.

@coderabbitai
Contributor

coderabbitai bot commented Mar 19, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds build-time provisioning of lightspeed-providers into the container image, updates Python dependency pins and hashes, adds artifact lock tooling and script, exposes external providers dir via environment variables, updates Tekton prefetch inputs, and registers a new inline safety provider and shield.

Changes

Cohort / File(s) — Summary

  • Container provisioning (Containerfile): Adds ARG LIGHTSPEED_PROVIDERS_COMMIT and a build step that prefers the cached /cachi2/output/deps/.../lightspeed-providers.zip or downloads the pinned GitHub archive, unzips it, relocates lightspeed_stack_providers → /app-root/ and resources/external_providers → /app-root/providers.d, cleans up, and sets ENV PYTHONPATH="/app-root".
  • Artifact lock & tooling (Makefile, artifacts.lock.yaml, scripts/generate-artifacts-lock.sh): Adds a konflux-artifacts-lock Make target and a new artifacts.lock.yaml entry for lightspeed-providers with download URL, filename, and checksum; adds a script to download the artifact and update the checksum in the lockfile.
  • Compose & runtime env (docker-compose-library.yaml, docker-compose.yaml, tests/e2e/configs/run-ci.yaml): The library compose sets EXTERNAL_PROVIDERS_DIR=/app-root/providers.d; the local compose adds EXTERNAL_PROVIDERS_DIR=""; the CI e2e config reads external_providers_dir from ${env.EXTERNAL_PROVIDERS_DIR}.
  • Safety provider & shields (run.yaml, tests/e2e/configs/run-ci.yaml): Adds the inline provider lightspeed_question_validity (inline::lightspeed_question_validity, model_id: openai/gpt-4o-mini, temperature: 0.0, invalid_question_response) and registers lightspeed_question_validity-shield.
  • Tekton prefetch inputs (.tekton/lightspeed-stack-pull-request.yaml, .tekton/lightspeed-stack-push.yaml): Appends { "type": "generic", "path": "." } to the spec.params.prefetch-input arrays; the pip binary-package string is effectively unchanged.
  • Python deps & hashes (pyproject.toml, requirements-build.txt, requirements.hashes.source.txt, requirements.hashes.wheel.txt): Bumps the aiohttp minimum version and relaxes the urllib3 pin; adds provider-related packages to dependency-groups.llslibdev; updates the autogenerated build header; adds/updates multiple pinned package versions and hashes (including adding attrs==26.1.0 to the source hashes and removing attrs==25.4.0 from the wheel hashes).
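Taken together, the safety-provider and shield entries summarized above might look like the following fragment; this is a sketch assembled from the summary fields, and the exact key layout in run.yaml may differ:

```yaml
providers:
  safety:
    - provider_id: lightspeed_question_validity
      provider_type: inline::lightspeed_question_validity
      config:
        model_id: openai/gpt-4o-mini
        temperature: 0.0
        invalid_question_response: "..."   # configurable response text
shields:
  - shield_id: lightspeed_question_validity-shield
    provider_id: lightspeed_question_validity
```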

Sequence Diagram

sequenceDiagram
    participant Builder
    participant Cache
    participant GitHub
    participant FS
    participant App

    Builder->>+Cache: check /cachi2/output/deps/.../lightspeed-providers.zip
    alt cached ZIP present
        Cache-->>-Builder: return ZIP
    else cached ZIP missing
        Builder->>+GitHub: download git-archive at LIGHTSPEED_PROVIDERS_COMMIT
        GitHub-->>-Builder: return ZIP
    end

    Builder->>FS: unzip to temp dir
    Builder->>FS: move lightspeed_stack_providers → /app-root/
    Builder->>FS: move resources/external_providers → /app-root/providers.d
    Builder->>FS: remove temp/ZIP
    Builder->>FS: set ENV PYTHONPATH="/app-root"

    App->>FS: import lightspeed_stack_providers from /app-root
    App->>FS: load external providers from /app-root/providers.d
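The cache-vs-download decision at the top of the diagram can be demoed with local stand-in paths instead of /cachi2 and GitHub (file names and contents below are fabricated):

```shell
# Stand-in for /cachi2/output/deps/generic with a pre-fetched artifact present.
CACHE_DIR="$(mktemp -d)"
ZIP_PATH="$(mktemp)"
printf 'zip-bytes' > "${CACHE_DIR}/lightspeed-providers.zip"   # hermetic artifact

if [ -f "${CACHE_DIR}/lightspeed-providers.zip" ]; then
    # Cached ZIP present: copy it, no network access needed.
    cp "${CACHE_DIR}/lightspeed-providers.zip" "${ZIP_PATH}"
    SOURCE="cache"
else
    # Real build would curl the pinned GitHub archive here.
    SOURCE="github"
fi
echo "fetched from: ${SOURCE}"
```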

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers

  • tisnik
🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: the title '[WIP] LCORE-1392: Include LS providers in LCS image' clearly summarizes the main objective of the changeset: including Lightspeed providers in the Lightspeed Core Stack image.
  • Docstring Coverage — ✅ Passed: no functions found in the changed files to evaluate docstring coverage; skipping the docstring coverage check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
Containerfile (1)

40-61: Consider adding checksum verification for non-hermetic downloads.

When the hermetic cachi2 artifact isn't available and the zip is downloaded directly from GitHub, there's no checksum verification. This could be a supply chain security concern for local development builds.

🛡️ Proposed fix to add checksum verification
 ARG LIGHTSPEED_PROVIDERS_COMMIT=9e073aaaa43a8a5bac38a3bbddbe6cf24842847b
+ARG LIGHTSPEED_PROVIDERS_CHECKSUM=sha256:52cd105c351c3645b0b0fbc7c06df5ca200274b843f82f97dfbbb8918aac3a11
 RUN set -eux; \
     ZIP_PATH="/tmp/lightspeed-providers.zip"; \
     EXTRACT_DIR="/tmp/providers"; \
     \
     # Use hermetic pre-fetched artifact if available, otherwise download pinned commit
     if [ -f "/cachi2/output/deps/generic/lightspeed-providers.zip" ]; then \
         cp "/cachi2/output/deps/generic/lightspeed-providers.zip" "${ZIP_PATH}"; \
     else \
         curl -fL "https://github.com/lightspeed-core/lightspeed-providers/archive/${LIGHTSPEED_PROVIDERS_COMMIT}.zip" -o "${ZIP_PATH}"; \
+        echo "${LIGHTSPEED_PROVIDERS_CHECKSUM#sha256:}  ${ZIP_PATH}" | sha256sum -c -; \
     fi; \

Note: This would require keeping the checksum in sync with artifacts.lock.yaml. Alternatively, document that local builds should use the hermetic path for verified artifacts.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Containerfile` around lines 40 - 61, When the hermetic artifact is not found
and the zip is downloaded via the else branch (using LIGHTSPEED_PROVIDERS_COMMIT
into ZIP_PATH), add a checksum verification step: obtain the expected SHA256
(from a pinned source such as artifacts.lock.yaml or an env var like
LIGHTSPEED_PROVIDERS_SHA256), compute the downloaded file's SHA256 (using
sha256sum) and compare, and exit non‑zero if it doesn't match so the build
fails; implement this check immediately after curl and before unzip in the RUN
block that references ZIP_PATH and LIGHTSPEED_PROVIDERS_COMMIT.
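As an illustration of the verify-after-download pattern the prompt describes, a minimal, self-contained sha256 check might look like this (the file and the "pinned" value below are fabricated; in a real build the expected checksum would be a constant from artifacts.lock.yaml or an env var):

```shell
# Stand-in for the downloaded archive.
ZIP_PATH="$(mktemp)"
printf 'pretend-archive-bytes' > "${ZIP_PATH}"

# In real use EXPECTED is a pinned constant, not computed from the download.
EXPECTED="$(sha256sum "${ZIP_PATH}" | cut -d' ' -f1)"

# sha256sum -c exits non-zero on mismatch, which fails the RUN step.
echo "${EXPECTED}  ${ZIP_PATH}" | sha256sum -c -
```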
pyproject.toml (1)

191-197: Duplicate dependencies in llslibdev group.

Several of the newly added dependencies are already present in the llslibdev group:

  1. numpy>=1.24.0 (line 194) conflicts with numpy==2.3.5 (line 172) - the exact pin will take precedence, making this addition redundant.
  2. mcp>=1.23.0 (line 195) duplicates the existing entry at line 174.

Consider removing the duplicates to keep the dependency list clean.

♻️ Proposed cleanup
     # Lightspeed providers
     "httpx>=0.27.0",
     "pydantic>=2.10.6",
-    "numpy>=1.24.0",
-    "mcp>=1.23.0",
     "protobuf>=6.33.5",
     "filelock>=3.20.3",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyproject.toml` around lines 191 - 197, Remove the duplicate/conflicting
dependencies from the llslibdev group: delete the newly added "numpy>=1.24.0"
(keep the existing pinned "numpy==2.3.5") and remove "mcp>=1.23.0" since mcp is
already listed; ensure the remaining entries use the intended, single source of
truth for numpy and mcp so there are no conflicting or redundant pins.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@scripts/generate-artifacts-lock.sh`:
- Around line 17-27: The script currently parses URL and computes SUM but uses a
broad sed that can replace any checksum in the whole file; change the checksum
rewrite to only target the checksum entry associated with the same artifact
block that contained the parsed download_url. Concretely, after extracting URL
into the variable URL and computing SUM into SUM (and using LOCK and TMP),
locate the block/line number for the specific download_url (e.g., by finding the
line number of the matched download_url), determine the block range for that
artifact (until the next top-level artifact entry), and perform the checksum
replacement only within that range (replace the sed -i
"s/^\([[:space:]]*checksum: \)sha256:[a-fA-F0-9]*/\1sha256:${SUM}/" "${LOCK}"
with a targeted range-based sed/perl/awk invocation that uses the download_url
line number and updates the checksum for that artifact only). Ensure you still
use URL, TMP, SUM and LOCK variables as before.
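One way to sketch the targeted replacement the prompt asks for is an awk pass that only rewrites the first checksum after the matched download_url; the sample lockfile, URL, and checksum below are made up, and the real script's variable names (URL, SUM, LOCK) are assumed from the comment:

```shell
# Build a toy lockfile shaped like artifacts.lock.yaml.
LOCK="$(mktemp)"
cat > "${LOCK}" <<'EOF'
artifacts:
  - download_url: https://example.com/a.zip
    checksum: sha256:aaaa
  - download_url: https://example.com/b.zip
    checksum: sha256:bbbb
EOF

URL="https://example.com/a.zip"   # hypothetical target artifact
SUM="1234"                        # hypothetical new checksum

awk -v url="${URL}" -v sum="${SUM}" '
  index($0, "download_url: " url) { inblock = 1 }    # enter the matched block
  inblock && /checksum: sha256:/ {
    sub(/sha256:[a-fA-F0-9]*/, "sha256:" sum)        # rewrite this checksum only
    inblock = 0
  }
  { print }
' "${LOCK}" > "${LOCK}.tmp" && mv "${LOCK}.tmp" "${LOCK}"
```

Only the checksum in the a.zip block changes; the b.zip entry keeps sha256:bbbb.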

---

Nitpick comments:
In `@Containerfile`:
- Around line 40-61: When the hermetic artifact is not found and the zip is
downloaded via the else branch (using LIGHTSPEED_PROVIDERS_COMMIT into
ZIP_PATH), add a checksum verification step: obtain the expected SHA256 (from a
pinned source such as artifacts.lock.yaml or an env var like
LIGHTSPEED_PROVIDERS_SHA256), compute the downloaded file's SHA256 (using
sha256sum) and compare, and exit non‑zero if it doesn't match so the build
fails; implement this check immediately after curl and before unzip in the RUN
block that references ZIP_PATH and LIGHTSPEED_PROVIDERS_COMMIT.

In `@pyproject.toml`:
- Around line 191-197: Remove the duplicate/conflicting dependencies from the
llslibdev group: delete the newly added "numpy>=1.24.0" (keep the existing
pinned "numpy==2.3.5") and remove "mcp>=1.23.0" since mcp is already listed;
ensure the remaining entries use the intended, single source of truth for numpy
and mcp so there are no conflicting or redundant pins.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: b0d59f3d-a2ae-44ba-a321-d72b694fb402

📥 Commits

Reviewing files that changed from the base of the PR and between 5a52503 and 947cadd.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (10)
  • Containerfile
  • Makefile
  • artifacts.lock.yaml
  • docker-compose-library.yaml
  • pyproject.toml
  • requirements-build.txt
  • requirements.hashes.source.txt
  • requirements.hashes.wheel.txt
  • run.yaml
  • scripts/generate-artifacts-lock.sh
💤 Files with no reviewable changes (1)
  • requirements.hashes.wheel.txt

@asimurka asimurka force-pushed the include_ls_providers_in_lcs_image branch from 947cadd to aa93131 Compare March 19, 2026 15:45
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
Containerfile (1)

53-58: The ROOT_DIR logic is safe with the current artifact (GitHub archives always wrap contents in a single root directory); consider hardening it only for future-proofing.

The current artifact in artifacts.lock.yaml is a GitHub archive (type: git-archive with GitHub's archive endpoint), which always produces a single wrapped directory named lightspeed-providers-{commit-hash}. Therefore, lines 57-58 will not fail with the current configuration.

However, if the artifact source is changed to a tool that produces flat archives (multiple root entries), the ls -1 logic would capture multiple lines, causing ROOT_DIR to contain spaces and the subsequent mv commands to fail. The proposed defensive check would protect against such future changes, but this is not a current issue.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Containerfile` around lines 53 - 58, The current ROOT_DIR assignment using
ROOT_DIR="$(ls -1 "${EXTRACT_DIR}")" can produce multiple lines (and spaces) if
the archive is flat; update the logic to robustly select a single directory name
and fail fast if ambiguous: e.g., list only top-level directories under
EXTRACT_DIR (use find "${EXTRACT_DIR}" -mindepth 1 -maxdepth 1 -type d -printf
'%f\n'), capture the first entry into ROOT_DIR safely (avoid word-splitting),
and if more than one top-level entry exists, print an error and exit; then use
the quoted mv commands that reference "${EXTRACT_DIR}/${ROOT_DIR}" as before.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@Containerfile`:
- Around line 44-49: The current Containerfile shell block only checks for the
zip at "/cachi2/output/deps/generic/lightspeed-providers.zip" and falls back to
downloading via curl, which masks misconfigured mounted /cachi2; change the
logic to fail fast when the cachi2 mount is present but the artifact is missing
by checking for "/cachi2/cachi2.env" before attempting curl, and if that file
exists and the zip (${ZIP_PATH}) is missing, emit a clear error and exit
non-zero rather than invoking curl; keep the existing behavior of downloading
from GitHub only when the cachi2 marker is absent, using the same variables
(${ZIP_PATH} and ${LIGHTSPEED_PROVIDERS_COMMIT}).

In `@tests/e2e/configs/run-ci.yaml`:
- Line 18: The CI config is passing an empty EXTERNAL_PROVIDERS_DIR so
external_providers_dir ends up blank; update the CI environment to export
EXTERNAL_PROVIDERS_DIR with the actual providers directory (e.g.,
/app-root/providers.d) or make external_providers_dir reference the correct path
so the new external provider can be found; ensure the variable name
EXTERNAL_PROVIDERS_DIR and the config key external_providers_dir are set
consistently with the library setup.

---

Nitpick comments:
In `@Containerfile`:
- Around line 53-58: The current ROOT_DIR assignment using ROOT_DIR="$(ls -1
"${EXTRACT_DIR}")" can produce multiple lines (and spaces) if the archive is
flat; update the logic to robustly select a single directory name and fail fast
if ambiguous: e.g., list only top-level directories under EXTRACT_DIR (use find
"${EXTRACT_DIR}" -mindepth 1 -maxdepth 1 -type d -printf '%f\n'), capture the
first entry into ROOT_DIR safely (avoid word-splitting), and if more than one
top-level entry exists, print an error and exit; then use the quoted mv commands
that reference "${EXTRACT_DIR}/${ROOT_DIR}" as before.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: a3935801-616a-4ed1-993c-bd72af319608

📥 Commits

Reviewing files that changed from the base of the PR and between 947cadd and aa93131.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (14)
  • .tekton/lightspeed-stack-pull-request.yaml
  • .tekton/lightspeed-stack-push.yaml
  • Containerfile
  • Makefile
  • artifacts.lock.yaml
  • docker-compose-library.yaml
  • docker-compose.yaml
  • pyproject.toml
  • requirements-build.txt
  • requirements.hashes.source.txt
  • requirements.hashes.wheel.txt
  • run.yaml
  • scripts/generate-artifacts-lock.sh
  • tests/e2e/configs/run-ci.yaml
💤 Files with no reviewable changes (1)
  • requirements.hashes.wheel.txt
✅ Files skipped from review due to trivial changes (8)
  • requirements-build.txt
  • docker-compose.yaml
  • docker-compose-library.yaml
  • artifacts.lock.yaml
  • .tekton/lightspeed-stack-push.yaml
  • pyproject.toml
  • Makefile
  • requirements.hashes.source.txt
🚧 Files skipped from review as they are similar to previous changes (2)
  • run.yaml
  • scripts/generate-artifacts-lock.sh

@asimurka asimurka force-pushed the include_ls_providers_in_lcs_image branch from aa93131 to bb2052c Compare March 19, 2026 16:01
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
Containerfile (1)

45-49: ⚠️ Potential issue | 🟠 Major

Fail fast when /cachi2 is mounted but the generic artifact is missing.

If /cachi2/cachi2.env exists and lightspeed-providers.zip is absent, this silently falls back to GitHub. That hides broken hermetic inputs and makes cachi2 and non-cachi2 builds behave differently. Error out in that case instead of downloading.

Proposed change
-    if [ -f "/cachi2/output/deps/generic/lightspeed-providers.zip" ]; then \
-        cp "/cachi2/output/deps/generic/lightspeed-providers.zip" "${ZIP_PATH}"; \
-    else \
+    if [ -f "/cachi2/output/deps/generic/lightspeed-providers.zip" ]; then \
+        cp "/cachi2/output/deps/generic/lightspeed-providers.zip" "${ZIP_PATH}"; \
+    elif [ -f "/cachi2/cachi2.env" ]; then \
+        echo "Missing prefetched lightspeed-providers.zip under /cachi2/output/deps/generic" >&2; \
+        exit 1; \
+    else \
         curl -fL "https://github.com/lightspeed-core/lightspeed-providers/archive/${LIGHTSPEED_PROVIDERS_COMMIT}.zip" -o "${ZIP_PATH}"; \
     fi; \
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Containerfile` around lines 45 - 49, The current Containerfile block silently
falls back to downloading when /cachi2 is mounted but the hermetic artifact is
missing; update the conditional around the lightspeed-providers.zip copy so that
if /cachi2/cachi2.env exists and
/cachi2/output/deps/generic/lightspeed-providers.zip does not, the build fails
fast (emit an error and exit non‑zero) instead of performing the curl fallback;
otherwise keep the existing behavior that copies the file when present or
downloads into ${ZIP_PATH} using ${LIGHTSPEED_PROVIDERS_COMMIT}. Ensure the
check references /cachi2/cachi2.env, the source path
/cachi2/output/deps/generic/lightspeed-providers.zip, and uses ${ZIP_PATH} and
${LIGHTSPEED_PROVIDERS_COMMIT} as in the original block.
🧹 Nitpick comments (1)
tests/e2e/configs/run-ci.yaml (1)

43-46: Keep the safety model in sync with E2E_OPENAI_MODEL.

Lines 25-26 already make the main OpenAI model configurable. This new provider stays pinned to openai/gpt-4o-mini, so any CI override immediately splits the inference and safety paths onto different models. Reuse the same env-driven value here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/e2e/configs/run-ci.yaml` around lines 43 - 46, The
lightspeed_question_validity provider is hardcoded to openai/gpt-4o-mini causing
mismatch with the configurable main model; update the provider block
(provider_id: lightspeed_question_validity, provider_type:
inline::lightspeed_question_validity) to read its config.model_id from the same
E2E_OPENAI_MODEL environment-driven value used elsewhere (reuse the existing
variable interpolation/templating approach present for the main OpenAI model) so
inference and safety use the same model when CI overrides are applied.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/e2e/configs/run-ci.yaml`:
- Around line 155-156: The CI config added a second shield (shield_id
"lightspeed_question_validity-shield") but the E2E harness still assumes only
"llama-guard"; update either the test helpers or the config. Fix option A:
modify tests/e2e/features/environment.py's disable-shields logic to unregister
all registered shields (not just "llama-guard") and update
tests/e2e/features/steps/info.py to stop hardcoding assertions for "llama-guard"
and instead assert shields generically or include
"lightspeed_question_validity-shield". OR fix option B: remove the extra shield
entry (shield_id/provider_id) from run-ci.yaml so the harness continues to work
unchanged. Implement one of these: update unregister logic in environment.py and
assertion helpers in info.py (search for functions/fixtures handling
disable-shields and the hardcoded "llama-guard" checks) or revert the config
change.

---

Duplicate comments:
In `@Containerfile`:
- Around line 45-49: The current Containerfile block silently falls back to
downloading when /cachi2 is mounted but the hermetic artifact is missing; update
the conditional around the lightspeed-providers.zip copy so that if
/cachi2/cachi2.env exists and
/cachi2/output/deps/generic/lightspeed-providers.zip does not, the build fails
fast (emit an error and exit non‑zero) instead of performing the curl fallback;
otherwise keep the existing behavior that copies the file when present or
downloads into ${ZIP_PATH} using ${LIGHTSPEED_PROVIDERS_COMMIT}. Ensure the
check references /cachi2/cachi2.env, the source path
/cachi2/output/deps/generic/lightspeed-providers.zip, and uses ${ZIP_PATH} and
${LIGHTSPEED_PROVIDERS_COMMIT} as in the original block.

---

Nitpick comments:
In `@tests/e2e/configs/run-ci.yaml`:
- Around line 43-46: The lightspeed_question_validity provider is hardcoded to
openai/gpt-4o-mini causing mismatch with the configurable main model; update the
provider block (provider_id: lightspeed_question_validity, provider_type:
inline::lightspeed_question_validity) to read its config.model_id from the same
E2E_OPENAI_MODEL environment-driven value used elsewhere (reuse the existing
variable interpolation/templating approach present for the main OpenAI model) so
inference and safety use the same model when CI overrides are applied.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 320f6d97-edbf-4f19-861a-f731e193977e

📥 Commits

Reviewing files that changed from the base of the PR and between aa93131 and bb2052c.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (14)
  • .tekton/lightspeed-stack-pull-request.yaml
  • .tekton/lightspeed-stack-push.yaml
  • Containerfile
  • Makefile
  • artifacts.lock.yaml
  • docker-compose-library.yaml
  • docker-compose.yaml
  • pyproject.toml
  • requirements-build.txt
  • requirements.hashes.source.txt
  • requirements.hashes.wheel.txt
  • run.yaml
  • scripts/generate-artifacts-lock.sh
  • tests/e2e/configs/run-ci.yaml
💤 Files with no reviewable changes (1)
  • requirements.hashes.wheel.txt
✅ Files skipped from review due to trivial changes (7)
  • Makefile
  • docker-compose-library.yaml
  • requirements-build.txt
  • artifacts.lock.yaml
  • scripts/generate-artifacts-lock.sh
  • pyproject.toml
  • requirements.hashes.source.txt
🚧 Files skipped from review as they are similar to previous changes (4)
  • .tekton/lightspeed-stack-pull-request.yaml
  • .tekton/lightspeed-stack-push.yaml
  • docker-compose.yaml
  • run.yaml

@asimurka asimurka force-pushed the include_ls_providers_in_lcs_image branch from bb2052c to f7f4826 Compare March 19, 2026 17:10
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
tests/e2e/configs/run-ci.yaml (1)

43-50: Keep the new provider’s model_id aligned with E2E model config.

Line 46 is hardcoded while the OpenAI provider model is env-driven; this can drift and cause test/config mismatches.

Suggested refactor
-      model_id: openai/gpt-4o-mini
+      model_id: openai/${env.E2E_OPENAI_MODEL:=gpt-4o-mini}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/e2e/configs/run-ci.yaml` around lines 43 - 50, The model_id for the
inline provider (symbols: provider_id: lightspeed_question_validity,
provider_type: inline::lightspeed_question_validity, config.model_id) is
hardcoded to "openai/gpt-4o-mini" and can drift from the env-driven OpenAI model
used elsewhere; change config.model_id to reference the same env-driven variable
or shared config key used by the OpenAI provider (e.g., the OPENAI_MODEL or the
project's E2E model config variable) so both use a single source of truth, and
ensure the CI/test runner supplies that environment/config value.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docker-compose.yaml`:
- Line 35: Remove the hard-coded empty assignment for EXTERNAL_PROVIDERS_DIR in
docker-compose.yaml so it doesn't force external_providers_dir to an empty
value; instead either delete the `EXTERNAL_PROVIDERS_DIR=""` line or leave the
variable unset and reference it via the env substitution placeholder (e.g., use
${EXTERNAL_PROVIDERS_DIR} where external_providers_dir is consumed) so
compose-based E2E runs can pick up a provided environment value for
EXTERNAL_PROVIDERS_DIR.

---

Nitpick comments:
In `@tests/e2e/configs/run-ci.yaml`:
- Around line 43-50: The model_id for the inline provider (symbols: provider_id:
lightspeed_question_validity, provider_type:
inline::lightspeed_question_validity, config.model_id) is hardcoded to
"openai/gpt-4o-mini" and can drift from the env-driven OpenAI model used
elsewhere; change config.model_id to reference the same env-driven variable or
shared config key used by the OpenAI provider (e.g., the OPENAI_MODEL or the
project's E2E model config variable) so both use a single source of truth, and
ensure the CI/test runner supplies that environment/config value.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: f70bb175-e80f-4514-a65c-c009801a8daa

📥 Commits

Reviewing files that changed from the base of the PR and between bb2052c and f7f4826.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (14)
  • .tekton/lightspeed-stack-pull-request.yaml
  • .tekton/lightspeed-stack-push.yaml
  • Containerfile
  • Makefile
  • artifacts.lock.yaml
  • docker-compose-library.yaml
  • docker-compose.yaml
  • pyproject.toml
  • requirements-build.txt
  • requirements.hashes.source.txt
  • requirements.hashes.wheel.txt
  • run.yaml
  • scripts/generate-artifacts-lock.sh
  • tests/e2e/configs/run-ci.yaml
💤 Files with no reviewable changes (1)
  • requirements.hashes.wheel.txt
✅ Files skipped from review due to trivial changes (7)
  • artifacts.lock.yaml
  • docker-compose-library.yaml
  • requirements-build.txt
  • Containerfile
  • scripts/generate-artifacts-lock.sh
  • pyproject.toml
  • requirements.hashes.source.txt
🚧 Files skipped from review as they are similar to previous changes (4)
  • Makefile
  • run.yaml
  • .tekton/lightspeed-stack-push.yaml
  • .tekton/lightspeed-stack-pull-request.yaml

@asimurka asimurka force-pushed the include_ls_providers_in_lcs_image branch from f7f4826 to ba7d4ab Compare March 19, 2026 17:18
Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
run.yaml (1)

45-46: Avoid hard-coding the safety model ID.

Line 45 hard-codes openai/gpt-4o-mini while inference model selection is env-driven (Line 26). This can drift across environments and break safety calls when model policy changes.

♻️ Suggested change
   - provider_id: lightspeed_question_validity
     provider_type: inline::lightspeed_question_validity
     config:
-      model_id: openai/gpt-4o-mini
+      model_id: ${env.LIGHTSPEED_QUESTION_VALIDITY_MODEL_ID:=openai/gpt-4o-mini}
       temperature: 0.0
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@run.yaml` around lines 45 - 46, The safety model_id is hard-coded as
"openai/gpt-4o-mini"; change it to derive from the same env-driven inference
model selection used elsewhere (use the existing environment variable or config
key referenced in the repo instead of the literal string) and provide a sensible
fallback default; update the "model_id" entry so it references that env/config
value (e.g., the same variable used for inference model selection) to keep
safety calls aligned across environments.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@run.yaml`:
- Around line 45-46: The safety model_id is hard-coded as "openai/gpt-4o-mini";
change it to derive from the same env-driven inference model selection used
elsewhere (use the existing environment variable or config key referenced in the
repo instead of the literal string) and provide a sensible fallback default;
update the "model_id" entry so it references that env/config value (e.g., the
same variable used for inference model selection) to keep safety calls aligned
across environments.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 38d14201-ede7-4a97-ae0d-19aabc8cabd1

📥 Commits

Reviewing files that changed from the base of the PR and between f7f4826 and ba7d4ab.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (14)
  • .tekton/lightspeed-stack-pull-request.yaml
  • .tekton/lightspeed-stack-push.yaml
  • Containerfile
  • Makefile
  • artifacts.lock.yaml
  • docker-compose-library.yaml
  • docker-compose.yaml
  • pyproject.toml
  • requirements-build.txt
  • requirements.hashes.source.txt
  • requirements.hashes.wheel.txt
  • run.yaml
  • scripts/generate-artifacts-lock.sh
  • tests/e2e/configs/run-ci.yaml
💤 Files with no reviewable changes (1)
  • requirements.hashes.wheel.txt
✅ Files skipped from review due to trivial changes (6)
  • docker-compose.yaml
  • Makefile
  • Containerfile
  • artifacts.lock.yaml
  • requirements-build.txt
  • requirements.hashes.source.txt
🚧 Files skipped from review as they are similar to previous changes (6)
  • .tekton/lightspeed-stack-push.yaml
  • pyproject.toml
  • docker-compose-library.yaml
  • .tekton/lightspeed-stack-pull-request.yaml
  • scripts/generate-artifacts-lock.sh
  • tests/e2e/configs/run-ci.yaml

@asimurka asimurka force-pushed the include_ls_providers_in_lcs_image branch 4 times, most recently from 5555437 to 1847e7c Compare March 19, 2026 17:53
@asimurka asimurka force-pushed the include_ls_providers_in_lcs_image branch from 1847e7c to 3910c6e Compare March 19, 2026 21:46