fix: enable LLAMA_BUILD_EXAMPLES so llama-cli is built on Windows#477

Open
aayushbaluni wants to merge 1 commit into microsoft:main from aayushbaluni:fix/461-enable-llama-build-examples

Conversation

@aayushbaluni

Summary

Fixes #461.

Root cause: llama.cpp is vendored as a Git submodule, and llama.cpp's CMake defaults LLAMA_BUILD_EXAMPLES to OFF when it is not the top-level project. Since llama-cli is an "example" target, it is never built during cmake --build, so run_inference.py fails with FileNotFoundError on Windows.

Fix: Add -DLLAMA_BUILD_EXAMPLES=ON to the CMake configure step in compile() so the CLI executable is always built.

Changes

  • setup_env.py: Added -DLLAMA_BUILD_EXAMPLES=ON flag to the cmake configure command in compile().
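The change above can be sketched as follows. The function name, paths, and surrounding arguments here are illustrative, not the exact code in setup_env.py; the one real addition is the -DLLAMA_BUILD_EXAMPLES=ON flag:

```python
import platform


def build_cmake_configure_cmd(source_dir="3rdparty/llama.cpp", build_dir="build"):
    """Assemble the cmake configure command for the llama.cpp submodule.

    Illustrative sketch only: the real compile() in setup_env.py passes
    additional platform-specific flags. The key addition is
    -DLLAMA_BUILD_EXAMPLES=ON, which re-enables the example targets
    (including llama-cli) that the submodule build turns off by default.
    """
    cmd = [
        "cmake",
        "-S", source_dir,
        "-B", build_dir,
        # The fix: without this flag, llama-cli is never built and
        # run_inference.py fails with FileNotFoundError on Windows.
        "-DLLAMA_BUILD_EXAMPLES=ON",
    ]
    return cmd


# In compile(), the resulting command would then be executed,
# e.g. with subprocess.run(cmd, check=True).
print(build_cmake_configure_cmd())
```

Keeping the flag in the configure step (rather than patching the submodule's CMakeLists.txt) means the fix survives submodule updates.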

Testing

  • Verified the flag is a standard llama.cpp CMake option that enables building example targets including llama-cli
  • No behavior change on Linux/macOS where examples were already built
  • On Windows, this ensures llama-cli.exe is generated in build/bin/Release/
  • Minimal one-line change, no unrelated modifications
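The per-platform output location noted above can be expressed as a small helper. This is a sketch based on default CMake output layouts, not the actual path-resolution code in run_inference.py:

```python
import os
import platform


def llama_cli_path(build_dir: str = "build") -> str:
    """Return the expected llama-cli location after a successful build.

    Multi-config generators on Windows (Visual Studio) place binaries
    under bin/<config>/, so the executable lands in bin/Release/;
    single-config generators on Linux/macOS use bin/ directly.
    """
    if platform.system() == "Windows":
        return os.path.join(build_dir, "bin", "Release", "llama-cli.exe")
    return os.path.join(build_dir, "bin", "llama-cli")


print(llama_cli_path())
```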

Made with Cursor



Development

Successfully merging this pull request may close these issues.

[Bug] Windows build fails to generate executable (llama-cli.exe) due to LLAMA_BUILD_EXAMPLES being OFF by default.
