Support for Local Model Providers (LM Studio & Ollama)#2385
Closed
jaydennleemc wants to merge 17 commits into QwenLM:main from
Conversation
…local

- Changed user-level config directory from `.qwen` to `.qwen_local` to avoid conflicts with the official qwen CLI tool
- Project-level `.qwen/` directory remains unchanged (skills, agents, commands, etc.)
- Updated all documentation and code references accordingly

User-level configs now stored in `~/.qwen_local/`:

- settings.json and .env files
- MCP tokens, OAuth credentials
- Skills, agents, commands (user-level)
- History, checkpoints, temp files

Project-level configs still in `.qwen/`:

- Project-specific settings, skills, agents
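The user-level vs. project-level split described in this commit could be sketched as follows (a minimal illustration, not the actual qwen-code implementation; the helper names are hypothetical, the directory names come from the commit message):

```typescript
import * as os from 'node:os';
import * as path from 'node:path';

// User-level config moved to ~/.qwen_local to avoid clashing with the
// official qwen CLI's ~/.qwen directory.
function userConfigDir(): string {
  return path.join(os.homedir(), '.qwen_local');
}

// Project-level config stays in <project>/.qwen (skills, agents, commands).
function projectConfigDir(projectRoot: string): string {
  return path.join(projectRoot, '.qwen');
}

console.log(userConfigDir());
console.log(projectConfigDir('/repo'));
```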
Support Local LLM Provider
support Ollama provider
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode) Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
- Remove API key validation for local Ollama/LM Studio providers
- Add model selection flow for Ollama with model list from /api/tags endpoint
- Save selected Ollama model to settings.json for persistence
- Fix OpenAI client to use empty key for local providers
- Add USE_OLLAMA to validateAuthMethod to fix 'Invalid auth method' error
- Load saved Ollama/LM Studio settings from config on AuthDialog init
- Fix model selection not being saved by passing selected model directly
- Add LM Studio model selection persistence (was only saving Ollama)
- Add OLLAMA_API_KEY to DEFAULT_ENV_KEYS
…-code into feat/local-llm
Collaborator
We have full multi-provider and multi-model support following the OpenAI-compatible, Anthropic, and Gemini protocols. As long as the locally hosted inference service uses any of the existing standard protocols, it is relatively easy to set it up. Refer to these docs for more info: https://qwenlm.github.io/qwen-code-docs/en/users/configuration/model-providers/#openai-compatible-providers-openai
Author
OK, thanks. I think not all users would like to edit the config file.
Add Support for Local Model Providers (LM Studio & Ollama)
TLDR
This PR adds support for connecting to local AI models through LM Studio and Ollama. Both provide OpenAI-compatible APIs, allowing users to run models locally on their machines without needing external API keys.
Dive Deeper
Changes Made
New Authentication Types
- `USE_LM_STUDIO` and `USE_OLLAMA` auth types added to the `AuthType` enum

Environment Variable Support

- `LMSTUDIO_API_KEY`, `LMSTUDIO_BASE_URL`, `LMSTUDIO_MODEL`
- `OLLAMA_API_KEY`, `OLLAMA_BASE_URL`, `OLLAMA_MODEL`

Default Base URLs

- `http://localhost:1234/v1` (default LM Studio port)
- `http://localhost:11434/v1` (default Ollama port)

UI Updates
How It Works
Both LM Studio and Ollama provide OpenAI-compatible REST APIs. This implementation leverages the existing OpenAI content generator infrastructure with provider-specific configurations.
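The provider-specific configuration might be resolved along these lines (a hedged sketch under the environment variables and default ports named in this PR; the type and function names are illustrative, not the actual qwen-code code):

```typescript
type LocalProvider = 'lm-studio' | 'ollama';

interface ProviderConfig {
  baseUrl: string;
  model?: string;
  apiKey: string;
}

// Defaults from the PR description: LM Studio on port 1234, Ollama on 11434.
const DEFAULT_BASE_URLS: Record<LocalProvider, string> = {
  'lm-studio': 'http://localhost:1234/v1',
  ollama: 'http://localhost:11434/v1',
};

function resolveLocalProvider(
  provider: LocalProvider,
  env: Record<string, string | undefined>,
): ProviderConfig {
  const prefix = provider === 'lm-studio' ? 'LMSTUDIO' : 'OLLAMA';
  return {
    baseUrl: env[`${prefix}_BASE_URL`] ?? DEFAULT_BASE_URLS[provider],
    model: env[`${prefix}_MODEL`],
    // Local servers ignore authentication, but OpenAI-style clients expect
    // a non-empty key, so a placeholder is substituted.
    apiKey: env[`${prefix}_API_KEY`] ?? 'not-needed',
  };
}

console.log(resolveLocalProvider('ollama', {}).baseUrl);
// http://localhost:11434/v1
```

The resolved `baseUrl` and placeholder key can then be handed to the existing OpenAI content generator unchanged, which is what keeps the implementation small.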
User Experience
Users can now:
Reviewer Test Plan
Test with LM Studio
- `qwen --auth-type=lm-studio`

Test with Ollama

- `ollama pull qwen2.5`
- `ollama serve`
- `qwen --auth-type=ollama`

Test Configuration Persistence
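The Ollama model-selection flow mentioned in the commits reads the model list from the `/api/tags` endpoint. A minimal sketch of parsing that response (the `name` field follows Ollama's documented API; the helper name is illustrative):

```typescript
// Only the `name` field is used here; the real /api/tags response carries
// more metadata per model (size, digest, modified_at, ...).
interface OllamaTagsResponse {
  models: Array<{ name: string }>;
}

function extractModelNames(body: OllamaTagsResponse): string[] {
  return body.models.map((m) => m.name);
}

// In the CLI this would follow a fetch of http://localhost:11434/api/tags;
// here we parse a sample payload instead of contacting a live server.
const sample: OllamaTagsResponse = {
  models: [{ name: 'qwen2.5:latest' }, { name: 'llama3:8b' }],
};
console.log(extractModelNames(sample));
// [ 'qwen2.5:latest', 'llama3:8b' ]
```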
Testing Matrix
Linked issues / bugs