
Support for Local Model Providers (LM Studio & Ollama)#2385

Closed
jaydennleemc wants to merge 17 commits into QwenLM:main from jaydennleemc:feat/local-llm

Conversation

@jaydennleemc

Add Support for Local Model Providers (LM Studio & Ollama)

TLDR

This PR adds support for connecting to local AI models through LM Studio and Ollama. Both provide OpenAI-compatible APIs, allowing users to run models locally on their machines without needing external API keys.

Dive Deeper

Changes Made

  1. New Authentication Types

    • Added USE_LM_STUDIO and USE_OLLAMA auth types to AuthType enum
    • Both use OpenAI-compatible API protocol
  2. Environment Variable Support

    • LM Studio: LMSTUDIO_API_KEY, LMSTUDIO_BASE_URL, LMSTUDIO_MODEL
    • Ollama: OLLAMA_API_KEY, OLLAMA_BASE_URL, OLLAMA_MODEL
  3. Default Base URLs

    • LM Studio: http://localhost:1234/v1 (default LM Studio port)
    • Ollama: http://localhost:11434/v1 (default Ollama port)
  4. UI Updates

    • Added LM Studio and Ollama options to the authentication dialog
    • Both use a two-step configuration flow: server URL, then API key
    • The API key is optional for local models
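The configuration resolution described above can be sketched as follows. The environment variable names and default ports are taken from this PR's description; the helper name and dictionary layout are illustrative, not the actual implementation.

```python
import os

# Defaults named in the PR: LM Studio and Ollama's standard local ports.
DEFAULTS = {
    "lm-studio": "http://localhost:1234/v1",
    "ollama": "http://localhost:11434/v1",
}
ENV_PREFIX = {"lm-studio": "LMSTUDIO", "ollama": "OLLAMA"}

def resolve_provider_config(provider: str) -> dict:
    """Resolve base URL, API key, and model for a local provider,
    falling back to the defaults when no env var is set."""
    prefix = ENV_PREFIX[provider]
    return {
        "base_url": os.environ.get(f"{prefix}_BASE_URL", DEFAULTS[provider]),
        # API key is optional for local servers; empty string means "none".
        "api_key": os.environ.get(f"{prefix}_API_KEY", ""),
        "model": os.environ.get(f"{prefix}_MODEL"),
    }
```

Environment variables take precedence, so `OLLAMA_BASE_URL=http://192.168.1.10:11434/v1` would redirect the client to a server on another machine without touching the dialog flow.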

How It Works

Both LM Studio and Ollama provide OpenAI-compatible REST APIs. This implementation leverages the existing OpenAI content generator infrastructure with provider-specific configurations.
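To make the "OpenAI-compatible" claim concrete, here is a minimal sketch of the request both servers accept. The `/chat/completions` path and payload shape follow the standard OpenAI wire format; the function itself is illustrative and not taken from this PR's code.

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str,
                       api_key: str = "") -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for a local server
    (LM Studio or Ollama), with the Authorization header made optional."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    if api_key:  # local servers typically accept requests without a key
        req.add_header("Authorization", f"Bearer {api_key}")
    return req
```

Because the wire format is identical, the existing OpenAI content generator only needs a different base URL and an optional key to talk to either provider.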

User Experience

Users can now:

  1. Select "LM Studio" or "Ollama" from the authentication dialog
  2. Configure the local server URL (defaults provided)
  3. Optionally provide an API key
  4. Use any model available locally

Reviewer Test Plan

  1. Test with LM Studio

    • Install and run LM Studio
    • Load a model (e.g., llama2, qwen)
    • Start the local server (default port 1234)
    • Run qwen --auth-type=lm-studio
    • Select LM Studio from auth dialog
    • Verify connection works with a simple prompt
  2. Test with Ollama

    • Install and run Ollama
    • Pull a model: ollama pull qwen2.5
    • Start Ollama: ollama serve
    • Run qwen --auth-type=ollama
    • Select Ollama from auth dialog
    • Verify connection works with a simple prompt
  3. Test Configuration Persistence

    • Configure once, verify it persists across sessions
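The Ollama model list shown in the auth dialog comes from the `/api/tags` endpoint (per the commit log). A sketch of extracting the selectable names from that response, using a made-up sample payload in the real response's shape:

```python
import json

# Illustrative /api/tags response; the field layout matches Ollama's
# model-list endpoint, but these model entries are invented sample data.
sample = json.loads("""
{"models": [
  {"name": "qwen2.5:latest", "size": 4683087332},
  {"name": "llama3:8b", "size": 4661224676}
]}
""")

def model_names(tags_response: dict) -> list:
    """Extract the model names a selection dialog would present."""
    return [m["name"] for m in tags_response.get("models", [])]
```

A server with no pulled models returns an empty `models` array, so the dialog should handle an empty list gracefully.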

Testing Matrix

|          | 🍏 macOS | 🪟 Windows | 🐧 Linux |
|----------|----------|------------|----------|
| npm run  |          |            |          |
| npx      |          |            |          |
| Docker   | N/A      | N/A        | N/A      |
| Podman   | N/A      | -          | -        |
| Seatbelt | N/A      | -          | -        |

Linked issues / bugs

@jaydennleemc jaydennleemc deleted the feat/local-llm branch March 15, 2026 02:29
@jaydennleemc jaydennleemc restored the feat/local-llm branch March 15, 2026 02:31
@jaydennleemc jaydennleemc reopened this Mar 15, 2026
jaydennleemc and others added 8 commits March 15, 2026 11:40
…local

- Changed user-level config directory from .qwen to .qwen_local to avoid
  conflicts with the official qwen CLI tool
- Project-level .qwen/ directory remains unchanged (skills, agents, commands, etc.)
- Updated all documentation and code references accordingly

User-level configs now stored in ~/.qwen_local/:
- settings.json, .env files
- MCP tokens, OAuth credentials
- Skills, agents, commands (user-level)
- History, checkpoints, temp files

Project-level configs still in .qwen/:
- Project-specific settings, skills, agents
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
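The user-level vs. project-level split described in the commit above could be sketched as follows. The directory names come from the commit message; the helper functions are hypothetical.

```python
from pathlib import Path

def user_config_dir() -> Path:
    """User-level configs (settings.json, .env, credentials, history)
    move to ~/.qwen_local to avoid clashing with the official qwen CLI."""
    return Path.home() / ".qwen_local"

def project_config_dir(project_root: Path) -> Path:
    """Project-level skills, agents, and commands stay under .qwen/."""
    return project_root / ".qwen"
```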
- Remove API key validation for local Ollama/LM Studio providers
- Add model selection flow for Ollama with model list from /api/tags endpoint
- Save selected Ollama model to settings.json for persistence
- Fix OpenAI client to use empty key for local providers
- Add USE_OLLAMA to validateAuthMethod to fix 'Invalid auth method' error
- Load saved Ollama/LM Studio settings from config on AuthDialog init
- Fix model selection not being saved by passing selected model directly
- Add LM Studio model selection persistence (was only saving Ollama)
- Add OLLAMA_API_KEY to DEFAULT_ENV_KEYS
Jayden Lee and others added 7 commits March 17, 2026 11:12
@tanzhenxin
Collaborator

We have full multi-provider and multi-model support following the OpenAI-compatible, Anthropic, and Gemini protocols. As long as the locally hosted inference service uses any of these existing standard protocols, it is relatively easy to set it up.

Refer to this docs for more info: https://qwenlm.github.io/qwen-code-docs/en/users/configuration/model-providers/#openai-compatible-providers-openai

@tanzhenxin tanzhenxin closed this Mar 18, 2026
@jaydennleemc
Author

OK, thanks. Though I think not all users would want to edit a config file by hand.
