Add Ollama provider for local LLM support
Reuses OpenAIProvider via Ollama's OpenAI-compatible API at localhost:11434. No API key needed: just install Ollama, pull a model, and set LLM_PROVIDER=ollama. Vision models (llava, llama3.2-vision) are supported for the screenshot fallback.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
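The provider reuse described in the commit message can be sketched as a config-resolution step; the helper name `resolve_llm_config`, the dummy `"ollama"` API key, and the OpenAI defaults shown are illustrative assumptions, not code from this commit:

```python
import os

# Hypothetical sketch: map LLM_PROVIDER to the base URL / API key an
# OpenAI-compatible client would need. Ollama ignores the API key, but
# OpenAI-style clients require a non-empty string, so a dummy is used.
def resolve_llm_config(env: dict) -> dict:
    provider = env.get("LLM_PROVIDER", "groq")
    if provider == "ollama":
        return {
            "base_url": env.get("OLLAMA_BASE_URL", "http://localhost:11434/v1"),
            "api_key": "ollama",  # placeholder; Ollama does not check it
            "model": env.get("OLLAMA_MODEL", "llama3.2"),
        }
    if provider == "openai":
        return {
            "base_url": "https://api.openai.com/v1",
            "api_key": env["OPENAI_API_KEY"],
            "model": env.get("OPENAI_MODEL", "gpt-4o"),  # assumed default
        }
    raise ValueError(f"unsupported provider: {provider}")
```

With a mapping like this, the same OpenAIProvider class can serve both backends; only the base URL and credentials change.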
.env.example (17 additions)

@@ -39,7 +39,7 @@ MAX_HISTORY_STEPS=10 # How many past steps to keep in conversation context
 STREAMING_ENABLED=true # Stream LLM responses (shows progress dots)
 
 # ===========================================
-# LLM Provider: "groq", "openai", "bedrock", or "openrouter"
+# LLM Provider: "groq", "openai", "bedrock", "openrouter", or "ollama"
 # ===========================================
 LLM_PROVIDER=groq
 
@@ -84,3 +84,18 @@ OPENROUTER_MODEL=anthropic/claude-3.5-sonnet
 # meta-llama/llama-3.3-70b-instruct (open source)
 # mistralai/mistral-large-latest (European)
 # deepseek/deepseek-chat (cost efficient)
+
+# ===========================================
+# Ollama Configuration (local LLMs, no API key needed)
+# Install: https://ollama.com then: ollama pull llama3.2
+# ===========================================
+OLLAMA_BASE_URL=http://localhost:11434/v1
+OLLAMA_MODEL=llama3.2
+# Vision models (for screenshot support):
+# llava (7B, good vision)
+# llama3.2-vision (11B, best open-source vision)
+# Text-only models:
+# llama3.2 (3B, fast)
+# llama3.1 (8B, balanced)
+# qwen2.5 (7B, strong reasoning)
+# mistral (7B, fast)
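The vision-model comments above can be made concrete with a small selection helper for the screenshot fallback; the function name `pick_ollama_model` and the fallback order are assumptions for illustration, not part of the commit:

```python
# Hypothetical helper: choose an Ollama model depending on whether the
# current step needs to interpret a screenshot. Model names follow the
# comments in .env.example.
OLLAMA_VISION_MODELS = ("llava", "llama3.2-vision")

def pick_ollama_model(configured: str, needs_vision: bool) -> str:
    if not needs_vision:
        return configured
    # Keep the configured model if it already supports vision;
    # otherwise fall back to the first known vision model.
    if configured in OLLAMA_VISION_MODELS:
        return configured
    return OLLAMA_VISION_MODELS[0]
```

A text-only default like llama3.2 then handles ordinary steps, while screenshot steps are routed to a vision-capable model.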