Add Ollama provider for local LLM support

Reuses OpenAIProvider via Ollama's OpenAI-compatible API at localhost:11434. No API key needed - just install Ollama, pull a model, and set LLM_PROVIDER=ollama. Vision models (llava, llama3.2-vision) are supported for the screenshot fallback.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
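The reuse described in the commit body can be sketched as below. Only the base URL (`localhost:11434`) and the no-key behavior come from the commit message; `OllamaConfig`, `ollamaConfig`, and their fields are illustrative names, not identifiers from this repository:

```typescript
// Sketch: pointing an OpenAI-compatible client at a local Ollama server.
// Ollama exposes an OpenAI-compatible API under /v1 and ignores the API key,
// but OpenAI-style clients still require a non-empty key string.
interface OllamaConfig {
  baseURL: string;
  apiKey: string;
  model: string;
}

function ollamaConfig(model = "llama3.2"): OllamaConfig {
  return {
    baseURL: "http://localhost:11434/v1", // Ollama's OpenAI-compatible endpoint
    apiKey: "ollama",                     // placeholder; Ollama ignores it
    model,
  };
}
```

An existing OpenAI provider can then be constructed with this config unchanged, which is why the commit needs no new HTTP client code.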
@@ -772,13 +772,19 @@ cp .env.example .env</pre>
 <div class="stepper-step">
 <span class="stepper-num">3</span>
 <h3>configure an llm provider</h3>
-<p>edit <code>.env</code> - fastest way to start is groq (free tier):</p>
-<pre>LLM_PROVIDER=groq
+<p>edit <code>.env</code> - fastest way is ollama (fully local, no api key):</p>
+<pre># local (no api key needed)
+ollama pull llama3.2
+LLM_PROVIDER=ollama
+
+# or cloud (free tier)
+LLM_PROVIDER=groq
 GROQ_API_KEY=gsk_your_key_here</pre>
 <table>
 <thead><tr><th>provider</th><th>cost</th><th>vision</th><th>notes</th></tr></thead>
 <tbody>
-<tr><td>groq</td><td>free</td><td>no</td><td>fastest to start</td></tr>
+<tr><td>ollama</td><td>free (local)</td><td>yes*</td><td>no api key, runs on your machine</td></tr>
+<tr><td>groq</td><td>free</td><td>no</td><td>fastest cloud option</td></tr>
 <tr><td>openrouter</td><td>per token</td><td>yes</td><td>200+ models</td></tr>
 <tr><td>openai</td><td>per token</td><td>yes</td><td>gpt-4o</td></tr>
 <tr><td>bedrock</td><td>per token</td><td>yes</td><td>claude on aws</td></tr>
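The local setup path from the hunk above amounts to the commands below. The model names come from the diff and commit message; the commands assume Ollama is already installed:

```shell
# pull a text model (from the diff) and, optionally, a vision model
# (the commit message names llava / llama3.2-vision for the screenshot fallback)
ollama pull llama3.2
ollama pull llama3.2-vision

# then in .env:
#   LLM_PROVIDER=ollama
```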
@@ -947,7 +953,7 @@ actions.ts 22 actions + adb retry
 skills.ts 6 multi-step skills
 workflow.ts workflow orchestration
 flow.ts yaml flow runner
-llm-providers.ts 4 providers + system prompt
+llm-providers.ts 5 providers + system prompt
 sanitizer.ts accessibility xml parser
 config.ts env config
 constants.ts keycodes, coordinates