Make groq the default provider, ollama as alternative

Groq (free tier, API key) is now the primary setup path.
Ollama (local, no key) is shown as the secondary option.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author: Sanju Sivalingam
Date: 2026-02-16 13:48:47 +05:30
Parent: cbd6eb6b1e
Commit: 158fe68bf9
3 changed files with 20 additions and 19 deletions


@@ -53,17 +53,19 @@ cp .env.example .env
> **note:** droidclaw requires [bun](https://bun.sh), not node/npm. it uses bun-specific apis (`Bun.spawnSync`, native `.env` loading) that don't exist in node.
-edit `.env` - fastest way to start is with ollama (fully local, no api key):
+edit `.env` - fastest way to start is with groq (free tier):
```bash
+LLM_PROVIDER=groq
+GROQ_API_KEY=gsk_your_key_here
+```
+or run fully local with [ollama](https://ollama.com) (no api key needed):
+```bash
-# option a: local with ollama (no api key needed)
ollama pull llama3.2
LLM_PROVIDER=ollama
OLLAMA_MODEL=llama3.2
-# option b: cloud with groq (free tier)
-LLM_PROVIDER=groq
-GROQ_API_KEY=gsk_your_key_here
```
connect your phone (usb debugging on):
@@ -206,8 +208,8 @@ name: Send WhatsApp Message
| provider | cost | vision | notes |
|---|---|---|---|
+| groq | free tier | no | fastest to start |
| ollama | free (local) | yes* | no api key, runs on your machine |
-| groq | free tier | no | fastest cloud option |
| openrouter | per token | yes | 200+ models |
| openai | per token | yes | gpt-4o |
| bedrock | per token | yes | claude on aws |
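The two setup paths this diff documents can be sanity-checked before launch. A minimal sketch, assuming only the variable names from the README (`LLM_PROVIDER`, `GROQ_API_KEY`, `OLLAMA_MODEL`); the `check_env` helper is hypothetical, not part of droidclaw:

```shell
# hypothetical validator for the two provider paths the README documents:
# groq (needs GROQ_API_KEY) and ollama (needs OLLAMA_MODEL).
check_env() {
  provider="$1" key="$2" model="$3"
  case "$provider" in
    groq)
      [ -n "$key" ] || { echo "groq needs GROQ_API_KEY"; return 1; }
      echo "ok: groq (cloud, free tier)" ;;
    ollama)
      [ -n "$model" ] || { echo "ollama needs OLLAMA_MODEL"; return 1; }
      echo "ok: ollama ($model, local)" ;;
    *)
      echo "unknown provider: $provider"; return 1 ;;
  esac
}

check_env groq gsk_your_key_here ""   # prints "ok: groq (cloud, free tier)"
check_env ollama "" llama3.2          # prints "ok: ollama (llama3.2, local)"
```

A check like this mirrors the table above: groq is the only default that needs a credential, while ollama only needs a pulled model.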


@@ -793,19 +793,18 @@ cp .env.example .env</pre>
<div class="stepper-step">
<span class="stepper-num">2</span>
<h3>configure an llm provider</h3>
-<p>edit <code>.env</code> - fastest way is ollama (fully local, no api key):</p>
-<pre># local (no api key needed)
-ollama pull llama3.2
-LLM_PROVIDER=ollama
+<p>edit <code>.env</code> - fastest way to start is groq (free tier):</p>
+<pre>LLM_PROVIDER=groq
+GROQ_API_KEY=gsk_your_key_here
-# or cloud (free tier)
-LLM_PROVIDER=groq
-GROQ_API_KEY=gsk_your_key_here</pre>
+# or run fully local with ollama (no api key)
+# ollama pull llama3.2
+# LLM_PROVIDER=ollama</pre>
<table>
<thead><tr><th>provider</th><th>cost</th><th>vision</th><th>notes</th></tr></thead>
<tbody>
+<tr><td>groq</td><td>free</td><td>no</td><td>fastest to start</td></tr>
<tr><td>ollama</td><td>free (local)</td><td>yes*</td><td>no api key, runs on your machine</td></tr>
-<tr><td>groq</td><td>free</td><td>no</td><td>fastest cloud option</td></tr>
<tr><td>openrouter</td><td>per token</td><td>yes</td><td>200+ models</td></tr>
<tr><td>openai</td><td>per token</td><td>yes</td><td>gpt-4o</td></tr>
<tr><td>bedrock</td><td>per token</td><td>yes</td><td>claude on aws</td></tr>


@@ -168,12 +168,12 @@ fi
printf "next steps:\n\n"
printf " ${BOLD}1.${RESET} configure an llm provider in ${CYAN}droidclaw/.env${RESET}\n\n"
-printf " ${DIM}# local with ollama (no api key needed)${RESET}\n"
-printf " ollama pull llama3.2\n"
-printf " ${DIM}# set in .env:${RESET} LLM_PROVIDER=ollama\n\n"
-printf " ${DIM}# or cloud with groq (free tier)${RESET}\n"
+printf " ${DIM}# groq (free tier, fastest to start)${RESET}\n"
printf " ${DIM}# set in .env:${RESET} LLM_PROVIDER=groq\n"
printf " ${DIM}# ${RESET} GROQ_API_KEY=gsk_...\n\n"
+printf " ${DIM}# or local with ollama (no api key needed)${RESET}\n"
+printf " ollama pull llama3.2\n"
+printf " ${DIM}# set in .env:${RESET} LLM_PROVIDER=ollama\n\n"
printf " ${BOLD}2.${RESET} connect your android phone (usb debugging on)\n\n"
printf " adb devices\n\n"
printf " ${BOLD}3.${RESET} run it\n\n"
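Step 2 of the printed next steps relies on `adb devices` listing an authorized phone. A hedged sketch of that check; `parse_adb_devices` is hypothetical and only counts lines whose second field is `device` (unauthorized phones report `unauthorized` instead):

```shell
# hypothetical helper: count authorized devices in `adb devices` output.
# the first line of that output is the "List of devices attached" header.
parse_adb_devices() {
  printf '%s\n' "$1" | awk 'NR > 1 && $2 == "device" { n++ } END { print n+0 }'
}

sample='List of devices attached
emulator-5554 device'
parse_adb_devices "$sample"   # prints 1
```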