diff --git a/README.md b/README.md
index 09acbe0..8f0e25e 100644
--- a/README.md
+++ b/README.md
@@ -53,17 +53,19 @@ cp .env.example .env
 
 > **note:** droidclaw requires [bun](https://bun.sh), not node/npm. it uses bun-specific apis (`Bun.spawnSync`, native `.env` loading) that don't exist in node.
 
-edit `.env` - fastest way to start is with ollama (fully local, no api key):
+edit `.env` - fastest way to start is with groq (free tier):
+
+```bash
+LLM_PROVIDER=groq
+GROQ_API_KEY=gsk_your_key_here
+```
+
+or run fully local with [ollama](https://ollama.com) (no api key needed):
 
 ```bash
-# option a: local with ollama (no api key needed)
 ollama pull llama3.2
 LLM_PROVIDER=ollama
 OLLAMA_MODEL=llama3.2
-
-# option b: cloud with groq (free tier)
-LLM_PROVIDER=groq
-GROQ_API_KEY=gsk_your_key_here
 ```
 
 connect your phone (usb debugging on):
@@ -206,8 +208,8 @@ name: Send WhatsApp Message
 
 | provider | cost | vision | notes |
 |---|---|---|---|
+| groq | free tier | no | fastest to start |
 | ollama | free (local) | yes* | no api key, runs on your machine |
-| groq | free tier | no | fastest cloud option |
 | openrouter | per token | yes | 200+ models |
 | openai | per token | yes | gpt-4o |
 | bedrock | per token | yes | claude on aws |
diff --git a/site/index.html b/site/index.html
index ed08697..5f713f9 100644
--- a/site/index.html
+++ b/site/index.html
@@ -793,19 +793,18 @@ cp .env.example .env
-edit .env - fastest way is ollama (fully local, no api key):
-# local (no api key needed)
-ollama pull llama3.2
-LLM_PROVIDER=ollama
+edit .env - fastest way to start is groq (free tier):
+LLM_PROVIDER=groq
+GROQ_API_KEY=gsk_your_key_here
-# or cloud (free tier)
-LLM_PROVIDER=groq
-GROQ_API_KEY=gsk_your_key_here
+# or run fully local with ollama (no api key)
+# ollama pull llama3.2
+# LLM_PROVIDER=ollama
 | provider | cost | vision | notes |
 |---|---|---|---|
+| groq | free | no | fastest to start |
 | ollama | free (local) | yes* | no api key, runs on your machine |
-| groq | free | no | fastest cloud option |
 | openrouter | per token | yes | 200+ models |
 | openai | per token | yes | gpt-4o |
 | bedrock | per token | yes | claude on aws |
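The provider swap this diff makes (groq first, ollama as the local fallback) can be sanity-checked with a small shell sketch. This is only an illustration: `required_key` is a hypothetical helper, not part of droidclaw; the only names taken from the diff are `LLM_PROVIDER`, `GROQ_API_KEY`, and `OLLAMA_MODEL`.

```shell
#!/bin/sh
# hypothetical helper: map the provider chosen in .env to the extra
# key the quickstart sets for it (key names taken from the diff above)
required_key() {
  case "$1" in
    groq)   echo "GROQ_API_KEY" ;;   # free tier, fastest to start
    ollama) echo "OLLAMA_MODEL" ;;   # fully local, no api key
    *)      echo "unknown" ; return 1 ;;
  esac
}

required_key groq     # → GROQ_API_KEY
required_key ollama   # → OLLAMA_MODEL
```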