Commit 7a0da43

docs: include Ollama in Supported AI Models table and add note about local models and CPU-only mode
1 parent 43c6574 commit 7a0da43

1 file changed: 3 additions & 0 deletions

File tree

README.md

```diff
@@ -416,6 +416,9 @@ You can also manually label emails without AI analysis:
 | Claude | claude-3-haiku | 200K tokens | ⚡⚡⚡ | Free/Paid |
 | Groq | llama-3.3-70b | 8K tokens | ⚡⚡⚡⚡ | Free |
 | Mistral | mistral-small-latest | 32K tokens | ⚡⚡⚡ | Free/Paid |
+| Ollama (Local) | llama3.2, phi, gemma, tinyllama (varies) | Varies by model (local) | ⚡ - ⚡⚡⚡ | Local (no external cost) |
+
+> **Note:** Ollama runs locally on your machine — no API key required. Models are downloaded via `ollama pull <model>` and can run in CPU-only mode for machines without GPU resources.
 
 ---
```
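The workflow described in the added note can be exercised from a shell. A minimal sketch, assuming the Ollama CLI is installed and on `PATH`; `llama3.2` is just one of the model names listed in the table, and the guard makes the script degrade gracefully on machines without Ollama:

```shell
# Hedged sketch: download and query a local model with Ollama.
# Assumes the `ollama` CLI is installed; prints a hint otherwise.
if command -v ollama >/dev/null 2>&1; then
  ollama pull llama3.2            # download model weights locally (no API key)
  ollama run llama3.2 "Hello"     # one-off prompt; falls back to CPU without a GPU
else
  echo "ollama CLI not found"
fi
```

On a CPU-only machine the smaller models in the table (e.g. `tinyllama`) will respond faster than the larger ones, which is what the ⚡ range in the speed column reflects.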
