Hey u/noir_dreams!
You asked about Ollama support for local email classification and a way past API limits. Great news: I just released a test build with exactly that!
Ollama + Locally Sorted AI? ✅ Done!
Get past API limits/tickets? ✅ Completely unlimited - runs locally
Model flexibility (gemma, gpt-oss-20b, etc.)? ✅ Any Ollama model works!
Based on testing, these work great:
- tinyllama (~1GB) - Super fast, good for quick sorting
- phi (~2.7GB) - Better accuracy, still reasonably fast
- gemma (~2.5GB) - Solid balance of quality and speed
- llama3.2 (~5GB) - High quality, best accuracy
- qwen (~4GB) - Another solid option
All run locally on your machine with zero rate limits. Classify unlimited emails!
- Install Ollama: https://ollama.com/download
- Pull your model: `ollama pull gemma`
- Download test XPI: AutoSort+ v1.2.3.1-ollama-test
- Open Thunderbird → Drag XPI into Add-ons page
- Settings → Provider: Ollama → Model: gemma
- Click "Test Connection"
- Done! Right-click emails → "Analyze with AI"
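For the curious, here's roughly what a classification call against Ollama's local `/api/generate` endpoint looks like. This is a minimal Python sketch, not the add-on's actual code: the folder names and prompt wording below are my own illustration.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_prompt(subject: str, sender: str, folders: list) -> str:
    """Assemble a single-label classification prompt (illustrative wording)."""
    return (
        "Classify this email into exactly one of these folders: "
        + ", ".join(folders) + ".\n"
        + f"From: {sender}\nSubject: {subject}\n"
        + "Answer with the folder name only."
    )

def classify(subject: str, sender: str, folders: list, model: str = "gemma") -> str:
    """Send one non-streaming generate request and return the model's answer."""
    payload = json.dumps({
        "model": model,
        "prompt": build_prompt(subject, sender, folders),
        "stream": False,  # one complete JSON response instead of a token stream
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"].strip()

# Example (requires a running Ollama daemon with the model pulled):
# classify("Your invoice #1234", "billing@example.com",
#          ["Work", "Finance", "Newsletters"])
```

Since everything goes to `localhost:11434`, nothing ever leaves your machine.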
- 🏠 100% Local - No data leaves your computer
- 🆓 No API Keys or Limits - Classify as many emails as you want
- 🔒 Privacy First - Your emails stay yours
- 💪 Your Choice - Use any model: gemma, phi, tinyllama, llama3.2, qwen, etc.
Check Ollama is running:
`curl http://localhost:11434/api/tags` should return your installed models
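If you'd rather script that health check, here's a small Python equivalent of the curl call (the helper name is my own; `/api/tags` and port 11434 are Ollama's defaults):

```python
import json
import urllib.error
import urllib.request

def ollama_models(base_url: str = "http://localhost:11434"):
    """Return installed model names, or None if the daemon isn't reachable."""
    try:
        with urllib.request.urlopen(base_url + "/api/tags", timeout=3) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError, ValueError):
        return None

models = ollama_models()
if models is None:
    print("Ollama daemon is not running")
else:
    print("Installed models:", models)
```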
Enable debug mode:
- Ctrl+Shift+J in Thunderbird
- Look for `[Ollama]` messages during analysis
- Post console errors in GitHub issues
Common fixes:
- Make sure the Ollama daemon is running
- Pull the model first (`ollama pull gemma`), then verify it's installed with `ollama list`
- Check the full debugging guide in the release notes
Specifically looking for:
- What model works best for your email?
- Performance on your system?
- Any bugs or errors?
Download: v1.2.3.1-ollama-test on GitHub
Full Guide: See release notes for detailed setup and debugging
Models Tested: tinyllama, phi, gemma, llama3.2, qwen
Looking forward to hearing your results! 🚀