A local-first, fully offline AI pantry manager that runs on your Mac from the command line.
No cloud. No subscriptions. No data leaving your machine.
Built with Python, SQLite, and a local LLM via Ollama.
You manage your pantry through plain CLI commands or by talking to an AI agent in plain English. The agent decides which action to take and calls structured tools to read or modify your pantry — it never writes to the database directly.
```
You: "what's expiring this week?"
Agent → calls list_expiring(days=7) → reads SQLite → replies in plain English
```
Everything runs locally. Your pantry data lives in data/pantry.db on your machine.
- macOS (Apple Silicon or Intel)
- Homebrew
- Internet connection only for the initial model download (~4 GB, one time)
```bash
brew install python@3.11 ollama
brew services start ollama
ollama pull mistral
```

`mistral` (7B, ~4.4 GB) is the default. Any Ollama-compatible model works. For a lighter option, try `phi3` (~2.3 GB).
```bash
git clone https://github.com/your-username/local-grocery-agent.git
cd local-grocery-agent
python3.11 -m venv .venv
source .venv/bin/activate
pip install -e .
```

Run `pantry --help` to list all commands.

### pantry add

Add a new item or update an existing one.
```
pantry add <name> <quantity> [--expiry YYYY-MM-DD]
```

```bash
pantry add eggs "12"
pantry add milk "2L"
pantry add butter "250g" --expiry 2026-04-01
```

- `name` and `quantity` are required.
- `--expiry` / `-e` is optional. Use ISO date format: `YYYY-MM-DD`.
- Adding an item that already exists updates it (upsert).
### pantry list

Display all items currently in your pantry.

```bash
pantry list
```

```
          Pantry
┏━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Item     ┃ Quantity ┃ Expires    ┃
┡━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━┩
│ butter   │ 250g     │ 2026-04-01 │
│ eggs     │ 12       │ —          │
│ milk     │ 2L       │ —          │
└──────────┴──────────┴────────────┘
```
### pantry expiring

Show items expiring within N days (default: 7).

```bash
pantry expiring
pantry expiring --days 14
pantry expiring -d 3
```

Items are sorted by expiry date. Only items with an expiry date set are shown.
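The filter-and-sort behavior above maps onto one SQL query, since ISO-8601 dates compare correctly as plain strings. A sketch, assuming a hypothetical `items` table (the actual query in `src/db/database.py` may differ; `today` is pinned here for a reproducible example):

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT PRIMARY KEY, quantity TEXT, expiry TEXT)")
conn.executemany(
    "INSERT INTO items VALUES (?, ?, ?)",
    [("butter", "250g", "2026-04-01"),
     ("eggs", "12", None),
     ("yogurt", "500g", "2026-03-20")],
)

def list_expiring(days: int = 7, today: date = date(2026, 3, 18)) -> list:
    # ISO dates sort lexicographically, so a string <= comparison is safe;
    # items with no expiry are excluded by IS NOT NULL.
    cutoff = (today + timedelta(days=days)).isoformat()
    return conn.execute(
        """SELECT name, quantity, expiry FROM items
           WHERE expiry IS NOT NULL AND expiry <= ?
           ORDER BY expiry""",
        (cutoff,),
    ).fetchall()

print(list_expiring(7))   # yogurt only; butter is more than 7 days out
print(list_expiring(14))  # yogurt first, then butter; eggs never appear
```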
### pantry remove

Remove an item from the pantry by name.

```
pantry remove <name>
```

```bash
pantry remove butter
```

Name matching is case-insensitive (`Milk` and `milk` are the same item).
### pantry suggest

Ask the agent to suggest recipes based on what's currently in your pantry. Requires Ollama to be running.

```bash
pantry suggest
```

### pantry chat

Start a conversational session with the AI agent.

```bash
pantry chat
pantry chat --model phi3
pantry chat -m llama3
```

The agent has two modes internally:
Fast path — explicit commands are matched by pattern and executed directly against the tool functions without an LLM round-trip. Instant response, no model needed.
| What you type | What happens |
|---|---|
| `list` / `show all` | calls `list_items()` directly |
| `expiring` / `expiring 3` / `expiring in 14 days` | calls `list_expiring(N)` directly |
| `remove <name>` / `delete <name>` | calls `delete_item(name)` directly |
| `add eggs 12` / `add milk 2L 2026-04-01` | calls `add_item(...)` directly |
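A fast path like the one tabled above is typically a list of compiled regexes tried in order. This is an abbreviated, hypothetical sketch (the real patterns live in the agent loop and the `add` pattern is omitted for brevity):

```python
import re

# Each entry pairs a pattern with a builder that turns the match
# into a (tool_name, args) pair for direct dispatch -- no LLM involved.
FAST_PATTERNS = [
    (re.compile(r"^(list|show all)$"),
     lambda m: ("list_items", {})),
    (re.compile(r"^expiring(?:\s+in)?\s*(\d+)?(?:\s+days?)?$"),
     lambda m: ("list_expiring", {"days": int(m.group(1) or 7)})),
    (re.compile(r"^(?:remove|delete)\s+(.+)$"),
     lambda m: ("delete_item", {"name": m.group(1)})),
]

def match_fast_path(text: str):
    """Return (tool_name, args) for a recognized command, else None."""
    text = text.strip().lower()
    for pattern, build in FAST_PATTERNS:
        m = pattern.match(text)
        if m:
            return build(m)
    return None  # fall through to the LLM path

print(match_fast_path("expiring in 14 days"))
print(match_fast_path("Remove Butter"))
print(match_fast_path("what can I cook?"))  # None -> goes to the LLM
```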
LLM path — natural language and anything not matched by the fast path goes to Ollama. The model decides which tool to call (or answers in plain text).
| What you type | What happens |
|---|---|
| `add tomato sauce 2 cans` | LLM parses the multi-word name and quantity |
| `what's about to expire?` | LLM calls `list_expiring` and summarizes |
| `suggest something I can cook` | LLM reads pantry and generates recipes |
| Any freeform question | LLM answers conversationally |
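The README notes that the model calls tools via a JSON protocol which the agent layer executes. One minimal way such a dispatcher can look, with a stub registry standing in for the real one in `src/tools/pantry_tools.py` (all names here are illustrative):

```python
import json

# Stub registry: tool name -> callable. The real tools read/write SQLite.
TOOLS = {
    "list_expiring": lambda days=7: f"items expiring within {days} days",
}

def dispatch(reply: str):
    """If the model's reply is a JSON tool call, execute it; else return the text."""
    try:
        call = json.loads(reply)
    except json.JSONDecodeError:
        return reply  # plain-text answer, no tool call
    tool = TOOLS.get(call.get("tool"))
    if tool is None:
        return reply  # JSON, but not a known tool call
    return tool(**call.get("args", {}))

print(dispatch('{"tool": "list_expiring", "args": {"days": 7}}'))
print(dispatch("You have plenty of eggs."))
```

Keeping execution in the agent layer is what guarantees the invariant stated later: the LLM never touches the database directly.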
```
You: list
Agent: Pantry items:
- eggs: 12 (exp: 2026-04-01)
- milk: 2L

You: add tomato sauce 2 cans expiring June 2026
Agent: I've added 2 cans of tomato sauce with an expiry date of June 1, 2026.

You: suggest something I can cook
Agent: With eggs, milk, and tomato sauce you could make shakshuka or pasta...
```

Type `exit` or press Ctrl-C to quit.
```
local-grocery-agent/
│
├── src/
│   ├── agent/
│   │   └── agent.py            # Agent loop — parses LLM output, dispatches tools
│   │
│   ├── cli/
│   │   └── main.py             # CLI commands (Typer + Rich)
│   │
│   ├── db/
│   │   └── database.py         # SQLite layer — schema, CRUD, all raw queries
│   │
│   ├── llm/
│   │   └── ollama_client.py    # HTTP wrapper for the Ollama API
│   │
│   └── tools/
│       └── pantry_tools.py     # Tool functions + registry + JSON schemas
│
├── data/
│   └── pantry.db               # SQLite database (created on first run, gitignored)
│
├── tests/
│   └── test_database.py        # DB layer smoke tests
│
├── pyproject.toml              # Project metadata and dependencies
├── LICENSE
└── README.md
```
| Layer | File | Does |
|---|---|---|
| CLI | `cli/main.py` | Parses flags, formats tables, runs the chat loop |
| Agent | `agent/agent.py` | Sends messages to the LLM, detects tool calls, dispatches |
| Tools | `tools/pantry_tools.py` | Implements each tool, formats results for the agent |
| Database | `db/database.py` | All SQL — no logic above raw CRUD |
| LLM | `llm/ollama_client.py` | HTTP to Ollama, nothing else |
The LLM never touches the database directly. It calls tools via a JSON protocol; the agent layer executes them.
Any model available in Ollama works. Pull it first, then pass it to `chat`:

```bash
ollama pull phi3
pantry chat --model phi3
```

To change the default, edit `DEFAULT_MODEL` in `src/llm/ollama_client.py`.
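Since `ollama_client.py` is described as nothing but an HTTP wrapper, its core can be sketched with the standard library alone. The endpoint and field names below follow Ollama's public `/api/generate` API; the wrapper itself, its function names, and the split into a separate payload builder are assumptions for illustration:

```python
import json
import urllib.request

DEFAULT_MODEL = "mistral"
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = DEFAULT_MODEL) -> dict:
    # stream=False asks Ollama for one JSON object instead of a token stream
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = DEFAULT_MODEL) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Ollama returns the completion in the "response" field
        return json.loads(resp.read())["response"]

# generate("What's for dinner?")  # requires `ollama serve` to be running
```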
```bash
source .venv/bin/activate
pip install pytest
pytest tests/
```

Contributions are welcome.
- Fork the repository
- Create a feature branch: `git checkout -b feat/your-feature`
- Commit your changes
- Open a pull request
Please keep pull requests focused. One feature or fix per PR.
- `pantry suggest` — dedicated recipe suggestion command
- Bulk import from a text or CSV file
- Voice input via Whisper.cpp
- iOS companion app reading the same SQLite database
MIT — see LICENSE.
Free to use, modify, and distribute. Attribution appreciated but not required.