Christopher is designed to run entirely offline once the binaries and models are in place. This runbook covers how to start Christopher without any internet connection, what happens when optional external services are unavailable, and how to verify that your installation is ready before going offline.
- Offline Readiness Checklist
- Offline Startup — Step by Step
- Fallback Behaviors
- Pre-Staging for Air-Gapped Machines
- Troubleshooting Without Internet
## Offline Readiness Checklist

Run through this checklist on a connected machine first, then verify it again on the target offline host.
- [ ] `whisper-cli` binary is compiled and present at `$WHISPER_BIN` (default: `~/whisper.cpp/build/bin/whisper-cli`)
- [ ] Whisper model file is downloaded and present at `$WHISPER_MODEL` (default: `~/whisper.cpp/models/ggml-base.en.bin`)
- [ ] `llama-cli` binary is compiled and present at `$LLAMA_BIN` (default: `~/llama.cpp/build/bin/llama-cli`)
- [ ] `llama-server` binary is compiled and present at `$LLAMA_SERVER_BIN` (default: `~/llama.cpp/build/bin/llama-server`)
- [ ] GGUF model file is downloaded for each profile you plan to use:
  - [ ] `llama32-3b` → `$LLAMA_MODEL_LLAMA32_3B`
  - [ ] `qwen25-3b` → `$LLAMA_MODEL_QWEN25_3B` (optional)
  - [ ] `mistral-7b` → `$LLAMA_MODEL_MISTRAL_7B` (optional)
- [ ] `piper` is installed and in `$PATH`
- [ ] Piper voice model is present at `$PIPER_MODEL` (default: `~/piper_models/en_US-libritts-high.onnx`)
- [ ] Piper config is present at `$PIPER_CONFIG` (default: `~/piper_models/en_US-libritts-high.onnx.json`)
- [ ] Python dependencies are installed: `pip install -r christopher_requirements.txt`
- [ ] `.env` is configured with correct local paths (copy from `.env.example`)
- [ ] `ffmpeg` is installed (`sudo apt install ffmpeg`)
- [ ] `alsa-utils` is installed for ALSA audio (`sudo apt install alsa-utils`)
- [ ] Microphone capture works — at least one of:
  - [ ] `parec` available and PulseAudio reachable (WSL2: see WSL2 PulseAudio note), or
  - [ ] `arecord` available and ALSA microphone present
- [ ] MCP server URLs in `.env` (`FUSIONAL_BI_URL`, `FUSIONAL_API_URL`, `FUSIONAL_CONTENT_URL`) are not required for core voice/chat operation — only needed if you use Christopher MCP tool integrations
- [ ] No API keys or cloud services are used by the core pipeline
Then run the pre-flight check:

```bash
bash preflight_voice.sh
```

All items must show `[OK]` before operating offline. `[WARN]` items that relate to optional features (e.g., MCP integrations) are acceptable for offline use.
## Offline Startup — Step by Step

### Step 1: Verify `.env` paths

```bash
cat .env
```

Verify that all paths resolve to local files, not remote URLs. Key variables:
| Variable | What to check |
|---|---|
| `LLAMA_SERVER_BIN` | Absolute path to a compiled binary on this machine |
| `LLAMA_MODEL_*` | Absolute paths to `.gguf` files already downloaded |
| `WHISPER_BIN` | Absolute path to compiled `whisper-cli` |
| `WHISPER_MODEL` | Absolute path to `.bin` model file |
| `PIPER_MODEL` | Absolute path to `.onnx` voice model |
| `LLAMA_SERVER_URL` | Should be `http://localhost:8080` (local) |
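For a quicker check than eyeballing `cat .env`, a small helper can confirm each variable resolves to an existing local file. This is a hypothetical sketch, not a repo script; the variable names follow the table above, and the `~`-expansion handles values stored unexpanded in `.env`.

```shell
# Hypothetical helper: verify that a .env variable names an existing local file.
check_env_path() {
  local v="$1" p
  p="${!v}"            # bash indirect expansion of the variable name
  p="${p/#\~/$HOME}"   # expand a leading ~ stored literally in .env
  if [ -n "$p" ] && [ -e "$p" ]; then
    echo "OK   $v -> $p"
  else
    echo "MISS $v -> ${p:-unset}"
    return 1
  fi
}

# Usage after loading .env (set -a; . ./.env; set +a):
#   for v in WHISPER_BIN WHISPER_MODEL LLAMA_SERVER_BIN PIPER_MODEL PIPER_CONFIG; do
#     check_env_path "$v"
#   done
```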
### Step 2: Run the pre-flight check

```bash
bash preflight_voice.sh
```

Expected output when offline-ready:

```
== Binaries + Models ==
[OK] whisper-cli found: /home/user/whisper.cpp/build/bin/whisper-cli
[OK] Whisper model found: /home/user/whisper.cpp/models/ggml-base.en.bin
[OK] llama-cli found at expected path
[OK] piper found in PATH
[OK] ffmpeg found
== Audio Input ==
[OK] Mic capture works via PulseAudio (parec)   ← or ALSA (arecord)
== ASR Smoke Test ==
[OK] Whisper ASR executed successfully
== Audio Output ==
[OK] Playback path works via paplay   ← or ffplay
== Summary ==
Pass: 7 | Warn: 0 | Fail: 0
```
If any `[FAIL]` items appear, resolve them before proceeding. See Troubleshooting Without Internet.
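For unattended setups, the report can be captured and gated on in a script. A sketch, assuming only that `preflight_voice.sh` (the checker shipped with the repo) prints `[FAIL]` lines on failure:

```shell
# Capture the preflight report, then refuse to continue if any [FAIL] appears.
bash preflight_voice.sh > /tmp/preflight.log 2>&1 || true
if grep -q '\[FAIL\]' /tmp/preflight.log; then
  echo "Resolve [FAIL] items before going offline" >&2
else
  echo "No [FAIL] items found"
fi
```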
### Step 3: Start `llama-server`

If you are not using the `llama-server.service` systemd unit:
```bash
# Load path variables from .env
export $(grep -v '^#' .env | xargs)

# Start the server with your chosen model profile
~/llama.cpp/build/bin/llama-server \
  --model "$LLAMA_MODEL_LLAMA32_3B" \
  --n-gpu-layers 99 \
  --ctx-size 2048 \
  --port 8080 &

# Wait for it to become ready
sleep 3 && curl -sf http://localhost:8080/health | grep -q ok && echo "Server ready" || echo "Server not yet ready"
```

If using systemd:
```bash
sudo systemctl start llama-server
sudo systemctl status llama-server
```

### Step 4: Launch Christopher

Voice mode (full pipeline — microphone + ASR + LLM + TTS):
```bash
python3 christopher.py --voice
```

Text / chat mode (no microphone required — safe fallback for headless machines):
```bash
python3 christopher.py --chat
```

Override the model profile for this run:
```bash
python3 christopher.py --voice --model-profile qwen25-3b
```

## Fallback Behaviors

The following tables describe what Christopher does when each external dependency is unavailable. Every fallback listed here is observable and can be tested manually.
### Audio Input (Microphone)

| Scenario | What Christopher does | How to test |
|---|---|---|
| PulseAudio unreachable (WSL2) | Automatically falls back to ALSA `arecord` | Stop PulseAudio on Windows; run `bash preflight_voice.sh` and confirm `[OK] Mic capture works via ALSA` |
| `parec` not installed | Skips PulseAudio and uses `arecord` directly | `sudo apt remove pulseaudio-utils`; run preflight |
| Both `parec` and `arecord` unavailable | Reports `[FAIL] No working microphone capture backend found`; voice mode cannot proceed | Use `--chat` mode as the fallback |
| No microphone hardware | Voice mode fails at recording step | Use `python3 christopher.py --chat` instead |
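The fallback order in the table can be sketched as a small backend selector. This is illustrative only: the capture flags (16 kHz mono s16le, the format Whisper models expect) are assumptions, not the actual `christopher.py` source.

```shell
# Pick a capture backend the way the table describes: PulseAudio first
# (parec present and server reachable), then ALSA (arecord), else none.
pick_capture_backend() {
  if command -v parec >/dev/null 2>&1 && pactl info >/dev/null 2>&1; then
    echo pulse
  elif command -v arecord >/dev/null 2>&1; then
    echo alsa
  else
    echo none
  fi
}

# record_clip OUTFILE SECONDS: capture raw 16 kHz mono s16le PCM.
record_clip() {
  case "$(pick_capture_backend)" in
    pulse) timeout "$2" parec --rate=16000 --channels=1 --format=s16le > "$1" ;;
    alsa)  arecord -d "$2" -f S16_LE -r 16000 -c 1 -t raw "$1" ;;
    none)  echo "[FAIL] No working microphone capture backend found" >&2; return 1 ;;
  esac
}
```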
### Audio Output

| Scenario | What Christopher does | How to test |
|---|---|---|
| `paplay` unavailable | Falls back to `ffplay` for audio output | `which paplay` returns nothing; run preflight |
| Both `paplay` and `ffplay` unavailable | Reports `[WARN] No playback tool found`; TTS audio cannot be played | Response text is still printed to terminal |
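The same chain, sketched as a helper. Illustrative only; the `ffplay` flags shown (`-nodisp -autoexit`) are common choices for headless playback, not taken from the source.

```shell
# Play a WAV via paplay if available, else ffplay, else warn and fail.
play_wav() {
  if command -v paplay >/dev/null 2>&1; then
    paplay "$1"
  elif command -v ffplay >/dev/null 2>&1; then
    ffplay -nodisp -autoexit -loglevel quiet "$1"
  else
    echo "[WARN] No playback tool found; TTS audio cannot be played" >&2
    return 1
  fi
}
```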
### LLM Server

| Scenario | What Christopher does | How to test |
|---|---|---|
| `llama-server` not running | `christopher.py` prints a connection error to stderr and exits | Stop the server; run `python3 christopher.py --chat` and observe the error |
| Wrong `LLAMA_SERVER_URL` in `.env` | Same connection error | Set URL to an unused port; observe error |
| Model file missing or wrong path | `llama-server` fails to start; Christopher reports a server error | Point `LLAMA_MODEL_*` to a non-existent path |

Recovery: Ensure `llama-server` is started (Step 3 above) before launching Christopher. The server must be running for both `--voice` and `--chat` modes.
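A small wait loop can make this startup ordering less fragile. A sketch: `llama-server` does expose a `/health` endpoint, but the default URL and the 30-attempt timeout here are assumptions to adjust for your setup.

```shell
# Poll the server's /health endpoint until it answers, up to a deadline.
wait_for_server() {
  local url="${1:-http://localhost:8080/health}" tries="${2:-30}" i
  for i in $(seq "$tries"); do
    if curl -sf "$url" >/dev/null 2>&1; then
      echo "server ready after ${i} attempt(s)"
      return 0
    fi
    sleep 1
  done
  echo "server not reachable at $url" >&2
  return 1
}
```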
### MCP Integrations (FusionAL)

| Scenario | What Christopher does | How to test |
|---|---|---|
| FusionAL MCP servers unreachable | MCP tool calls fail with a connection error; core voice/chat pipeline is unaffected | Set `FUSIONAL_BI_URL` to a dead port; run a voice turn that does not invoke tools |
| `FUSIONAL_API_KEY` not set or wrong | MCP requests return 401/403; core pipeline is unaffected | Use a wrong key value; observe log output |

The core pipeline (ASR → LLM → TTS) has no dependency on FusionAL MCP servers. You can operate fully offline without setting these values.
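That independence can be demonstrated directly by probing each endpoint with a short timeout. A sketch; the fallback port is arbitrary and only there so the probe fails fast when `FUSIONAL_BI_URL` is unset.

```shell
# Probe an endpoint with a short timeout; a "down" MCP endpoint has no
# bearing on the core LLM health check.
probe() { curl -sf --max-time 2 "$1" >/dev/null 2>&1 && echo up || echo down; }

echo "core LLM : $(probe http://localhost:8080/health)"
echo "MCP (BI) : $(probe "${FUSIONAL_BI_URL:-http://localhost:59990}")"
```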
### Audio Conversion (ffmpeg / sox)

| Scenario | What Christopher does | How to test |
|---|---|---|
| `ffmpeg` missing, `sox` present | Uses `sox` for raw-to-wav conversion | Remove `ffmpeg`; install `sox`; run preflight ASR smoke test |
| Both `ffmpeg` and `sox` missing | Skips wav conversion; Whisper may receive a raw PCM file instead of a wav | Remove both; run preflight and observe `[WARN]` |
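This conversion fallback can be sketched as follows. Illustrative only: the 16 kHz mono s16le input format matches what Whisper expects, but the exact flags are assumptions rather than the repo's implementation.

```shell
# Convert raw 16 kHz mono s16le PCM to WAV: ffmpeg first, then sox,
# else pass the raw file through unchanged.
raw_to_wav() {
  if command -v ffmpeg >/dev/null 2>&1; then
    ffmpeg -y -loglevel error -f s16le -ar 16000 -ac 1 -i "$1" "$2"
  elif command -v sox >/dev/null 2>&1; then
    sox -t raw -r 16000 -e signed -b 16 -c 1 "$1" "$2"
  else
    cp "$1" "$2"   # no converter available; Whisper receives raw PCM
  fi
}
```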
## Pre-Staging for Air-Gapped Machines

If the target machine has no internet access at all, download all assets on a connected machine first and transfer them.
```
whisper.cpp/                     ← compiled source tree
  build/bin/whisper-cli
  models/ggml-base.en.bin
llama.cpp/                       ← compiled source tree
  build/bin/llama-cli
  build/bin/llama-server
models/                          ← GGUF model files
  Llama-3.2-3B-Instruct-Q4_K_M.gguf
  Qwen2.5-3B-Instruct-Q4_K_M.gguf        ← optional
  mistral-7b-instruct-v0.2.Q4_K_M.gguf   ← optional
piper_models/
  en_US-libritts-high.onnx
  en_US-libritts-high.onnx.json
Christopher-AI/                  ← this repo
  .env                           ← configured with local paths
  christopher_requirements.txt
  (+ all other repo files)
```
Python packages (offline wheel cache):

```bash
# christopher_requirements.txt covers: requests, python-dotenv, fastapi, uvicorn
# piper-tts is installed separately (not in christopher_requirements.txt)
pip download -r christopher_requirements.txt -d /tmp/pip-cache
pip download piper-tts -d /tmp/pip-cache
```
```bash
# On the air-gapped machine:

# Install orchestrator dependencies (requests, python-dotenv, fastapi, uvicorn)
pip install --no-index --find-links /path/to/pip-cache -r christopher_requirements.txt

# Install piper-tts separately (it is not included in christopher_requirements.txt)
pip install --no-index --find-links /path/to/pip-cache piper-tts

# Place binaries and models at the paths your .env references, then verify:
bash preflight_voice.sh
```

Note: Compiled binaries must match the CPU architecture and OS of the target machine. Recompile from source on the target if the architectures differ.
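One way to package the staged tree for transfer. A sketch: the archive names are illustrative, and the example assumes you run it from the directory holding the layout above.

```shell
# make_bundle OUT.tar.gz DIR...: archive the staged directories for transfer.
make_bundle() {
  tar -czf "$1" "${@:2}"
}

# Example, run from the directory holding the layout above:
#   make_bundle christopher-offline.tar.gz \
#     whisper.cpp llama.cpp models piper_models Christopher-AI
# Include the wheel cache too:
#   tar -czf pip-cache.tar.gz -C /tmp pip-cache
# On the target: tar -xzf christopher-offline.tar.gz -C "$HOME"
```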
### WSL2 PulseAudio Note

In WSL2, PulseAudio runs on the Windows host — it is not an internet service and is fully available offline. The only requirement is that PulseAudio is started on Windows before launching Christopher in WSL2:
- Start PulseAudio on Windows using your installation's start script. Example (adjust path to match your installation): `C:\PulseAudio\bin\pulseaudio.exe` or the bundled `start-pulseaudio.cmd`
- In WSL2, export the server address (changes on each reboot):
```bash
export PULSE_SERVER=tcp:$(awk '/^nameserver / {print $2; exit}' /etc/resolv.conf):4713
```

## Troubleshooting Without Internet

### `[FAIL] whisper-cli missing or not executable`

```
[FAIL] whisper-cli missing or not executable: /home/user/whisper.cpp/build/bin/whisper-cli
```
Fix (online): Run `bash pilot_install.sh` to rebuild and download.

Fix (offline): Transfer the compiled binary from another machine with the same architecture, or rebuild from source using locally cached source tarballs.
### `[WARN] piper not found in PATH`

Fix (online): `pip install piper-tts`

Fix (offline): `pip install --no-index --find-links /path/to/pip-cache piper-tts`

Text/chat mode still works without piper. Voice mode will not produce speech output.
### `llama-server` fails to start

Check the server log directly by running it in the foreground:

```bash
~/llama.cpp/build/bin/llama-server \
  --model "$LLAMA_MODEL_LLAMA32_3B" \
  --n-gpu-layers 99 \
  --ctx-size 2048 \
  --port 8080
```

Common causes:
| Symptom | Fix |
|---|---|
| `model file not found` | Verify `LLAMA_MODEL_*` path in `.env` |
| `CUDA error: no kernel image available` | Recompile llama.cpp with matching CUDA version: `cmake .. -DGGML_CUDA=ON` |
| `out of memory` | Reduce `--n-gpu-layers` or `--ctx-size` |
| `address already in use` | Another process owns port 8080 — `lsof -i :8080` and stop it |
### `[FAIL] No working microphone capture backend found`

Immediate fallback: Use text mode — no microphone required:

```bash
python3 christopher.py --chat
```

WSL2 fix: Start PulseAudio on Windows and export `PULSE_SERVER` (see above).

Bare-metal / VM fix: Confirm ALSA sees the microphone:

```bash
arecord -l                                          # list capture devices
arecord -d 3 /tmp/test.wav && aplay /tmp/test.wav   # 3-second record/playback test
```

## Related Documents

- `pilot-setup-guide.md` — full install walkthrough and host requirements
- `preflight_voice.sh` — automated pipeline health check
- `.env.example` — all configuration variables with documentation
- `README.md` — project overview, GPU tuning, model profiles