Local-first AI meeting copilot for macOS.
Capture meeting audio, generate high-quality transcripts with speaker labels, create structured summaries, and chat with transcript-grounded citations.
Download latest DMG · Permissions setup guide
MinuteWave is built for people who want native macOS meeting notes without a browser-first workflow:
- Local transcription path with FluidAudio (Parakeet v3 + offline diarization).
- Cloud transcription options with Azure OpenAI or OpenAI.
- Transcript-aware AI chat and summary generation.
- Session persistence, export, and optional database encryption.
- Native SwiftUI UI with onboarding, settings, and update checks.
| Area | What you get |
|---|---|
| Audio capture | Microphone only or Microphone + system audio capture modes |
| Transcription | Local FluidAudio, Azure OpenAI Whisper, or OpenAI Whisper |
| Speaker labels | Offline diarization in local FluidAudio mode |
| Summary | Structured markdown summary generation from transcript content |
| Transcript chat | Q&A with lexical retrieval and timestamp citations |
| Export | Markdown, TXT, and JSON export per session |
| Storage | Local SQLite session store + Keychain-backed secrets |
| Security | Optional SQLCipher encryption mode with migration support |
| UX | EN/NL app localization, onboarding wizard, update checker |
| Capability | Local FluidAudio | Azure OpenAI | OpenAI | LM Studio |
|---|---|---|---|---|
| Transcription | Yes | Yes | Yes | No |
| Summary | No | Yes | Yes | Yes |
| Transcript chat | No | Yes | Yes | Yes |
| API key required | No | Yes | Yes | Optional |
```mermaid
flowchart LR
    A["Microphone"] --> B["HybridAudioCaptureEngine"]
    C["System audio (optional)"] --> B
    B --> D["Transcription provider"]
    D --> E["SQLite session repository"]
    E --> F["Summary provider"]
    E --> G["Transcript chat + citations"]
    E --> H["Export service (MD/TXT/JSON)"]
    subgraph Providers
        D1["Local FluidAudio"]
        D2["Azure OpenAI"]
        D3["OpenAI"]
    end
    subgraph SC["Summary/Chat"]
        S1["Azure OpenAI"]
        S2["OpenAI"]
        S3["LM Studio"]
    end
    D --- D1
    D --- D2
    D --- D3
    F --- S1
    F --- S2
    F --- S3
    G --- S1
    G --- S2
    G --- S3
```
- macOS 14+
- Apple Silicon Mac
- 16 GB RAM minimum (enforced by onboarding checks)
- Internet connection for cloud providers and the first local model download
- Download from the latest release.
- Open `MinuteWave-macOS.dmg`.
- Drag `MinuteWave.app` to `Applications`.
- Start with right-click -> `Open` on first run.
> [!IMPORTANT]
> If Gatekeeper blocks launch, use one of these options:
> - System Settings -> Privacy & Security -> `Open Anyway`
> - `xattr -dr com.apple.quarantine "/Applications/MinuteWave.app"`
- Complete onboarding permissions (`Microphone`, and optionally `Screen Recording` for system audio capture).
- Choose your transcription provider (`Local (FluidAudio)` for a local-first workflow, or `Azure`/`OpenAI` for cloud transcription).
- (Optional) Configure a summary/chat provider (`Azure`, `OpenAI`, or `LM Studio`).
- Create a session, start recording, stop recording.
- Review the transcript, generate a summary, ask follow-up questions, and export results.
- Session data is stored locally in `~/Library/Application Support/MinuteWave`.
- API keys are stored in the macOS Keychain.
- Database encryption can be enabled in Settings when the SQLCipher runtime is available.
- Encryption migrations (plaintext <-> SQLCipher) are built in.
- Local FluidAudio mode keeps inference on-device after model preparation.
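One way to sanity-check whether encryption is actually in effect: a plaintext SQLite file begins with the ASCII header `SQLite format 3`, while a SQLCipher-encrypted file looks like random bytes. A minimal sketch (the database filename below is an assumption; check the actual file name under the support directory):

```shell
# Hypothetical database path -- the real file name under the
# MinuteWave support directory may differ.
DB="$HOME/Library/Application Support/MinuteWave/sessions.sqlite"

# Plaintext SQLite files start with the header "SQLite format 3";
# SQLCipher-encrypted files do not.
if head -c 15 "$DB" | grep -q "SQLite format 3"; then
  echo "plaintext"
else
  echo "encrypted (or not a SQLite file)"
fi
```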
- Xcode (latest stable)
- Swift 6 toolchain
- Homebrew packages: `brew install sqlcipher create-dmg`

```shell
swift build
swift test
swift run MinuteWave
```

For reliable macOS permission prompts, run as a real app bundle:

```shell
./scripts/build_dev_app_bundle.sh debug
open ".build/AppBundle/MinuteWave.app"
```

Build a DMG:

```shell
./scripts/build_dmg.sh release
```

Optional signing:

```shell
security find-identity -v -p codesigning
SIGNING_IDENTITY="Apple Development: Your Name (TEAMID)" ./scripts/build_dev_app_bundle.sh release
```

- Release workflow: `.github/workflows/release.yml`
- Tag format: `vMAJOR.MINOR.PATCH`
- Release artifacts: `MinuteWave-macOS.dmg`, `MinuteWave-macOS.dmg.sha256`
- Permission issues: see `docs/XcodePermissionsSetup.md`.
- Reset TCC permissions: `./scripts/reset_tcc_permissions.sh`
- LM Studio model not detected: refresh status in Settings -> `AI` tab and ensure at least one model is loaded.
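If the status refresh keeps failing, you can query LM Studio's local server directly. By default it exposes an OpenAI-compatible API on port 1234 (the port is configurable in LM Studio, so adjust if you changed it):

```shell
# Lists the models currently available in LM Studio's local server.
# An empty "data" array means no model is loaded yet.
curl -s http://localhost:1234/v1/models
```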
MIT License. See LICENSE.