Voice command system that processes natural language to create calendar events, manage todos, and more.
See it in action: Day in the Life - Building Voicexec
Features:

- Voice-to-text transcription using OpenAI Whisper
- Natural language intent processing with a local LLM via LM Studio (see the sketch below)
- Google Calendar integration
- Todo list management
- Real-time audio recording in the browser
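The intent-processing step talks to LM Studio's OpenAI-compatible API (the `LM_STUDIO_BASE_URL` described under configuration). A minimal sketch of what that call might look like; the `Intent` shape, the prompt, and the `extractIntent` name are illustrative, not the project's actual code:

```typescript
// Hypothetical sketch: classify a transcript into an intent via LM Studio's
// OpenAI-compatible /chat/completions endpoint.
const LM_STUDIO_BASE_URL = process.env.LM_STUDIO_BASE_URL ?? "http://127.0.0.1:1234/v1";

interface Intent {
  action: "create_event" | "add_todo" | "unknown";
  details: string;
}

async function extractIntent(transcript: string): Promise<Intent> {
  const res = await fetch(`${LM_STUDIO_BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local-model", // LM Studio serves whichever model is currently loaded
      temperature: 0,
      messages: [
        {
          role: "system",
          content:
            'Reply with JSON only: {"action": "create_event" | "add_todo" | "unknown", "details": "<free text>"}',
        },
        { role: "user", content: transcript },
      ],
    }),
  });
  const data = await res.json();
  // Assumes the model followed the JSON-only instruction above.
  return JSON.parse(data.choices[0].message.content) as Intent;
}
```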
Project layout:

```
voicexec/
├── backend/          # Express.js API (TypeScript)
├── frontend/         # Next.js web client (React)
└── whisper-service/  # Python microservice for audio transcription
```
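One way the pieces connect: the backend receives audio from the browser and forwards it to the whisper-service for transcription. A rough sketch under assumed names (the `/api/transcribe` route, `audio` field, and the Python service's `/transcribe` endpoint are all hypothetical; only the `WHISPER_SERVICE_URL` default comes from the configuration below):

```typescript
// Hypothetical sketch of the backend -> whisper-service hop (Express + multer).
import express from "express";
import multer from "multer";

const app = express();
const upload = multer({ storage: multer.memoryStorage() });
const WHISPER_SERVICE_URL = process.env.WHISPER_SERVICE_URL ?? "http://localhost:8001";

app.post("/api/transcribe", upload.single("audio"), async (req, res) => {
  if (!req.file) return res.status(400).json({ error: "no audio uploaded" });

  // Re-wrap the uploaded bytes as multipart form data for the Python service.
  const form = new FormData();
  form.append("file", new Blob([req.file.buffer]), req.file.originalname);

  const upstream = await fetch(`${WHISPER_SERVICE_URL}/transcribe`, {
    method: "POST",
    body: form,
  });
  res.json(await upstream.json()); // e.g. { text: "schedule lunch tomorrow at noon" }
});

app.listen(4000); // port is illustrative
```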
Prerequisites:

- Node.js 18+
- Python 3.10+
- PostgreSQL
- LM Studio running locally
- Google Cloud Console project (for Calendar API)
Setup:

Backend:

```bash
cd backend
npm install
cp .env.example .env
# Edit .env with your database and service URLs
npm run db:generate
npm run db:migrate
npm run dev
```

Whisper service:

```bash
cd whisper-service
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -r requirements.txt
python main.py
```

Frontend:

```bash
cd frontend
npm install
npm run dev
```

Open http://localhost:3000 in your browser.
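In the browser, recording is handled with the MediaRecorder API. A minimal sketch of that capture step; the `/api/transcribe` endpoint and `audio` field name are assumptions matching the hop sketched earlier:

```typescript
// Hypothetical sketch: record a short clip and POST it to the backend.
async function recordAndSend(ms = 5000): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks: BlobPart[] = [];

  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = async () => {
    stream.getTracks().forEach((t) => t.stop()); // release the microphone
    const form = new FormData();
    form.append("audio", new Blob(chunks, { type: "audio/webm" }), "clip.webm");
    await fetch("/api/transcribe", { method: "POST", body: form });
  };

  recorder.start();
  setTimeout(() => recorder.stop(), ms); // stop after a fixed window
}
```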
See `backend/.env.example` for the required environment variables:

- `DATABASE_URL` - PostgreSQL connection string
- `LM_STUDIO_BASE_URL` - LM Studio API endpoint (default: `http://127.0.0.1:1234/v1`)
- `WHISPER_SERVICE_URL` - Whisper service endpoint (default: `http://localhost:8001`)
Google Calendar setup:

- Create a project in Google Cloud Console
- Enable the Google Calendar API
- Create OAuth 2.0 credentials (Desktop app)
- Download `credentials.json` to the `backend/` directory
- Run the backend and visit `/api/auth/google/login` to authorize (see the sketch below)
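Once authorized, creating an event with the googleapis Node client looks roughly like this. A sketch only: the credentials, token handling, and event values are placeholders, and the project's actual calendar code may differ:

```typescript
import { google } from "googleapis";

// Placeholder credentials; in practice these come from credentials.json
// and a token stored after the OAuth flow above.
const auth = new google.auth.OAuth2("CLIENT_ID", "CLIENT_SECRET", "http://localhost/oauth2callback");
auth.setCredentials({ refresh_token: "STORED_REFRESH_TOKEN" });

const calendar = google.calendar({ version: "v3", auth });

export async function createEvent(): Promise<void> {
  await calendar.events.insert({
    calendarId: "primary",
    requestBody: {
      summary: "Lunch with Sam",
      start: { dateTime: "2025-06-02T12:00:00-07:00" },
      end: { dateTime: "2025-06-02T13:00:00-07:00" },
    },
  });
}
```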
Each service can be run independently; see the README in each service's directory for details.
License: MIT