Releases: friday-james/let-claude-code
v0.4.2: Multi-AI Consultation Loop
Features
- 🤖 Multi-AI Consultation Loop: Gemini now reviews Claude's work after each iteration and provides feedback for the next run
- 🔄 `--loop-until-finish` flag: New loop mode that respects completion signals from Gemini or Claude
- ✅ Improved completion detection: Loop now properly detects "Goal achieved!" in Claude's responses
Improvements
- Better Gemini prompt format using structured `GOAL_ACHIEVED`/`CONTINUE`/`NEXT_FOCUS` keys
- Strip markdown formatting for more reliable "Goal achieved!" detection
- Capture Claude's actual output text instead of just git log summary
- Gemini feedback now guides subsequent iterations
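The structured review format above lends itself to simple line-based parsing. A minimal sketch — the key names come from these notes, but the exact reply format and the helper name are assumptions:

```python
import re

def parse_review(reply: str) -> dict:
    """Parse a structured Gemini review into a dict.

    Expects lines such as:
        GOAL_ACHIEVED: false
        CONTINUE: true
        NEXT_FOCUS: tighten error handling in the CLI parser
    """
    result = {}
    for line in reply.splitlines():
        match = re.match(r"\s*(GOAL_ACHIEVED|CONTINUE|NEXT_FOCUS)\s*:\s*(.+)", line)
        if match:
            key, value = match.groups()
            result[key] = value.strip()
    return result
```

Fixed keys like these are easier to detect reliably than free-form prose, which is presumably why the prompt format moved to them.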
Bug Fixes
- Fixed `--loop-until-finish` not detecting task completion
- Fixed summary not containing Claude's actual response text
What's Changed
- `--loop`: Runs indefinitely, ignoring completion signals
- `--loop-until-finish`: Runs until Gemini/Claude determines the task is complete
- Gemini reviews each iteration and decides whether to continue or provide specific focus areas
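The completion check described above — strip markdown formatting, then look for the phrase — might be sketched like this (the function name and the exact stripping rule are illustrative):

```python
import re

def goal_achieved(claude_output: str) -> bool:
    """Return True if Claude's response signals completion.

    Strips common markdown emphasis characters first, so the phrase is
    detected whether Claude writes "Goal achieved!", "**Goal achieved!**",
    or "`Goal achieved!`".
    """
    plain = re.sub(r"[*_`~]", "", claude_output)
    return "Goal achieved!" in plain
```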
Full Changelog: v0.4.1...v0.4.2
v0.4.1
Bug fixes:
- Fix stdin handling to allow user input during Claude execution
- Prevent resource leak by properly closing stdin file descriptor
- Clean dist directory before building in CI
This release fixes issues with auto-accept mode and ensures proper cleanup in the publish workflow.
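The stdin fix boils down to writing the prompt once and then closing the pipe, so the child process neither blocks waiting for more input nor leaks a file descriptor. A sketch using Python's `subprocess`, with `cat` standing in for the Claude CLI:

```python
import subprocess

# Write the prompt once, then close stdin so the child never blocks waiting
# for more input and the pipe's file descriptor is not leaked.
# `cat` stands in for the Claude CLI here.
proc = subprocess.Popen(
    ["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
)
out, _ = proc.communicate("fix the failing tests\n")  # writes, closes stdin, waits
```

`communicate()` handles the write-close-reap sequence in one call, which is the usual way to avoid both the hang and the leak.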
v0.4.0 - Cost Control with Model Selection
🎉 What's New in v0.4.0
Cost Control & Model Selection
Take control of your AI costs! Choose from cost-effective to premium models.
Breaking Changes
- `--auto-gemini-answer` renamed to `--auto-answer` (clearer naming)
New Features
Multiple AI Models:
- OpenAI: `gpt-4o-mini` (cheapest), `gpt-4o`, `gpt-5.2` (premium reasoning)
- Gemini: `gemini-1.5-flash` (cheapest), `gemini-1.5-pro`, `gemini-3-pro-preview`
Cost-Effective Defaults:
- Auto mode now uses `gpt-4o-mini` or `gemini-1.5-flash` (cheapest options)
- Save ~95% on API costs compared to premium models
Model Selection:
```
# Cost-effective (default)
cook --loop --auto-answer

# Choose specific model
cook --loop --auto-answer --ai-model gpt-4o-mini       # $0.15/$0.60 per 1M tokens
cook --loop --auto-answer --ai-model gpt-4o            # $2.50/$10 per 1M tokens
cook --loop --auto-answer --ai-model gpt-5.2           # $10/$40 per 1M tokens (max reasoning)
cook --loop --auto-answer --ai-model gemini-1.5-flash  # $0.075/$0.30 per 1M tokens (cheapest)
cook --loop --auto-answer --ai-model gemini-1.5-pro    # $1.25/$5 per 1M tokens
```

Cost Comparison
| Model | Input | Output | Use Case |
|---|---|---|---|
| gemini-1.5-flash | $0.075/1M | $0.30/1M | 💰 Cheapest option |
| gpt-4o-mini | $0.15/1M | $0.60/1M | 💰 Cheapest OpenAI |
| gemini-1.5-pro | $1.25/1M | $5/1M | ⚖️ Balanced |
| gpt-4o | $2.50/1M | $10/1M | ⚖️ Balanced OpenAI |
| gpt-5.2 | $10/1M | $40/1M | 🧠 Max reasoning |
Documentation
- Added comprehensive cost comparison
- Updated all examples with new flag name
- Included pricing information
Full Changelog: v0.3.0...v0.4.0
Release v0.3.0 - GPT-5.2 & Gemini 3 Pro Support
🚀 Major New Features
Multi-AI Support
We've added support for GPT-5.2 (OpenAI's latest flagship model) and upgraded to Gemini 3 Pro Preview (Google's most intelligent model), giving you access to the most advanced AI reasoning available!
🤖 GPT-5.2 Integration
- Model: `gpt-5.2` - OpenAI's best general-purpose model
- Reasoning: `xhigh` (maximum reasoning effort)
- Output: 65,536 max tokens
- Best for: Complex reasoning, broad world knowledge, multi-step agentic tasks
🧠 Gemini 3 Pro Preview
- Model: `gemini-3-pro-preview` - Google's most intelligent multimodal model
- Output: 65,536 max tokens
- Best for: State-of-the-art reasoning and multimodal understanding
- Automatic fallback: Used if GPT-5 fails or no OpenAI key available
⚡ Smart AI Selection
The automator now intelligently selects the best AI model:
1. Try GPT-5.2 first (if `OPENAI_API_KEY` is set)
2. Fall back to Gemini 3 Pro (if `GEMINI_API_KEY` is set)
3. Fall back to auto-answering "y" (if no keys are available)
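The selection order can be sketched as a simple key-based fallback chain (the function name and return values are illustrative, not the tool's actual API):

```python
import os

def pick_model() -> str:
    """Select the answering backend based on which API keys are configured.

    Preference order, per the release notes: GPT-5.2, then Gemini 3 Pro,
    then a plain auto-answer of "y" when no keys are available.
    """
    if os.environ.get("OPENAI_API_KEY"):
        return "gpt-5.2"
    if os.environ.get("GEMINI_API_KEY"):
        return "gemini-3-pro-preview"
    return "auto-answer-y"
```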
📦 Installation
```
pip install --upgrade let-claude-code
```

🔧 Setup
Add one or both API keys to your .env file:
```
# Option 1: Use GPT-5.2 (preferred for max reasoning)
OPENAI_API_KEY=sk-...

# Option 2: Use Gemini 3 Pro (fallback)
GEMINI_API_KEY=...

# Optional: Telegram notifications
TG_BOT_TOKEN=...
TG_CHAT_ID=...
```

🚀 Usage
```
# Run with AI auto-answer (tries GPT-5, then Gemini)
cook --loop -m fix_bugs --auto-gemini-answer -y

# Or use the wrapper script
./cook --loop -m fix_bugs --auto-gemini-answer -y
```

🔑 Get API Keys
🐛 Bug Fixes (from v0.2.2)
- Fixed stdin handling for Claude CLI - prompts are now sent and closed properly
- Resolved loop hang issues when Claude requests input
- Added safety checks for closed stdin in resumed sessions
- Improved error handling for AI API failures
📝 What's Changed
Full Changelog: v0.2.2...v0.3.0
When Claude asks a question during automation, the most advanced AI models will now answer it with maximum reasoning capabilities! 🎉
Release v0.2.2
🐛 Bug Fixes
Loop Hang Issues Resolved
- Fixed loop hang when Claude asks questions: Now auto-answers 'y' when Gemini is not enabled
- Fixed stdin handling: Properly closes stdin when not needed to prevent process hangs
- Added loop delay: Implements 10-second delay when runs complete quickly (< 30s) to prevent rapid failures
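The loop-delay behavior might look roughly like this. The 30-second threshold and 10-second delay come from the notes above; parameter names are illustrative, and `max_runs` is added here only to keep the sketch bounded:

```python
import time

def run_loop(run_once, min_run_seconds=30, delay_seconds=10, max_runs=None):
    """Re-run Claude, pausing after iterations that finish suspiciously fast.

    If a run completes in under `min_run_seconds`, sleep `delay_seconds`
    before the next one to avoid rapid-fire failures. `max_runs` bounds the
    loop for demonstration; the real tool runs until stopped.
    """
    runs = 0
    while max_runs is None or runs < max_runs:
        start = time.monotonic()
        run_once()
        runs += 1
        if time.monotonic() - start < min_run_seconds:
            time.sleep(delay_seconds)
    return runs
```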
Gemini Integration Improvements
- Enhanced error logging: Added detailed HTTP/URL error messages for easier debugging
- Better error handling: Improved fallback behavior when Gemini API fails
UX Improvements
- Added `--auto-yes` flag: Skip lock file confirmation prompts for automated workflows
- Created `cook` wrapper script: Bypasses Python module caching issues for reliable execution
- Fixed module loading: Resolved `__init__.py` import caching problems
📦 Installation
```
pip install --upgrade let-claude-code
```

🚀 Usage
Use the new wrapper script to avoid caching issues:
```
# Basic usage
./cook --loop -m fix_bugs -y

# With Gemini auto-answer
./cook --loop -m fix_bugs --auto-gemini-answer -y
```

Or use the installed command:

```
cook --loop -m fix_bugs -y
```

🔧 Technical Details
This release resolves a critical issue where the automator would hang indefinitely when Claude requested user input and Gemini integration was not enabled. The fix ensures all `input_required` messages are handled regardless of the Gemini configuration.
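The fallback described here might be sketched as follows (the message shape and all names are assumptions):

```python
def answer_input_required(msg, gemini_enabled=False, ask_gemini=None):
    """Answer an input_required message even when Gemini is disabled.

    With Gemini off (or unavailable), fall back to a plain "y" so the
    loop never blocks waiting for user input.
    """
    if msg.get("type") != "input_required":
        return None  # not a question; nothing to answer
    if gemini_enabled and ask_gemini is not None:
        return ask_gemini(msg.get("question", ""))
    return "y"
```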
Release v0.2.1
feat: prompt to remove stale lock file
fix: re-add input_required handling for --auto-gemini-answer
Release v0.2.0
fix: remove input_required handling that was causing hangs
The input_required handling was causing the loop to hang when Claude asked questions. Removed to restore original behavior.
Release v0.1.9
fix: support TELEGRAM_BOT_TOKEN env var
feat: load .env file for environment variables
Release v0.1.8
feat: Add --auto-gemini-answer for autonomous Claude operation
- Auto-answer Claude's questions using Gemini API
- Sends TG notifications when Claude asks questions and Gemini answers
- Prompts for GEMINI_API_KEY if not set in env
- Support TELEGRAM_CHAT_ID env var
Release v0.1.7
Fix: Expand ~ in --claude flags
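Tilde expansion for user-supplied paths is a one-liner with `os.path.expanduser`; a path handed to `--claude` such as `~/bin/claude` (illustrative) expands to the user's home directory:

```python
import os.path

# A "~"-prefixed path given via --claude must be expanded before use,
# since the shell does not expand "~" inside quoted flag values.
claude_path = os.path.expanduser("~/bin/claude")
```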