All notable changes to this project are documented here. Dates use the ISO format (YYYY-MM-DD).
- Dynamic model discovery: authoritative `/backend-api/codex/models` catalog with per-account cache and strict allowlist.
- Personality caching: seeds Friendly/Pragmatic defaults from runtime model metadata when available.
- Logging safety: request logs redact `prompt_cache_key` when request logging is enabled.
- Catalog cache hygiene: invalid `codex-models-cache-<hash>.json` files are deleted on read.
- Config surface: removed legacy `codexMode` flag (no longer supported).
- Refreshed configuration, architecture, and troubleshooting docs to match hard-stop and catalog behavior.
Quarantine + Multi-Account Reliability release: safer storage handling, clearer recovery, and more deterministic account management.
- Account management: toggle/remove now targets identity (accountId/email/plan) to avoid stale index actions.
- Quarantine flow: corrupt files now quarantine under lock and salvage valid entries; legacy-only records are dropped during salvage.
- Legacy handling: hydration attempts run before quarantining missing-identity records.
- Hybrid selection: waits for token bucket availability when hybrid strategy has no tokens.
- Storage safety: quarantine writes and account management now re-read under lock to avoid overwriting concurrent updates.
- Quarantine retention: pruning now only occurs after successful writes; failures preserve existing quarantine files.
- Auth toasts: auth failures include the account label for faster debugging.
- Multi-account guide: clarified repair/quarantine behavior and legacy handling.
Parallel Rotation Resilience release: atomic sync across multiple machine sessions.
- Unauthorized Recovery: main request loop now recovers from `401 Unauthorized` by re-syncing with disk and retrying, matching the official Codex CLI's robustness.
- Token Overwrite Race: implemented Timestamp Arbitration in storage merges; the most recently active session (`lastUsed`) now always wins, preventing stale sessions from corrupting the authoritative machine state.
- Strict Identity Uniqueness: all account matching now strictly uses the `accountId + email + plan` composite key to ensure global uniqueness across different workspaces and subscriptions (see the sketch after this list).
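A minimal sketch of the two mechanisms above, using hypothetical field and type names (`StoredAccount`, `lastUsed` as epoch ms); the plugin's real types and merge rules may differ:

```typescript
interface StoredAccount {
  accountId: string;
  email: string;
  plan: string;
  lastUsed: number; // hypothetical: epoch ms of the last successful use
}

// Composite identity key: uniqueness across workspaces and subscriptions.
const identityKey = (a: StoredAccount) => `${a.accountId}|${a.email}|${a.plan}`;

// Timestamp arbitration: when the same identity appears on disk and in memory,
// the most recently active copy wins.
function mergeAccounts(disk: StoredAccount[], memory: StoredAccount[]): StoredAccount[] {
  const merged = new Map<string, StoredAccount>();
  for (const account of [...disk, ...memory]) {
    const key = identityKey(account);
    const existing = merged.get(key);
    if (!existing || account.lastUsed > existing.lastUsed) merged.set(key, account);
  }
  return [...merged.values()];
}
```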
Reliability & Identity Hardening release: status tracking stability, memory safety, and identity-based tracking.
- Identity-Based Trackers: `HealthScoreTracker` and `TokenBucketTracker` now key on `accountId|email|plan` instead of array index for stability across account changes (see the sketch after this list).
- Periodic Cleanup: both trackers now auto-prune stale entries (24h for health, 1h for tokens) to prevent memory growth.
- Console Logging: migrated `console.error` calls to `logWarn` to respect debug settings and avoid TUI corruption.
- `logCritical()`: new always-enabled logger for critical issues; it bypasses debug flags.
- Toast Notifications: integrated a debounced toast when switching accounts due to rate limits (respects the `quietMode` config).
- Debug Logging: added status fetch failure logging when `OPENCODE_OPENAI_AUTH_DEBUG=1`.
- Memory Leak: `RateLimitTracker` now cleans up stale entries periodically (every 60s).
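A minimal sketch of an identity-keyed tracker with stale-entry pruning; the class and field names are illustrative, not the shipped `HealthScoreTracker`/`TokenBucketTracker` internals:

```typescript
// Hypothetical identity-keyed tracker with periodic stale-entry pruning.
class IdentityKeyedTracker<T> {
  private entries = new Map<string, { value: T; touchedAt: number }>();

  constructor(private maxAgeMs: number) {}

  // Keying on accountId|email|plan keeps entries stable when the account array changes.
  private key(accountId: string, email: string, plan: string): string {
    return `${accountId}|${email}|${plan}`;
  }

  set(accountId: string, email: string, plan: string, value: T): void {
    this.entries.set(this.key(accountId, email, plan), { value, touchedAt: Date.now() });
  }

  get(accountId: string, email: string, plan: string): T | undefined {
    return this.entries.get(this.key(accountId, email, plan))?.value;
  }

  // Called periodically (e.g. from a setInterval) to bound memory growth.
  prune(now = Date.now()): void {
    for (const [key, entry] of this.entries) {
      if (now - entry.touchedAt > this.maxAgeMs) this.entries.delete(key);
    }
  }
}

// Illustrative retention windows: 24h for health scores, 1h for token buckets.
const healthScores = new IdentityKeyedTracker<number>(24 * 60 * 60 * 1000);
const tokenBuckets = new IdentityKeyedTracker<number>(60 * 60 * 1000);
```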
Authoritative Status release: active fetching from official OpenAI /wham/usage endpoints and perfect protocol alignment.
- Active Usage Fetching: status tracking now actively fetches real-time rate limit data from `https://chatgpt.com/backend-api/wham/usage` (ChatGPT plans) and `https://api.openai.com/api/codex/usage` (API plans); see the sketch after this list.
- Protocol Alignment: refactored `CodexStatusManager` to match the official `codex-rs v0.92.0` data structures and TUI formatting.
- Detailed Reset Dates: long-term resets (>24h) now display the full date and time (e.g., `resets 18:10 on 5 Feb`).
- Inverted Usage Display: status bars now show "% left" instead of "% used", correctly representing remaining quota.
- Standardized Labels: updated window labels to "5 hour limit:" and "Weekly limit:".
- Proactive Hydration: status fetches now force a token refresh and identity repair to ensure the authoritative `/usage` endpoint receives a valid Bearer token.
- Enhanced UI Alignment: applied strict padding and "Always Render Both" logic to ensure vertical and horizontal table stability even with missing or "unknown" data.
- Memory Safety: added length guards to the SSE stream buffer to prevent memory exhaustion from malformed backend responses.
- Stale Data Capture: replaced fragile SSE/Header capture fallbacks with reliable direct polling of the official usage API.
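A minimal sketch of the direct-polling idea, assuming a hypothetical response shape (`windows`, `usedPercent`, `resetsAt` are illustrative field names); the real payload mapping lives in the plugin:

```typescript
// Hypothetical direct poll of the official usage endpoints.
interface UsageWindow {
  usedPercent: number; // assumed field name
  resetsAt?: string;   // assumed ISO timestamp
}

async function fetchUsage(accessToken: string, isChatGptPlan: boolean): Promise<UsageWindow[]> {
  const url = isChatGptPlan
    ? "https://chatgpt.com/backend-api/wham/usage"
    : "https://api.openai.com/api/codex/usage";
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) throw new Error(`usage fetch failed: ${res.status}`);
  const body = await res.json();
  // Mapping from the raw payload to windows is plugin-specific; shown here as a stub.
  return (body.windows ?? []) as UsageWindow[];
}

// Status bars display "% left" rather than "% used".
const percentLeft = (w: UsageWindow) => Math.max(0, 100 - w.usedPercent);
```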
Cache Retention release: automatic pruning of stale snapshot data.
- Snapshot Retention: implemented automatic pruning for snapshots older than 7 days to prevent accumulation of stale or contaminated data.
- Diagnostic Logging: added `OPENCODE_OPENAI_AUTH_DEBUG=1` support for troubleshooting identity key generation and header parsing.
Global Cache path release: ensure cross-process visibility.
- Global Snapshot Path: corrected `getCachePath` to always use the system configuration directory (`~/.config/opencode/cache`), ensuring that rate limit data captured by the proxy is visible across project scopes.
- Table Alignment: refactored account/status table output into a strict ASCII format to prevent horizontal shifting.
Storage Hardening release: restrictive permissions for account files.
- Security Hardening: primary account storage now uses restrictive `0600` permissions on creation/update, matching the hardening applied to snapshots.
Final Hardening & Concurrency Safety release: async status manager, promise-based initialization, and cross-process safety.
- Detailed Inline Errors: 429 rate-limit responses now return structured inline account status summaries showing which accounts are exhausted and their reset times.
- Async Status Hardening: refactored `CodexStatusManager` to use non-blocking async I/O (`fs.promises`) and promise-based initialization gates to prevent concurrency races.
- Cross-Process Hydration: ensured status snapshots are stored globally even when using per-project account storage, allowing all projects to share real-time rate limit visibility.
- Status UI Refinement: refactored account/status output into a strictly aligned ASCII table format for better readability.
- Lost Updates Prevention: implemented timestamp-based (`updatedAt`) merge arbitration under lock (`proper-lockfile`) to ensure the newest state wins across concurrent processes (see the sketch after this list).
- Security Hardening: primary account and snapshot cache files now use restrictive `0600` permissions.
- Initialization Race: resolved a race condition where concurrent calls to the status manager could result in stale or partial data during the initial disk load.
- Test Fixture Alignment: refactored all unit tests to strictly use repository fixtures for identities and snapshots, removing all hardcoded mocks.
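A minimal sketch of the locked, newest-wins write path, assuming a hypothetical snapshot shape with an `updatedAt` timestamp; the real merge logic is richer:

```typescript
import { promises as fs } from "node:fs";
import lockfile from "proper-lockfile";

interface Snapshot {
  updatedAt: number; // hypothetical: epoch ms of the producing process's last update
  data: unknown;
}

// Re-read under lock, keep whichever state is newer, then write with restrictive permissions.
// Assumes the file already exists (proper-lockfile resolves the target path by default).
async function saveSnapshot(path: string, next: Snapshot): Promise<void> {
  const release = await lockfile.lock(path, { retries: 5 });
  try {
    let current: Snapshot | undefined;
    try {
      current = JSON.parse(await fs.readFile(path, "utf8"));
    } catch {
      // Missing or corrupt file: fall through and write `next`.
    }
    const winner = current && current.updatedAt > next.updatedAt ? current : next;
    await fs.writeFile(path, JSON.stringify(winner), { mode: 0o600 });
  } finally {
    await release();
  }
}
```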
Persistence fix release: cross-process snapshot visibility.
- Persistent Snapshots: rate limit data is now persisted to `~/.config/opencode/cache/codex-snapshots.json` for cross-process visibility between the proxy and CLI tools.
- Status Fixtures: added snapshot and header fixtures for deterministic testing of rate limit parsing and rendering.
Per-project storage & Codex status release: isolated account storage and real-time rate limit monitoring.
- Codex Status Tool: added `status-codex` and `openai-accounts` tools to display real-time ASCII status bars for rate limits and credits.
- Per-Project Storage: added the `perProjectAccounts` config flag to isolate OpenAI accounts within `.opencode/` for specific projects.
- Identity Matching: standardized plan casing (e.g., "plus" -> "Plus") at the storage level to ensure reliable matching.
- Concurrent Session Safety: implemented `originalRefreshToken` tracking and disk-fallback lazy loading to prevent token drift and rollbacks in multi-process environments.
CI publish fix: use Node.js 24+ (npm 11.5.1+) so npm Trusted Publishing (OIDC) is supported in GitHub Actions.
- Release workflow: publish now uses npm Trusted Publishing (OIDC) instead of legacy tokens.
CI publish attempt: v4.5.8 tag publish failed because the GitHub runner npm CLI did not support Trusted Publishing.
Hardening release: account repair/quarantine UX, safer locking, and better production ergonomics.
- Repair + quarantine UX: detect corrupt storage / legacy identity records and prompt to repair during login; auto-repair once on first send when no eligible accounts.
- Wrap-safe messaging: toast/status formatting helpers to keep TUI output readable.
- Account controls: `openai-accounts-toggle` tool to enable/disable an account by index.
- Storage locking: lock paths ensure the storage file exists before acquiring `proper-lockfile` (antigravity-style).
- Migration safety: legacy migration runs under the storage lock to avoid cross-process races.
- Quarantine safety: quarantine copies attempt `0600` permissions, and older quarantine files may be pruned to avoid unbounded buildup.
- Write robustness: `.tmp` files are cleaned up on save failures.
- Manual OAuth security: validate the OAuth `state` when provided; recommend pasting the full redirect URL.
- Release pipeline: GitHub Actions publishes to npm via OIDC provenance.
- Disabled account safety: disabled accounts are excluded from refresh/hydration and proactive refresh.
- Multi-account docs: document repair/quarantine behavior, account toggle, and retention notes.
Multi-account parity release: strict identity, account management, and refresh/hydration reliability.
- Account management: `opencode auth login` now offers a manage mode to enable/disable accounts; storage persists `enabled`.
- Background refresh: proactive refresh queue/scheduler can refresh tokens ahead of expiry (config-flagged).
- Multi-account stability: locking/rotation hardening under load.
- Strict identity matching: accounts match on `accountId+email+plan`.
- Legacy hydration: refresh-based hydration fills missing email/accountId/plan; it is throttled and skips disabled accounts.
- Wait-time calculation: hydrates legacy identities before wait-time checks and ignores disabled accounts.
- Rate-limit backoff: exponential backoff replaces linear retry scaling (see the sketch after this list).
- Refresh token safety: lock refresh usage and retry when disk updates occur.
- Active index remap: active indices remap after refresh token dedupe.
- Legacy identity hydration: plan-only records hydrate via refresh tokens; access-token claims used when id token lacks claims.
- Disabled account safety: disabled accounts are excluded from hydration and wait-time calculations.
- Fixtures/JWTs: align account fixtures and JWT payloads; add hydration fallback coverage.
- Config defaults: sync plugin config default tests.
- Multi-account docs: updated manage flow, identity rules, and storage fields.
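A minimal sketch of exponential backoff with jitter for rate-limited retries; the constants are illustrative, not the plugin's actual values:

```typescript
// Illustrative exponential backoff with jitter for retryable failures (e.g. 429s).
async function withBackoff<T>(
  attempt: () => Promise<T>,
  maxRetries = 5,
  baseDelayMs = 1_000,
  maxDelayMs = 60_000,
): Promise<T> {
  for (let retry = 0; ; retry++) {
    try {
      return await attempt();
    } catch (err) {
      if (retry >= maxRetries) throw err;
      // Delay doubles each retry instead of growing linearly, capped at maxDelayMs.
      const exponential = Math.min(maxDelayMs, baseDelayMs * 2 ** retry);
      const jitter = Math.random() * exponential * 0.25;
      await new Promise((resolve) => setTimeout(resolve, exponential + jitter));
    }
  }
}
```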
Release metadata: version bump only (no functional changes).
- Release/tag metadata only.
Bugfix release: avoid plan collision during auth fallback hydration.
- Hydration fallback: avoid plan collisions when hydrating auth fallback.
- Repo hygiene: removed AGENTS doc from repo.
Bugfix + tooling release: migrate plugin paths and protect account saves.
- Release automation: auto-tag release workflow.
- Plugin path migration: migrate plugin paths and protect account saves.
Bugfix release: match accounts by plan and render OAuth version.
- Account matching: include plan to prevent overwrites.
- OAuth success banner: render OAuth version on success page.
Bugfix release: atomic account saves and rate-limit key dedupe.
- Storage: account saves are atomic.
- Rate-limit keys: dedupe per-family/model keys.
Feature release: align login UX and capture plan info.
- Plan capture: store ChatGPT plan from OAuth JWT.
- Login UX: align OpenAI login flow with antigravity UX.
Bugfix release: safer installer plugin removal.
- Installer plugin removal: avoid substring collisions when removing plugin entries.
Bugfix release: installer consistency improvements.
- Installer pinning: keep the plugin at `@latest` during install/updates.
- Repo hygiene: ignore local `BUG_FIXES` notes.
Security release: patch JWT middleware vulnerability.
- Security: `hono` JWT middleware vulnerability resolved (audit fix).
- Repo hygiene: ignore local third-account test script.
Bugfix release: make TUI login non-interactive; improve account migration reliability.
- CLI vs TUI auth mismatch: `opencode auth login` keeps the full multi-account workflow (add/fresh + add-another prompts), while TUI-based login no longer overlays terminal prompts on the UI.
- TUI login flow: provider selection in the TUI now performs a single login and returns to the provider list (antigravity-style behavior).
- Migration behavior: when both legacy (`~/.opencode/`) and new (`~/.config/opencode/`) account files exist, the plugin merges and deduplicates accounts instead of ignoring the legacy file.
- Debug gating: auth/storage debug output stays behind `OPENCODE_OPENAI_AUTH_DEBUG=1`.
Compliance release: third-party notices for MIT-derived code.
- Added `THIRD_PARTY_NOTICES.md` with the MIT license text for `NoeFabris/opencode-antigravity-auth`.
Bugfix release: fixes broken terminal input after OAuth login.
- Restores terminal raw mode/mouse tracking after interactive auth prompts to prevent mouse movements being interpreted as typed input.
Bugfix release: align account/config storage with OpenCode's config directory.
- Store `openai-codex-accounts.json` and `openai-codex-auth-config.json` under `~/.config/opencode/`.
- Automatically migrate legacy files from `~/.opencode/` on startup.
- `--uninstall --all` removes both the new and legacy locations.
Multi-account strategy release: hybrid selection and expanded docs.
- Hybrid selection strategy: `accountSelectionStrategy: "hybrid"` (health score + token bucket + LRU bias); see the sketch after this list.
- Multi-account docs: Expanded to include strategy descriptions and manual configuration examples (antigravity-inspired).
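A minimal sketch of how a hybrid score could combine the three signals; the weights and formula are illustrative, not the plugin's actual scoring:

```typescript
// Illustrative hybrid account scoring: health score + token-bucket level + LRU bias.
interface AccountState {
  healthScore: number;     // 0..1, degraded by recent failures (assumed range)
  tokensAvailable: number; // current token-bucket level
  bucketCapacity: number;
  lastUsed: number;        // epoch ms
}

function hybridScore(a: AccountState, now = Date.now()): number {
  const tokenRatio = a.bucketCapacity > 0 ? a.tokensAvailable / a.bucketCapacity : 0;
  // Older lastUsed -> larger bias, nudging selection toward least-recently-used accounts.
  const lruBias = Math.min(1, (now - a.lastUsed) / (60 * 60 * 1000));
  return 0.5 * a.healthScore + 0.35 * tokenRatio + 0.15 * lruBias;
}

function pickAccount(accounts: AccountState[]): AccountState | undefined {
  return [...accounts].sort((x, y) => hybridScore(y) - hybridScore(x))[0];
}
```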
Fork maintenance release: publish-ready metadata + installer alignment.
- npm publish compatibility: Fixes `bin` paths so `npx -y opencode-openai-codex-multi-auth@latest` runs the installer.
- Fork docs/installer: Uses the fork package name by default and migrates legacy identifiers.
- OAuth success page: Updates banner to the fork package name and version.
Maintenance release: OAuth success page version sync.
- OAuth success banner: Updates the success page header to display the current release version.
Installer safety release: JSONC support, safe uninstall, and minimal reasoning clamp.
- JSONC-aware installer: preserves comments/formatting and prioritizes `opencode.jsonc` over `opencode.json` (see the sketch after this list).
- Safe uninstall: `--uninstall` removes only plugin entries + our model presets; `--all` removes tokens/logs/cache.
- Installer tests: coverage for JSONC parsing, precedence, uninstall safety, and artifact cleanup.
- Default config path: installer creates `~/.config/opencode/opencode.jsonc` when no config exists.
- Dependency: `jsonc-parser` added to keep JSONC updates robust and comment-safe.
- Minimal reasoning clamp: `minimal` is now normalized to `low` for GPT-5.x requests to avoid backend rejection.
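A minimal sketch of a comment-preserving config edit with `jsonc-parser`, assuming a hypothetical `plugin` array in `opencode.jsonc`; the installer's real edit paths and options differ:

```typescript
import { promises as fs } from "node:fs";
import { applyEdits, modify, parse } from "jsonc-parser";

// Append an entry to a hypothetical "plugin" array while keeping comments and formatting.
async function addPluginEntry(configPath: string, entry: string): Promise<void> {
  const text = await fs.readFile(configPath, "utf8");
  const config = parse(text) ?? {};
  const plugins: string[] = Array.isArray(config.plugin) ? config.plugin : [];
  if (plugins.includes(entry)) return; // already installed

  // `modify` returns minimal text edits (path index -1 appends to the array);
  // `applyEdits` splices them into the original string, so comments survive.
  const edits = modify(text, ["plugin", -1], entry, {
    isArrayInsertion: true,
    formattingOptions: { insertSpaces: true, tabSize: 2 },
  });
  await fs.writeFile(configPath, applyEdits(text, edits));
}
```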
Feature + reliability release: variants support, one-command installer, and auth/error handling fixes.
- One-command installer/update: `npx -y opencode-openai-codex-auth@latest` (global config, backup, cache clear) with `--legacy` for OpenCode v1.0.209 and below.
- Modern variants config: `config/opencode-modern.json` for OpenCode v1.0.210+; legacy presets remain in `config/opencode-legacy.json`.
- Installer CLI bundled as a package bin for cross-platform use (Windows/macOS/Linux).
- Variants-aware request config: respects host-supplied `body.reasoning`/`providerOptions.openai` before falling back to defaults (see the sketch after this list).
- OpenCode prompt source: updates to the current upstream repository (`anomalyco/opencode`).
- Docs/README: install-first layout with leaner guidance and explicit legacy path.
- Headless login fallback: missing `xdg-open` no longer fails the OAuth flow; manual URL paste stays available.
- Error handling alignment: refresh failures throw; usage-limit 404s map to retryable 429s where appropriate.
- AGENTS.md preservation: protected instruction markers stop accidental filtering of user instructions.
- Tool-call integrity: orphan outputs now match `local_shell_call` and `custom_tool_call` (Codex CLI parity); unmatched outputs preserved as assistant messages.
- Logging noise: debug logging gated behind flags to prevent stdout bleed.
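A minimal sketch of the precedence described above, using hypothetical option names and defaults; the actual values come from the host request and per-model configuration:

```typescript
// Hypothetical resolution of reasoning options: host-supplied values win over defaults.
interface ReasoningOptions {
  effort?: string;
  summary?: string;
}

function resolveReasoning(
  body: {
    reasoning?: ReasoningOptions;
    providerOptions?: { openai?: { reasoning?: ReasoningOptions } };
  },
  defaults: Required<ReasoningOptions>,
): Required<ReasoningOptions> {
  const hostSupplied = body.reasoning ?? body.providerOptions?.openai?.reasoning ?? {};
  return {
    effort: hostSupplied.effort ?? defaults.effort,
    summary: hostSupplied.summary ?? defaults.summary,
  };
}

// Example: falls back to per-model defaults only when the host sent nothing.
const resolved = resolveReasoning({}, { effort: "medium", summary: "on" });
```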
Feature release: GPT 5.2 Codex support and prompt alignment with latest Codex CLI.
- GPT 5.2 Codex model family: Full support for `gpt-5.2-codex` with presets:
  - `gpt-5.2-codex-low` - Fast GPT 5.2 Codex responses
  - `gpt-5.2-codex-medium` - Balanced GPT 5.2 Codex tasks
  - `gpt-5.2-codex-high` - Complex GPT 5.2 Codex reasoning & tools
  - `gpt-5.2-codex-xhigh` - Deep GPT 5.2 Codex long-horizon work
- New model family prompt: `gpt-5.2-codex_prompt.md` fetched from the latest Codex CLI release with its own cache file.
- Test coverage: Added unit tests for GPT 5.2 Codex normalization, family selection, and reasoning behavior.
- Prompt selection alignment: GPT 5.2 general now uses `gpt_5_2_prompt.md` (Codex CLI parity).
- Reasoning configuration: GPT 5.2 Codex supports `xhigh` but does not support `"none"`; `"none"` auto-upgrades to `"low"` and `"minimal"` normalizes to `"low"`.
- Config presets: `config/opencode-legacy.json` includes the 22 pre-configured presets (adds GPT 5.2 Codex); `config/opencode-modern.json` provides the variant-based setup.
- Docs: Updated README/AGENTS/config docs to include GPT 5.2 Codex and new model family behavior.
Minor release: "none" reasoning effort support, orphaned function_call_output fix, and HTML version update.
- "none" reasoning effort support: GPT-5.1 and GPT-5.2 support
reasoning_effort: "none"which disables the reasoning phase entirely. This can result in faster responses when reasoning is not needed.gpt-5.2-none- GPT-5.2 with reasoning disabledgpt-5.1-none- GPT-5.1 with reasoning disabled
- 4 new unit tests for "none" reasoning behavior (now 197 total unit tests).
- Orphaned function_call_output 400 errors: Fixed API errors when conversation history contains `item_reference` pointing to stored function calls. Previously, orphaned `function_call_output` items were only filtered when `!body.tools`. Orphans are now always handled regardless of tools presence and converted to assistant messages, preserving context while avoiding API errors.
- OAuth HTML version display: Updated the version in oauth-success.html from 1.0.4 to 4.1.0.
- `getReasoningConfig()` now detects GPT-5.1 general-purpose models (not Codex variants) and allows "none" to pass through.
- GPT-5.2 inherits "none" support as it is newer than GPT-5.1.
- Codex variants (`gpt-5.1-codex`, `gpt-5.1-codex-max`, `gpt-5.1-codex-mini`) do NOT support "none":
  - Codex and Codex Max: "none" auto-converts to "low"
  - Codex Mini: "none" auto-converts to "medium" (as before)
- Documentation updated with the complete reasoning effort support matrix per model family (see the sketch after this section).
- OpenAI API docs (platform.openai.com/docs/api-reference/chat/create): "gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high."
- Codex CLI (`codex-rs/protocol/src/openai_models.rs`): the `ReasoningEffort` enum includes a `None` variant with `#[serde(rename_all = "lowercase")]` serialization to `"none"`.
- Codex CLI (`codex-rs/core/src/client.rs`): the request builder passes `ReasoningEffort::None` through to the API without validation/rejection.
- Codex CLI (`docs/config.md`): documents `model_reasoning_effort = "none"` as a valid config option.
- This plugin defaults to "medium" for better coding assistance; users must explicitly set "none" if desired.
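A minimal sketch of the per-family effort rules listed above; the function and type names are illustrative rather than the plugin's exports:

```typescript
// Illustrative normalization of reasoning effort per model family.
type ModelFamily = "gpt-5.1" | "gpt-5.2" | "codex" | "codex-max" | "codex-mini";
type ReasoningEffort = "none" | "minimal" | "low" | "medium" | "high" | "xhigh";

function normalizeEffort(family: ModelFamily, effort: ReasoningEffort): ReasoningEffort {
  if (effort === "none") {
    // General-purpose GPT-5.1/5.2 pass "none" through; Codex families do not support it.
    if (family === "gpt-5.1" || family === "gpt-5.2") return "none";
    return family === "codex-mini" ? "medium" : "low";
  }
  if (effort === "minimal") return "low"; // backend rejects "minimal" on GPT-5.x
  if (effort === "xhigh" && family === "codex-mini") return "high"; // Mini caps at high
  return effort;
}
```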
Feature release: GPT 5.2 model support and image input capabilities.
- GPT 5.2 model family support: Full support for OpenAI's latest GPT 5.2 model with 4 reasoning level presets:
  - `gpt-5.2-low` - Fast responses with light reasoning
  - `gpt-5.2-medium` - Balanced reasoning for general tasks
  - `gpt-5.2-high` - Complex reasoning and analysis
  - `gpt-5.2-xhigh` - Deep multi-hour analysis (same as Codex Max)
- Full image input support: All 16 model variants now include `modalities.input: ["text", "image"]`, enabling full multimodal capabilities - read screenshots, diagrams, UI mockups, and any image directly in OpenCode.
- GPT 5.2 model family added to `codex.ts` with dedicated prompt handling.
- Test coverage: Updated integration tests to verify all 16 models (was 13); now 193 unit tests + 16 integration tests.
- Model ordering: Config now ordered by model family priority: GPT 5.2 → Codex Max → Codex → Codex Mini → GPT 5.1.
- Removed default presets: Removed `gpt-5.1-codex-max` and `gpt-5.2` (without reasoning suffix) to enforce explicit reasoning level selection.
- Test script: `scripts/test-all-models.sh` now uses the local dist for testing and includes GPT 5.2 tests.
- Documentation: Updated README with GPT 5.2 models, image support, and a condensed config example.
- GPT 5.2 maps to the `gpt-5.2` API model with the same reasoning options as Codex Max (low/medium/high/xhigh).
- `getModelFamily()` now returns `"gpt-5.2"` for GPT 5.2 models, using Codex Max prompts.
- `getReasoningConfig()` treats GPT 5.2 like Codex Max for `xhigh` reasoning support.
- Model normalization pattern matching updated to recognize GPT 5.2 before other patterns.
Bugfix release: Fixes compaction context loss, agent creation, and SSE/JSON response handling.
- Compaction losing context: v4.0.1 was too aggressive in filtering tool calls - it removed ALL `function_call`/`function_call_output` items when tools weren't present. Now only orphaned outputs (those without matching calls) are filtered, preserving matched pairs for compaction context (see the sketch after this list).
- Agent creation failing: The `/agent create` command was failing with "Invalid JSON response" because we were returning SSE streams instead of JSON for `generateText()` requests.
- SSE/JSON response handling: Properly detect original request intent - `streamText()` requests get SSE passthrough, `generateText()` requests get SSE→JSON conversion.
- `gpt-5.1-chat-latest` model support: Added to the model map; normalizes to `gpt-5.1`.
- Root cause of compaction issue: OpenCode sends `item_reference` with `fc_*` IDs for function calls. We filter these for stateless mode, but v4.0.1 then removed ALL tool items. Now we only remove orphaned `function_call_output` items (where no matching `function_call` exists).
- Root cause of agent creation issue: We were forcing `stream: true` for all requests and returning SSE for all responses. Now we capture the original `stream` value before transformation and convert SSE→JSON only when the original request wasn't streaming.
- The Codex API always receives `stream: true` (required), but response handling is based on original intent.
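A minimal sketch of the orphan-only filtering described above, using simplified item shapes; the plugin's real input items carry more fields:

```typescript
// Simplified Responses-style input items for illustration.
type InputItem =
  | { type: "function_call"; call_id: string; name: string; arguments: string }
  | { type: "function_call_output"; call_id: string; output: string }
  | { type: "message"; role: string; content: string };

// Keep matched call/output pairs; drop only outputs whose call is missing
// (e.g. because an item_reference to a stored call was stripped for stateless mode).
function filterOrphanedOutputs(items: InputItem[]): InputItem[] {
  const callIds = new Set(
    items.flatMap((i) => (i.type === "function_call" ? [i.call_id] : [])),
  );
  return items.filter(
    (i) => i.type !== "function_call_output" || callIds.has(i.call_id),
  );
}
```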
Bugfix release: Fixes API errors during summary/compaction and GitHub rate limiting.
- Orphaned `function_call_output` errors: Fixed 400 errors during summary/compaction requests when OpenCode sends `item_reference` pointers to server-stored function calls. The plugin now filters out `function_call` and `function_call_output` items when no tools are present in the request.
- GitHub API rate limiting: Added a fallback mechanism when fetching Codex instructions from GitHub. If the API returns 403 (rate limit), the plugin now falls back to parsing the HTML releases page.
- Root cause: OpenCode's secondary model (gpt-5-nano) uses `item_reference` with `fc_*` IDs to reference stored function calls. Our plugin filters `item_reference` for stateless mode (`store: false`), leaving `function_call_output` orphaned. The Codex API rejects requests with orphaned outputs.
- Fix: When `hasTools === false`, filter out all `function_call` and `function_call_output` items from the input array.
- GitHub fallback chain: API endpoint → HTML page → redirect URL parsing → HTML regex parsing.
Major release: Complete prompt engineering overhaul matching official Codex CLI behavior, with full GPT-5.1 Codex Max support.
- Full Codex Max support with dedicated prompt including frontend design guidelines
- Model-specific prompts matching Codex CLI's prompt selection logic
- GPT-5.0 → GPT-5.1 migration as legacy models are phased out
- Model-specific system prompts: The plugin now fetches the correct Codex prompt based on model family, matching Codex CLI's `model_family.rs` logic (see the sketch after this section):
  - `gpt-5.1-codex-max*` → `gpt-5.1-codex-max_prompt.md` (117 lines, includes frontend design guidelines)
  - `gpt-5.1-codex*`, `gpt-5.1-codex-mini*` → `gpt_5_codex_prompt.md` (105 lines, focused coding prompt)
  - `gpt-5.1*` → `gpt_5_1_prompt.md` (368 lines, full behavioral guidance)
- New `ModelFamily` type (`"codex-max" | "codex" | "gpt-5.1"`) for prompt selection.
- New `getModelFamily()` function to determine prompt selection based on the normalized model name.
- Model family now logged in request logs for debugging (`modelFamily` field in after-transform logs).
- 16 new unit tests for model family detection (now 191 total unit tests).
- Integration tests now verify correct model family selection (13 integration tests with family verification).
- Legacy GPT-5.0 models now map to GPT-5.1: All legacy `gpt-5` model variants automatically normalize to their `gpt-5.1` equivalents as GPT-5.0 is being phased out by OpenAI:
  - `gpt-5-codex` → `gpt-5.1-codex`
  - `gpt-5` → `gpt-5.1`
  - `gpt-5-mini`, `gpt-5-nano` → `gpt-5.1`
  - `codex-mini-latest` → `gpt-5.1-codex-mini`
- Lazy instruction loading: Instructions are now fetched per-request based on model family (not pre-loaded at initialization).
- Separate caching per model family: Each model family has its own cached prompt file:
  - `codex-max-instructions.md` + `codex-max-instructions-meta.json`
  - `codex-instructions.md` + `codex-instructions-meta.json`
  - `gpt-5.1-instructions.md` + `gpt-5.1-instructions-meta.json`
- Fixed OpenCode prompt cache URL to fetch from the `dev` branch instead of the non-existent `main` branch.
- Fixed the model configuration test script to correctly identify model logs in multi-model sessions (OpenCode uses a small model like `gpt-5-nano` for title generation alongside the user's selected model).
This release brings full parity with Codex CLI's prompt engineering:
- Codex family (105 lines): Concise, tool-focused prompt for coding tasks
- Codex Max family (117 lines): Adds frontend design guidelines for UI work
- GPT-5.1 general (368 lines): Comprehensive behavioral guidance, personality, planning
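A minimal sketch of family detection and prompt-file mapping for the families listed above; the prefixes and file names mirror the mapping described in this release, but the function itself is illustrative:

```typescript
// Illustrative model-family detection on a normalized model name.
type ModelFamily = "codex-max" | "codex" | "gpt-5.1";

function getModelFamily(normalizedModel: string): ModelFamily {
  // Order matters: the most specific prefix must be checked first.
  if (normalizedModel.startsWith("gpt-5.1-codex-max")) return "codex-max";
  if (normalizedModel.startsWith("gpt-5.1-codex")) return "codex"; // includes codex-mini
  return "gpt-5.1";
}

// Per-family prompt files, each with its own cache entry.
const PROMPT_FILES: Record<ModelFamily, string> = {
  "codex-max": "gpt-5.1-codex-max_prompt.md",
  codex: "gpt_5_codex_prompt.md",
  "gpt-5.1": "gpt_5_1_prompt.md",
};
```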
- GPT 5.1 Codex Max support: normalization, per-model defaults, and new presets (`gpt-5.1-codex-max`, `gpt-5.1-codex-max-xhigh`) with extended reasoning options (including `none`/`xhigh`) while keeping the 272k context / 128k output limits.
- Typing and config support for the new reasoning options (`none`/`xhigh`, summary `off`/`on`) plus updated test matrix entries.
- Codex Mini clamping now downgrades unsupported `xhigh` to `high` and guards against `none`/`minimal` inputs.
- Documentation, config guides, and validation scripts now reflect 13 verified GPT 5.1 variants (3 codex, 5 codex-max, 2 codex-mini, 3 general), including Codex Max. See the README for details on pre-configured variants.
- GPT 5.1 model family support: normalization for `gpt-5.1`, `gpt-5.1-codex`, and `gpt-5.1-codex-mini` plus new GPT 5.1-only presets in the canonical `config/opencode-legacy.json`.
- Documentation updates (README, docs, AGENTS) describing the 5.1 families, their reasoning defaults, and how they map to ChatGPT slugs and token limits.
- Model normalization docs and tests now explicitly cover both 5.0 and 5.1 Codex/general families and the two Codex Mini tiers.
- The legacy GPT 5.0 full configuration is now published separately; new installs should prefer the 5.1 presets in `config/opencode-legacy.json`.
- Codex Mini support end-to-end: normalization to the `codex-mini-latest` slug, proper reasoning defaults, and two new presets (`gpt-5-codex-mini-medium`/`gpt-5-codex-mini-high`).
- Documentation & configuration updates describing the Codex Mini tier (200k input / 100k output tokens) plus refreshed totals (11 presets, 160+ unit tests).
- Prevented Codex Mini from inheriting the lightweight (`minimal`) reasoning profile used by `gpt-5-mini`/`nano`, ensuring the API always receives supported effort levels.
- Codex-style usage-limit messaging that mirrors the 5-hour and weekly windows reported by the Codex CLI.
- Documentation guidance noting that OpenCode's context auto-compaction and usage sidebar require the canonical `config/opencode-legacy.json`.
- Prompt caching now relies solely on the host-supplied `prompt_cache_key`; conversation/session headers are forwarded only when OpenCode provides one.
- CODEX_MODE bridge prompt refreshed to the newest Codex CLI release so tool awareness stays in sync.
- Clarified README, docs, and configuration references so the canonical config matches shipped behaviour.
- Pinned `hono` (4.10.4) and `vite` (7.1.12) to resolve upstream security advisories.
- Comprehensive compliance documentation (ToS guidance, security, privacy) and a full user/developer doc set.
- Per-model configuration lookup, stateless multi-turn conversations, case-insensitive model normalization, and GitHub instruction caching.
- README cache-clearing snippet now runs in a subshell from the home directory to avoid path issues while removing cached plugin files.
- Enhanced CODEX_MODE bridge prompt with Task tool and MCP awareness plus ETag-backed verification of OpenCode system prompts.
- Request transformation made async to support prompt verification caching; AGENTS.md renamed to provide cross-agent guidance.
- Full TypeScript rewrite with strict typing, 123 automated tests, and nine pre-configured model variants matching the Codex CLI.
- CODEX_MODE introduced (enabled by default) with a lightweight bridge prompt and configurability via config file or the `CODEX_MODE` env var.
- Library reorganized into semantic folders (auth, prompts, request, etc.) and OAuth flow polished with the new success page.
- Major internal refactor splitting the runtime into focused modules (logger, request/response handlers) and removing legacy debug output.
- ETag-based GitHub caching for Codex instructions and release-tag tracking for more stable prompt updates.
- Default model fallback, text verbosity initialization, and standardized error logging prefixes.
- README clarifications: opencode auto-installs plugins, config locations, and streamlined quick-start instructions.
- Initial production release with ChatGPT Plus/Pro OAuth support, tool remapping, auto-updating Codex instructions, and zero runtime dependencies.