This repository provides an advanced agent configuration for OpenCode, designed to create a powerful and reliable software engineering assistant.
- Global Installation
- Verify Installation
- Basic Workflow
- Core Philosophy
- Components
- Directory Structure
This configuration is intended to be installed globally by cloning it directly into your OpenCode configuration directory. This makes the tools and skills available across all your projects.
- OpenCode CLI installed and available in your PATH.
IMPORTANT: Back up any existing configuration first. This prevents you from overwriting custom setups you may have.

mv ~/.config/opencode ~/.config/opencode.bak

Clone this repository directly into the ~/.config/opencode directory:

git clone git@github.com:apenlor/opencode-expert-mode.git ~/.config/opencode

This repository provides several example configuration files. Choose the one that best fits your environment and copy it to opencode.json.
cd ~/.config/opencode
# Choose one of the following:
# cp opencode.geminicli.example.json opencode.json # For Google Gemini
# cp opencode.github.example.json opencode.json # For GitHub Copilot
# cp opencode.local.example.json opencode.json # For local llama-swap models
cp opencode.hybrid.example.json opencode.json # For mixed-provider setups
# Also copy the agents configuration template
cp AGENTS.example.md AGENTS.md

You can now safely customize opencode.json and AGENTS.md without creating conflicts with future updates from this repository.
To ensure the configuration is correctly loaded:
- Start a new OpenCode session:
  opencode
- Ask the agent about its mode:
  What mode are you in?
  It should confirm that it is in "Expert Mode." This verifies the rules are loading correctly.
- Test a command: ask the agent to plan a simple task using a command.
  /write-plan "create a hello world script in python"
- Confirm behavior: the agent should respond with a structured implementation plan. This verifies that the commands and skills are working together correctly.
This configuration enables a structured, expert-guided development lifecycle using the provided commands.
Note: While these commands are convenient shortcuts, the skills are the true core of this configuration. They are designed to be used by any agent, enhancing its ability to reason and execute tasks effectively, regardless of how it's invoked.
- Design (/brainstorm): Start by exploring an idea to solidify requirements.
  /brainstorm "a web server that returns the current time"
- Plan (/write-plan): Generate a detailed, step-by-step implementation plan in the chat.
  /write-plan "a simple python flask server with one endpoint /time"
- Execute (/execute-plan): Instruct the agent to begin implementing the plan from the chat context.
  /execute-plan
- Debug (/debug): Instruct the agent to debug a feature or change.
  /debug "failing login test after auth refactor"
- Review (/review): Instruct the agent to review the code for a specific feature or change using the code-reviewer subagent.
  /review "flask server implementation"
- Review (@code-reviewer): After work is complete, call the specialized code reviewer for feedback.
  @code-reviewer Please review the flask server implementation.
The central idea of Expert Mode is a "Skill-as-Core" architecture.
- Skills (skills/): The heart of the project. They contain expert workflows that enhance any agent's ability to perform complex tasks.
- Commands (commands/): A user-facing "control panel" that provides convenient shortcuts to directly invoke specific skills.
- The Agent: The agent is empowered by this ecosystem. Whether responding to a general prompt or a specific command, it can use its available skills and subagents to access these expert workflows when needed.
This configuration is composed of several key components that work together.
- code-reviewer: A subagent for in-depth code and spec-compliance reviews. Invoke with @code-reviewer.
- implementer: Implements a single, well-defined task from a plan.
- plan-reviewer: Enforces standards and distills architectural plans into strict specs. Designed specifically for the local configuration, where reasoning is decoupled from building.
When using the local configuration (opencode.local.example.json), OpenCode utilizes a decoupled pipeline to maximize hardware efficiency across three specialized local models. This setup is specifically optimized to work out-of-the-box with the llm-local-setup repository:
- The Architect (DeepSeek R1): Runs as the plan agent. It focuses 100% of compute on deep reasoning, exploring edge cases without being constrained by syntax.
- The Refiner (Gemma 4): Runs as the plan-reviewer primary agent. It acts as a Logical Circuit Breaker, distilling the Architect's messy thoughts into strict, executable technical specifications.
- The Builder (Qwen 3.6): Runs as the build and implementer agents. Because the logic is locked in, it dedicates its context window to syntactical perfection and repository-level integration.
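In opencode.json, this decoupling amounts to pointing each agent at a different local model. The sketch below is hypothetical: the field layout follows OpenCode's agent configuration, but the model identifiers are placeholders, and the shipped opencode.local.example.json may differ.

```json
{
  "$schema": "https://opencode.ai/config.json",
  "agent": {
    "plan": { "model": "llama-swap/deepseek-r1" },
    "plan-reviewer": { "model": "llama-swap/gemma" },
    "build": { "model": "llama-swap/qwen" },
    "implementer": { "model": "llama-swap/qwen" }
  }
}
```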
A collection of expert workflows in the skills/ directory:
- brainstorming: A structured process for exploring ideas and refining them into concrete designs (presented in-chat).
- completing-work: Verifies work is done and proposes a commit message before marking tasks complete.
- context7-mcp: Fetches up-to-date library and framework documentation via the Context7 MCP tool.
- executing-plans: A systematic way to execute implementation plans.
- systematic-debugging: A disciplined process for identifying and resolving bugs root-cause-first.
- test-driven-development: A guide for writing tests before implementation code.
- using-expert-mode: Establishes how to find and use skills (this is the core skill loaded on session start).
- writing-plans: Creates detailed, bite-sized implementation plans (presented in-chat).
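For orientation, a skill is typically a markdown file with a short frontmatter block describing when it applies, followed by the workflow itself. A minimal hypothetical sketch (the actual files in skills/ are richer):

```markdown
---
name: systematic-debugging
description: A disciplined, root-cause-first process for resolving bugs.
---

# Systematic Debugging

1. Reproduce the failure and capture the exact error output.
2. Form a hypothesis about the root cause before touching code.
3. Verify the fix with a test that fails without it.
```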
User-facing shortcuts in the commands/ directory that invoke skills.
- /brainstorm: Kicks off the brainstorming skill.
- /write-plan: Starts the writing-plans skill.
- /execute-plan: Begins the executing-plans skill.
- /debug: Starts the systematic-debugging skill.
- /review: Runs an isolated review using the code-reviewer agent.
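Each command is essentially a small prompt template that routes the user's argument into a skill. A hypothetical sketch of what commands/debug.md might contain (the real file may differ):

```markdown
---
description: Start a root-cause-first debugging session.
---

Use the systematic-debugging skill to investigate: $ARGUMENTS
```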
Always-active instruction files in the rules/ directory provide constant guidance to the agent.
- expert-mode.md: Establishes the Expert Mode identity, requiring explicit workflow guidance for substantive work.
- context7.md: Nudges the agent to use Context7 for up-to-date library and framework documentation when external APIs or frameworks matter.
This repository's root is designed to be your OpenCode configuration directory.
.
├── AGENTS.example.md # A template for your local agent rules.
├── agents/ # Definitions for specialized subagents (e.g., code-reviewer).
├── commands/ # User-facing slash commands that invoke skills.
├── opencode.geminicli.example.json # Gemini-only provider config example.
├── opencode.github.example.json # GitHub Copilot provider config example.
├── opencode.hybrid.example.json # Mixed provider config example (e.g., Gemini + GitHub Copilot).
├── opencode.local.example.json # Local llama-swap provider config example.
├── rules/ # Always-active instruction files (e.g., Expert Mode, Context7).
├── skills/ # The core skills that define expert workflows.
└── tui.json # TUI-specific settings.