Your AI coding agent keeps breaking things? It's not the model — it's the context.
MetaSpec is a lightweight, editor-agnostic methodology that structures your project documentation so AI agents (Cursor, Copilot, Codex, Claude Code, etc.) stop guessing and start executing.
4 Markdown files. Zero dependencies. Drop it into any project.
AI coding agents fail in predictable ways — and none of them are about "intelligence":
| Failure Mode | Root Cause | What Happens |
|---|---|---|
| Style drift | Agent doesn't know your conventions | Generated code clashes with existing codebase |
| Contract breakage | Changed backend, forgot frontend | API mismatch, integration fails |
| Doc rot | Code changed, docs didn't | Next agent reads stale info, errors compound |
| Context overload | Flat docs dumped into context | Token waste, key info buried |
| Blind coding | Agent starts writing before reading | Reinvents existing utilities, ignores architecture |
These are not model problems. These are context engineering problems.
MetaSpec solves them with 10 principles, a 6-layer doc architecture, and a 5-phase development loop — all encoded in 4 files that any AI agent can read.
Instead of one giant AGENTS.md, organize docs into layers. Agents load only what they need:
```
        ┌───────────────┐
        │  L1 Strategy  │  WHY:   Why this project exists
        ├───────────────┤
        │  L2 Arch      │  WHAT:  System components & decisions
    ┌───┴───────────────┴───┐
    │  L3 Contract          │  HOW:   API specs (single source of truth)
    ├───────────────────────┤
    │  L4 Implementation    │  HOW:   Coding standards & workflows
┌───┴───────────────────────┴───┐
│  L5 Execution                 │  WHEN:  Task specs & schedules
├───────────────────────────────┤
│  L6 Quality                   │  CHECK: Test plans & reports
└───────────────────────────────┘
```
| Task Type | Layers Needed | Files |
|---|---|---|
| Bug fix | L3 + L4 | 2-3 |
| Single-module feature | L3 + L4 + L5 | 4-6 |
| Cross-module feature | L2 + L3 + L4 + L5 | 6-10 |
| Architecture decision | L1 + L2 | 3-5 |
A bug fix loads 2-3 files. Not your entire docs/ folder.
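The loading rule in the table above can be pictured as a simple lookup. This is an illustrative sketch only, not part of MetaSpec's shipped tooling; the task-type keys and the `layers_for` helper are assumptions:

```python
# Illustrative: map each task type to the doc layers an agent should load.
# The mapping mirrors the table above; key names are hypothetical.
CONTEXT_MAP = {
    "bug_fix":       ["L3", "L4"],
    "single_module": ["L3", "L4", "L5"],
    "cross_module":  ["L2", "L3", "L4", "L5"],
    "architecture":  ["L1", "L2"],
}

def layers_for(task_type: str) -> list[str]:
    """Return the doc layers to load for a given task type."""
    return CONTEXT_MAP[task_type]
```

The point of the sketch: layer selection is a deterministic function of task type, so an agent never has to guess which docs to read.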
Every task follows the same loop. No phase can be skipped:
```
Phase 0       Phase 1        Phase 2        Phase 3       Phase 4
Task Anchor → Context Load → Design First → Atomic Impl → Verify & Sync
                                  ▲                            │
                                  └────────────────────────────┘
                                    Back-patch docs if the
                                    implementation diverged
```
The key insight: Phase 4 feeds back to Phase 2. If the implementation deviated from the design, you fix the docs before marking done. This prevents doc rot at the source — not with periodic cleanup, but as part of every task.
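The "no skipping, Phase 4 feeds back" rule could be expressed as a simple gate. A hedged sketch, not MetaSpec tooling; the phase names come from the loop above, but `run_task` and its return strings are hypothetical:

```python
# The 5 phases of the standard loop, in mandatory order.
PHASES = ["Task Anchor", "Context Load", "Design First",
          "Atomic Impl", "Verify & Sync"]

def run_task(completed: list[str], design_diverged: bool) -> str:
    """Check a task against the 5-phase loop before marking it done."""
    if completed != PHASES:
        # Phases must run in order, and none may be skipped.
        return "blocked: phases must run in order, none skipped"
    if design_diverged:
        # Phase 4 feeds back to Phase 2: back-patch the design docs first.
        return "blocked: back-patch the design docs before marking done"
    return "done"
```

A task that skipped Phase 2, or whose implementation drifted from the design, never reaches "done" without first repairing the docs.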
Global workflow defines the base rules. Each module inherits and specializes:
```
Global workflow.md              (base: 5-phase loop + universal rules)
├── frontend/workflow.md        (inherits + adds: component standards, npm build)
├── backend/workflow.md         (inherits + adds: DTO rules, mvn package)
└── {module}/workflow.md        (inherits + adds: module-specific constraints)
```
Rules: inherit all phases (can't delete any), specialize by adding (never contradicting). One file per module gives the agent a complete instruction set.
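One way to picture those two rules (inherit everything, add but never contradict) is as a merge function. An illustrative sketch under stated assumptions; the rule keys and `merge_workflow` itself are hypothetical, not part of MetaSpec:

```python
def merge_workflow(base: dict, module: dict) -> dict:
    """Combine a global workflow with a module's specialization.

    Modules may add rules, but may never delete a base rule or
    contradict one (the MetaSpec inheritance constraint).
    """
    merged = dict(base)  # every base rule is inherited
    for key, rule in module.items():
        if key in base and base[key] != rule:
            raise ValueError(f"module rule contradicts base: {key}")
        merged[key] = rule  # specialization by addition only
    return merged

# Hypothetical rule sets for illustration:
global_wf = {"phases": 5, "docs_first": True}
frontend_wf = merge_workflow(global_wf, {"build": "npm run build"})
```

A module that tried to redefine `phases` would fail the merge, which is exactly the behavior the inheritance rules demand.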
```bash
git clone https://github.com/glowdan/metaspec.git ~/.skills/metaspec
```

Then reference it as a skill in your editor. The agent reads MetaSpec's principles before starting any task.
Copy the 4 core files into your project's docs/ and adapt:
```bash
cp principles.md layers.md workflow.md SKILL.md /path/to/your-project/docs/metaspec/
```

A new project needs just 3 files to get MetaSpec running:
```
docs/
├── brief.md           # L1: What this project is and why
├── workflow.md        # L4: How we work (inherits MetaSpec's 5-phase loop)
└── tasks/
    └── README.md      # L5: What to build
```
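Scaffolding that layout takes a few lines. A minimal sketch; the placeholder headings written into each file are assumptions for illustration, not MetaSpec-mandated content:

```python
from pathlib import Path

def scaffold(root: str) -> None:
    """Create the minimal 3-file MetaSpec layout under <root>/docs/."""
    docs = Path(root) / "docs"
    (docs / "tasks").mkdir(parents=True, exist_ok=True)
    # Placeholder contents; adapt each file to your project.
    (docs / "brief.md").write_text("# Brief\n\nWhat this project is and why.\n")
    (docs / "workflow.md").write_text(
        "# Workflow\n\nInherits MetaSpec's 5-phase loop.\n")
    (docs / "tasks" / "README.md").write_text("# Tasks\n\nWhat to build.\n")
```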
| # | Principle | One-liner |
|---|---|---|
| 1 | Docs as Truth | Change docs first, then code. Always. |
| 2 | Layered Context | 6-layer pyramid, load by task type |
| 3 | Standard Loop | 5-phase loop, no skipping |
| 4 | Contract Sync | Change matrix ensures nothing is forgotten |
| 5 | Workflow Inheritance | Global base + per-module specialization |
| 6 | Task as Spec | Tasks include tech approach + acceptance criteria |
| 7 | Explicit Anti-patterns | Every workflow lists what's forbidden |
| 8 | Self-describing Navigation | READMEs with reading order, not just file lists |
| 9 | Progressive Disclosure | Map first, details later |
| 10 | Data Exemplification | Pair every schema with a concrete sample |
Full details: principles.md
OpenAI's engineering team learned this the hard way:
> "We tried the 'one big AGENTS.md' approach. Predictably, it was a failure: context is a scarce resource. A huge instruction file crowds out tasks, code, and relevant docs."
MetaSpec was designed from the start to avoid this trap:
- Progressive disclosure: AGENTS.md is a table of contents (~100 lines), not an encyclopedia
- Layered loading: agents fetch only the layers relevant to their current task
- Context unloading priority: when the window fills up, drop L1 first (most stable), keep L3+L4 (coding essentials)
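The unloading priority could be expressed as a simple eviction order. A sketch only: the source states drop L1 first and keep L3+L4 longest; the ordering of the middle layers here is my assumption, as is the `evict` helper:

```python
# Most-stable-first eviction order (middle of the list is an assumption;
# the endpoints follow MetaSpec: L1 goes first, L3/L4 stay longest).
UNLOAD_ORDER = ["L1", "L2", "L6", "L5", "L4", "L3"]

def evict(loaded: list[str], n: int) -> list[str]:
    """Drop up to n layers from `loaded`, most-stable-first."""
    to_drop = [layer for layer in UNLOAD_ORDER if layer in loaded][:n]
    return [layer for layer in loaded if layer not in to_drop]
```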
```
metaspec/
├── README.md          # This file — overview & quick start
├── README_zh-CN.md    # Chinese translation of this README
├── SKILL.md           # Skill entry point for AI editors
├── AGENTS.md          # Agent execution rules for this repo
├── principles.md      # 10 design principles + 5 meta-patterns
├── layers.md          # 6-layer architecture & loading strategy
└── workflow.md        # 5-phase loop, inheritance, contract sync
```
Reading order: principles.md → layers.md → workflow.md
- Single file: aim for 300 lines, hard cap at 500
- Directory depth: 2-3 levels max
- Contract changes: docs and code in the same commit
| Setup | How MetaSpec Fits |
|---|---|
| Monorepo | Root docs/ covers all modules, each module has its own workflow.md |
| Multi-repo | Each repo has docs/, one repo owns the API contracts |
| Any editor | Editor-agnostic — works with Cursor, Copilot, Codex, Claude Code, Windsurf, or plain text |
| Any language | Language-agnostic — the methodology applies to any tech stack |
Good fit: Teams using AI agents for development. Full-stack projects. Long-running codebases. Projects where new people (human or AI) frequently join.
Not a fit: Throwaway scripts. Solo weekend hacks. Teams that don't maintain docs (MetaSpec won't help if no one follows it).
MIT