guidefanti/claude-code-dev-guardrails

Claude Code Dev Guardrails

A structured skill for Claude Code that transforms AI-assisted development from "generate code that compiles" into "build features that make sense within a real product."


The Problem

AI coding tools are excellent at writing code. They are terrible at thinking about the code they write.

When you ask an AI to "add a settings page," it creates one — without checking if a preferences screen already exists, without questioning whether the navigation makes sense, without testing if the feature actually works, and without considering what happens to production data.

The result: code that compiles but creates fragmented products, duplicated features, orphaned screens, and functionality that was never actually verified beyond "no type errors."

Dev Guardrails fixes this. It's a set of mandatory rules and processes that the AI must follow before, during, and after every development task — turning it from an obedient code generator into something closer to a senior engineer who thinks before acting.


What It Does

Forces the AI through a disciplined 5-phase process for every task:

Phase 0 — Product Thinking. Before writing any code, the AI maps the existing product structure, checks for overlapping features, audits terminology consistency, and flags incoherent requests. If adding a "Settings" page would conflict with an existing "Preferences" overlay, the AI says so before implementing.

Phase 1 — Understand Before Acting. The AI reads all affected files, identifies ambiguities, estimates complexity, and drafts a plan with specific files, layers, and potential side effects. If the request is unclear, it asks instead of guessing.

Phase 2 — Implementation Discipline. Surgical edits (not file rewrites), security by default (input validation, auth checks, no hardcoded secrets), performance awareness (N+1 queries, unnecessary re-renders), data migration planning (what happens to existing production data?), and concurrency checks (what if two webhooks fire simultaneously?).

Phase 3 — Real Testing. This is where most AI sessions fail. Instead of just running tsc --noEmit and declaring victory, the AI verifies at 5 levels: static analysis → build → functional validation (does it actually work?) → user flow validation (can a user find and use it?) → regression check (did existing features break?). The AI is also instructed to write tests for new features, not just run existing ones.

Phase 4 — Documentation. Update changelogs, flag breaking changes, add TODO entries for placeholder features, and summarize what was done with specific verification results.
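The Phase 3 hierarchy can be pictured as a gate script that stops at the first failing level. Everything in this sketch is an assumption for illustration: the real commands (shown in comments) belong to a hypothetical TypeScript project, and each level's placeholder `true` stands in for your stack's actual tooling.

```shell
#!/bin/sh
# Sketch of Phase 3 as a verification gate. With `set -e`, the script
# aborts at the first failing level, so later levels never run on a
# broken foundation. Commands in comments are hypothetical examples.
set -e

level() {
  printf 'Level %s: %s\n' "$1" "$2"
  shift 2
  "$@"
}

level 1 "static analysis"        true  # e.g. npx tsc --noEmit && npx eslint src/
level 2 "build"                  true  # e.g. npm run build
level 3 "functional validation"  true  # e.g. npm test -- settings-page
level 4 "user flow validation"   true  # e.g. npx playwright test e2e/settings.spec.ts
level 5 "regression check"       true  # e.g. npm test (full suite)
```

In a real session the AI runs these levels itself; the script only makes the ordering explicit: nothing at level N+1 matters until level N passes.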


What's Inside

dev-guardrails/
├── SKILL.md                        # Main skill file — 5 phases + session protocol
└── references/
    ├── anti-patterns.md            # 20 cataloged failure patterns in 3 categories
    ├── checklists.md               # 10 concrete checklists for every stage
    ├── product-thinking.md         # Product/UX thinking framework
    └── testing-strategy.md         # 5-level verification hierarchy

anti-patterns.md catalogs 20 recurring AI mistakes:

  • Code-level (12): silent deletions, phantom APIs, unsolicited refactors, happy-path-only implementations, type escape hatches, and more.
  • Product-level (5): tunnel vision, Frankenstein UIs, silent assumptions, orphan features, shallow execution.
  • Infrastructure (3): naked schema changes, race condition blind spots, the "tsc-is-enough" fallacy.

Each pattern includes: what happens, why it happens, and the exact corrective behavior.


Installation

1. Add the skill to your project

Copy the dev-guardrails/ folder into your Claude Code skills directory:

# Option A: Project-level (recommended — version-controlled with your repo)
cp -r dev-guardrails/ your-project/.claude/skills/dev-guardrails/

# Option B: Global (shared across all projects)
cp -r dev-guardrails/ ~/.claude/skills/dev-guardrails/

2. Make it mandatory via CLAUDE.md

Add the following to your project's CLAUDE.md (create it at the project root if it doesn't exist):

## Mandatory Skills

Before ANY development task — coding, debugging, refactoring, creating files, or any technical
implementation — you MUST read and follow `.claude/skills/dev-guardrails/SKILL.md` and its
reference files. This is non-negotiable and applies to every task, including "quick fixes."

Do NOT write any code before completing Phase 0 (Product Thinking) and Phase 1 (Understand
Before Acting) from the dev-guardrails skill.

This ensures the AI reads the guardrails automatically at the start of every session. Without this step, you'd have to manually reference the skill each time.

3. Verify it works

Start a new Claude Code session and give it any development task. The AI should:

  • Read the skill files before doing anything
  • Ask clarifying questions if the request is ambiguous
  • Present a plan before implementing
  • Run multiple levels of verification after implementing

If it jumps straight into coding without reading the skill, check that the path in CLAUDE.md matches where you placed the files.
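A quick way to rule out a path mismatch is a one-line existence check. This is a sketch, run from the project root, assuming the project-level install from Option A; adjust the path if you installed globally.

```shell
#!/bin/sh
# Confirm the skill file exists where CLAUDE.md points. The path below
# is an assumption based on the Option A (project-level) install.
check_skill() {
  if [ -f "$1/SKILL.md" ]; then
    echo "skill found: $1/SKILL.md"
  else
    echo "skill missing: check the path in CLAUDE.md"
  fi
}

check_skill .claude/skills/dev-guardrails
```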


How It Compares to Prompting

You might be thinking: "Can't I just tell the AI to be more careful?"

You can. It won't stick. Here's why:

  • Prompts are forgotten. In long sessions, earlier instructions get pushed out of the AI's effective context. A skill file is re-read at the start of every task.
  • Prompts are vague. "Think about the product" is a suggestion. "Map all existing navigation paths, identify overlapping features, verify terminology consistency, and flag inconsistencies before implementing" is a process.
  • Prompts aren't cumulative. Dev Guardrails is the result of cataloging dozens of real failure patterns across production projects. Each anti-pattern, checklist item, and verification step addresses a specific, observed failure mode.

FAQ

Does this work with projects in any language/framework? Yes. The guardrails are language-agnostic and framework-agnostic. They apply equally to React apps, Python services, Rust crates, n8n workflows, Supabase projects, or anything else. There are specific checklists for common stacks (n8n, Supabase), but the core process works universally.

Will this slow down the AI? Yes, slightly. The AI will ask more questions, read more files, and run more verification steps. This is the point. The time "lost" on discipline is recovered many times over by not having to debug broken features, undo regressions, or restructure fragmented UIs.

Can I customize it for my project? Absolutely. Add project-specific checklists, anti-patterns, or terminology standards to the reference files. The structure is modular — you can extend any file without affecting the others.

Does this work with other AI coding tools (Cursor, Copilot, etc.)? The skill format (.md files read by the AI) is specific to Claude Code. However, the content — the anti-patterns, checklists, and processes — can be adapted into system prompts or rules files for other tools. The principles are universal.

How do I contribute? See CONTRIBUTING.md.


License

MIT
