Quant Review Claude Skill

A Claude Code skill that rigorously audits trading strategies, backtest code, and quantitative finance systems for correctness, realism, and production-readiness.

What It Catches

  • Look-Forward (Look-Ahead) Bias — future data leaking into past decisions (shift(-1), bfill(), full-column statistics, point-in-time violations)
  • Long/Short Asymmetry — return calculations, position sizing, or cost modeling that only works for one direction
  • Backtest vs Live Divergence — duplicated signal logic, unrealistic fill assumptions, state management gaps
  • Overfitting & Data Snooping — KFold cross-validation on time-series data, too many free parameters, no walk-forward validation
  • Unrealistic Transaction Costs — zero slippage, missing bid-ask spread, no market impact modeling
  • Survivorship & Selection Bias — current index membership applied historically, delisted stocks missing
  • Time-Series & Statistical Pitfalls — non-stationary features, incorrect annualization, regime blindness
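As a concrete illustration of the transaction-cost pitfalls above, here is a minimal sketch of charging spread and slippage on every position change. The function name and the `spread_bps`/`slippage_bps` parameters are illustrative, not part of the skill itself:

```python
import pandas as pd

def apply_costs(gross_returns: pd.Series, positions: pd.Series,
                spread_bps: float = 5.0, slippage_bps: float = 2.0) -> pd.Series:
    """Charge half the bid-ask spread plus slippage on every unit of turnover."""
    # Turnover = absolute change in position; the first bar pays for entry.
    turnover = positions.diff().abs().fillna(positions.abs())
    cost_per_unit = (spread_bps / 2 + slippage_bps) / 1e4
    return gross_returns - turnover * cost_per_unit

positions = pd.Series([0.0, 1.0, 1.0, -1.0, 0.0])
gross = pd.Series([0.0, 0.01, 0.002, -0.004, 0.0])
net = apply_costs(gross, positions)
```

Note that flipping from long to short (row 3) pays for two units of turnover, which a naive "flat fee per trade" model misses.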

Installation

Quick Install (Recommended)

curl -fsSL https://raw.githubusercontent.com/friday-james/quant-review-claude-skill/main/install.sh | bash

Or with wget:

wget -qO- https://raw.githubusercontent.com/friday-james/quant-review-claude-skill/main/install.sh | bash

Manual Install

mkdir -p ~/.claude/skills/quant-review
curl -o ~/.claude/skills/quant-review/SKILL.md https://raw.githubusercontent.com/friday-james/quant-review-claude-skill/main/skills/quant-review/SKILL.md

Or clone the repository:

git clone https://github.com/friday-james/quant-review-claude-skill.git
cd quant-review-claude-skill
mkdir -p ~/.claude/skills/quant-review
cp skills/quant-review/SKILL.md ~/.claude/skills/quant-review/

Then restart Claude Code to load the skill.

Usage

Invoke the skill in Claude Code:

/quant-review

The reviewer will audit your code following this order:

  1. Architecture audit — Is backtest and live code properly unified?
  2. Data pipeline audit — Point-in-time correctness, no future data leakage
  3. Signal generation audit — Shift directions, rolling window boundaries
  4. Execution model audit — Fill assumptions, slippage, costs
  5. Evaluation audit — Proper train/test splits, walk-forward validation
  6. Risk management audit — Position sizing, drawdown limits
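The architecture audit's "unified backtest and live code" check can be sketched as a single signal function consumed by both paths, so the logic cannot silently diverge. Function names and the momentum signal below are illustrative:

```python
import pandas as pd

def compute_signal(prices: pd.Series, window: int = 20) -> pd.Series:
    """Momentum signal using only data available at each timestamp."""
    returns = prices.pct_change()
    # shift(1) ensures the signal at t is built from bars up to t-1.
    return returns.rolling(window, min_periods=window).mean().shift(1)

def run_backtest(prices: pd.Series) -> pd.Series:
    # Historical path: full signal series.
    return compute_signal(prices)

def latest_live_signal(prices: pd.Series) -> float:
    # Live path reuses the exact same function; only the slice differs.
    return compute_signal(prices).iloc[-1]
```

Because both paths call `compute_signal`, the last backtest value and the live value agree by construction, which is the property the audit is checking for.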

Each issue is reported with a severity level:

  • CRITICAL — Will cause P&L discrepancy between backtest and live
  • HIGH — Significant bias that inflates backtest performance
  • MEDIUM — Suboptimal practice that may cause issues at scale
  • LOW — Best-practice suggestion for maintainability

Example

Given code like:

df['signal'] = df['price'].pct_change().shift(-1)
df['zscore'] = (df['price'] - df['price'].mean()) / df['price'].std()

The reviewer will flag:

[CRITICAL] LOOK-FORWARD BIAS: strategy.py:12
shift(-1) leaks tomorrow's return into today's signal. Use shift(1).
→ Fix: df['signal'] = df['price'].pct_change().shift(1)

[CRITICAL] LOOK-FORWARD BIAS: strategy.py:13
Full-column mean/std uses future data. Use expanding or rolling window.
→ Fix: df['zscore'] = (df['price'] - df['price'].expanding().mean()) / df['price'].expanding().std()
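A quick way to convince yourself the suggested fix is point-in-time: the expanding z-score at row i must be reproducible from rows 0..i alone. A minimal check (prices are made up):

```python
import pandas as pd

df = pd.DataFrame({'price': [100.0, 101.0, 99.0, 102.0, 103.0]})
df['zscore'] = (df['price'] - df['price'].expanding().mean()) / df['price'].expanding().std()

# Recompute row 3 using only the first four prices. If the result matches,
# no future data was involved in the expanding version.
past = df['price'].iloc[:4]
manual = (past.iloc[-1] - past.mean()) / past.std()
```

The same check against the original full-column `mean()`/`std()` would fail, since those statistics change whenever later rows change.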

Uninstall

rm -rf ~/.claude/skills/quant-review/

Then restart Claude Code.

Contributing

PRs welcome. If you've been burned by a silent quant bug that isn't covered, open an issue.

License

MIT License
