30 reusable PM skills for Claude Code — repeatable decision-making workflows that produce structured artifacts (scorecards, frameworks, roadmaps) grounded in attributed insights from product leaders.
```
┌───────────────────────────────────────────────────────────────┐
│ Your Company Repo                                             │
│                                                               │
│ context/            .claude/skills/            applied/       │
│ ├── company/        ├── customer-discovery     ├── strategic  │
│ ├── competitive/    ├── feature-prioritization ├── planning   │
│ ├── products/       ├── north-star-metrics     └── ...        │
│ ├── verticals/      ├── ... (30 skills)                       │
│ └── signals/        └── ... (+ company skills)                │
│       ▲                      │                                │
│       │                      ▼                                │
│ collectors/           skill-graph.yaml                        │
│ (auto-refresh)        (dependency map)                        │
│                                                               │
│ pm-playbooks/ ←── git submodule ──────────────────            │
└───────────────────────────────────────────────────────────────┘
```
```bash
git clone https://github.com/ashstep2/pm-playbook.git
cd pm-playbooks
# Open in Claude Code and invoke any skill by name
```

No configuration required. Skills work out of the box by asking for context interactively. Add a `context/` directory to ground outputs in your company's data.
New here? Start with the Getting Started Guide — pick a journey that matches your situation and run it. Or browse example outputs to see what skills produce.
| | Generic prompts | Consulting frameworks | pm-playbooks |
|---|---|---|---|
| Structured output | Free-form text | PDF decks | Markdown artifacts with sections, tables, scorecards |
| Attributed principles | "Best practice says..." | Proprietary IP | Named sources with direct quotes and URLs |
| Context-aware | Starts from zero | Manual input | Reads context/ directory, fills gaps conversationally |
| Composable | One-shot | Siloed engagements | Skills chain via skill-graph.yaml |
| Evolving | Static | Annual updates | /learn ingests new content, /improve reflects on runs |
| Signal-fed | No live data | Expensive research | Auto-collects from GitHub, HN, Reddit, news APIs |
### Strategic

| Skill | What It Produces |
|---|---|
| product-portfolio-strategy | Multi-product roadmap + resource allocation across bets |
| measuring-product-market-fit | PMF scorecard + signal tracking + pivot/persevere framework |
| competitive-response | Threat assessment + response playbook + moat analysis |
| research-to-product-pipeline | Translation framework: research breakthroughs → shipped products |
| platform-vs-application | Build/buy/partner analysis + platform economics + ecosystem design |
| vertical-market-assessment | TAM/SAM/SOM + vertical prioritization matrix + entry strategy |
| product-narrative-strategy | Product vision doc + strategy narrative + roadmap story + customer context brief |
### Planning

| Skill | What It Produces |
|---|---|
| zero-to-one-product-launch | Launch checklist + channel strategy + success metrics + rollback plan |
| feature-prioritization | Weighted scoring matrix + sequenced roadmap + trade-off analysis |
| pricing-and-monetization | Pricing model analysis + willingness-to-pay + packaging strategy |
| north-star-metrics | Metric tree + leading/lagging indicators + dashboard design |
| writing-prds-for-ai | PRD template + uncertainty handling + eval criteria for AI outputs |
| experiment-design | Experiment briefs + statistical plans + growth loop design + results synthesis |
### Execution

| Skill | What It Produces |
|---|---|
| customer-discovery | Interview guide + synthesis framework + discovery report |
| developer-experience-audit | Friction scorecard + prioritized DX recommendations |
| user-onboarding-optimization | Onboarding flow audit + activation metrics + improvement roadmap |
| stakeholder-alignment | RACI matrix + communication plan + research-product interface |
| prototype-driven-validation | Prototype brief + variation matrix + customer test plan + build/kill decision |
| iteration-cadence-design | Cadence architecture + weekly PM calendar + ritual design + adaptation triggers |
### Agent-First

| Skill | What It Produces | Persona |
|---|---|---|
| agent-surface-audit | Agent accessibility scorecard + surface inventory + remediation roadmap | Developer / All |
| mcp-design-review | Per-tool scorecard + security audit + description quality assessment | Developer |
| agent-journey-mapping | Agent journey maps + I/O contracts + failure mode analysis | Developer / Ops |
| agent-ready-gtm | Agent buyer journey + trust center API + pricing surface design | Enterprise Buyer |
| agent-integration-design | I/O specs + webhook design + orchestration compatibility matrix | Ops / Workflow |
| agent-consumer-experience | Agent interaction map + consent framework + preference API | End User |
### Analysis

| Skill | What It Produces |
|---|---|
| user-segmentation | Segment profiles + prioritization matrix + cross-segment insights |
| ecosystem-health | Ecosystem scorecard + benchmark comparison + growth playbook |
| api-design-review | Per-endpoint assessment + standards compliance + agent readiness |
| partnership-evaluation | Partner scorecards + deal structures + comparative assessment |
| product-quality-review | Quality scorecard + taste gap analysis + competitive craft comparison + quality roadmap |
Skills feed into each other. After completing a skill, the system suggests what to run next based on skill-graph.yaml.
```mermaid
graph LR
    subgraph Strategic
        PPS[product-portfolio-strategy]
        PMF[measuring-product-market-fit]
        CR[competitive-response]
        RTP[research-to-product-pipeline]
        PVA[platform-vs-application]
        VMA[vertical-market-assessment]
        PNS[product-narrative-strategy]
    end
    subgraph Planning
        ZTO[zero-to-one-product-launch]
        FP[feature-prioritization]
        PM[pricing-and-monetization]
        NSM[north-star-metrics]
        PRD[writing-prds-for-ai]
        EXP[experiment-design]
    end
    subgraph Execution
        CD[customer-discovery]
        DXA[developer-experience-audit]
        UOO[user-onboarding-optimization]
        SA[stakeholder-alignment]
        PDV[prototype-driven-validation]
        ICD[iteration-cadence-design]
    end
    subgraph Agent-First
        ASA[agent-surface-audit]
        MDR[mcp-design-review]
        AJM[agent-journey-mapping]
        AGTM[agent-ready-gtm]
        AID[agent-integration-design]
        ACE[agent-consumer-experience]
    end
    subgraph Analysis
        US[user-segmentation]
        EH[ecosystem-health]
        ADR[api-design-review]
        PE[partnership-evaluation]
        PQR[product-quality-review]
    end
    CD --> US --> PMF --> ZTO
    CR --> FP
    CR --> PPS
    CR --> ASA
    EH --> CR
    EH --> PE
    EH --> DXA
    NSM --> FP
    PPS --> FP
    PPS --> RTP
    FP --> PRD --> ZTO
    FP --> ZTO
    ZTO --> UOO
    ZTO --> SA
    VMA --> CD
    DXA --> ADR
    DXA --> ASA
    ADR --> MDR
    ADR --> ASA
    PE --> PPS
    RTP --> PRD
    ASA --> MDR
    ASA --> AJM
    ASA --> AGTM
    MDR --> AJM
    MDR --> AID
    AJM --> AID
    AJM --> ACE
    AGTM --> ZTO
    ACE --> UOO
    CD --> PDV
    FP --> PDV
    PDV --> PRD
    PDV --> SA
    PDV --> ZTO
    PDV --> EXP
    PNS --> SA
    PNS --> ZTO
    PNS --> FP
    CD --> PNS
    CR --> PNS
    NSM --> EXP
    EXP --> PMF
    EXP --> UOO
    ICD --> PDV
    ICD --> EXP
    DXA --> PQR
    PQR --> FP
    PQR --> PDV
```
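As a rough sketch of what such a dependency map could encode, a couple of entries from the graph above might look like the YAML below. The field names here are assumptions for illustration; the real schema is whatever skill-graph.yaml defines.

```yaml
# Hypothetical shape of skill-graph.yaml entries (illustrative, not the real schema)
skills:
  customer-discovery:
    category: execution
    suggests-next:
      - user-segmentation
      - prototype-driven-validation
  feature-prioritization:
    category: planning
    suggests-next:
      - writing-prds-for-ai
      - zero-to-one-product-launch
```

The `suggests-next` edges above mirror the arrows in the graph (e.g. customer-discovery feeds user-segmentation and prototype-driven-validation).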
```bash
git clone https://github.com/ashstep2/pm-playbook.git
# Open in Claude Code → invoke any skill by name
```

Skills ask for context interactively. No setup required.
```bash
# From your project root
git submodule add https://github.com/ashstep2/pm-playbook.git
bash pm-playbooks/scaffold/install.sh
```

This symlinks all 30 skills into `.claude/skills/` and generates a `CLAUDE.md` for your project.
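For intuition, the symlink step is roughly equivalent to the Python sketch below. The submodule-internal path and the function name are assumptions; the actual logic lives in scaffold/install.sh.

```python
# Rough Python equivalent of install.sh's symlink step (illustrative only).
# Assumes the submodule keeps its skills under pm-playbooks/.claude/skills/.
from pathlib import Path


def link_skills(submodule_dir: str = "pm-playbooks",
                target_dir: str = ".claude/skills") -> None:
    """Symlink every skill directory from the submodule into .claude/skills/."""
    target = Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    for skill in sorted(Path(submodule_dir, ".claude", "skills").iterdir()):
        if skill.is_dir():
            link = target / skill.name
            if not link.exists():
                # Absolute symlink so the link survives being opened from anywhere
                link.symlink_to(skill.resolve())
```

Using symlinks (rather than copies) means a `git submodule update` refreshes every skill in place.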
```bash
git submodule add https://github.com/ashstep2/pm-playbook.git
bash pm-playbooks/scaffold/install.sh

# Fill in company context
mkdir -p context/{company,competitive,products,verticals,founders,signals}
# Add markdown files to each directory (see scaffold/ for templates)

# Set up signal collectors (optional)
cp pm-playbooks/signals.yaml.example signals.yaml
# Edit signals.yaml, add API keys to .env
pip install pyyaml requests PyGithub praw
cd pm-playbooks && python3 -m collectors.run --config ../signals.yaml
```

Skills work at every level — more context produces more grounded output, but nothing breaks without it.
| Setup Level | Behavior |
|---|---|
| No `context/` | Skills ask for information interactively |
| Partial `context/` | Use what's available, ask for the rest |
| Full `context/` + signals | Fully grounded, no questions needed |
Two meta-skills in `_meta/` help the system evolve:

- `/learn` — Ingest an article, podcast transcript, or book excerpt. Extracts insights, filters for novelty, and writes improvement proposals to `_meta/proposals/` for human review. Proposals are never auto-applied.
- `/improve` — Run after any skill to score its effectiveness (Instruction Clarity, Context Sufficiency, Artifact Usefulness) and propose one concrete improvement. Reflections accumulate in `applied/_reflections.md` and surface recurring patterns.
See CONTRIBUTING.md for how proposals become PRs.
The collectors/ framework auto-collects public data into context/signals/.
| Collector | Auth Required | What It Collects |
|---|---|---|
| `hackernews` | None | HN stories matching configured keywords |
| `github` | `GITHUB_TOKEN` | Repo stars, forks, issues, PRs for tracked repos |
| `reddit` | `REDDIT_CLIENT_ID` + `REDDIT_CLIENT_SECRET` | Posts from configured subreddits |
| `news` | `NEWS_API_KEY` | Articles from NewsAPI |
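To make "stories matching configured keywords" concrete, here is a minimal sketch of what a collector's filtering step might look like. The function names and the story dict shape are assumptions for illustration, not the repo's actual code.

```python
"""Illustrative sketch of keyword filtering, as a hackernews-style collector might do it."""


def matches_keywords(story: dict, keywords: list[str]) -> bool:
    """True if any configured keyword appears in the story title (case-insensitive)."""
    title = (story.get("title") or "").lower()
    return any(kw.lower() in title for kw in keywords)


def filter_stories(stories: list[dict], keywords: list[str]) -> list[dict]:
    """Keep only stories whose titles mention a configured keyword."""
    return [s for s in stories if matches_keywords(s, keywords)]


if __name__ == "__main__":
    sample = [
        {"title": "Show HN: An MCP server for product analytics", "points": 120},
        {"title": "Ask HN: Favorite mechanical keyboard?", "points": 45},
    ]
    print([s["title"] for s in filter_stories(sample, ["MCP", "agent"])])
```

Filtered results would then be written as markdown into context/signals/ for skills to read.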
```bash
cd pm-playbooks && python3 -m collectors.run --config ../signals.yaml
```

Config lives in `signals.yaml` at your repo root. See `signals.yaml.example` for the format.
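For orientation, a signals.yaml might look roughly like the sketch below. The exact keys and values are assumptions; signals.yaml.example in the repo is the authoritative format.

```yaml
# Illustrative only — see signals.yaml.example for the real format
collectors:
  hackernews:
    keywords: ["mcp", "ai agents", "developer tools"]
  github:
    repos: ["anthropics/claude-code"]   # example repo, substitute your own
  reddit:
    subreddits: ["ProductManagement"]
  news:
    queries: ["product management AI"]
```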
Each skill follows the Anthropic Agent Skills spec. See SKILL_FORMAT.md for the full reference. Skills include:
- Core Principles — attributed insights with direct quotes and source URLs
- Instructions — step-by-step workflow producing concrete artifacts
- Diagnostic Questions — assess the situation before diving in
- Common Mistakes — what PMs get wrong and how to avoid it
- Context Integration — how the skill uses company data
See CONTRIBUTING.md for how to add or improve skills.
MIT