---
title: "My Claude Code Setup"
subtitle: "A Comprehensive Guide to AI-Assisted Academic Workflows: Slides, Papers, Analysis, and Beyond"
author: "Pedro H. C. Sant'Anna"
date: "2026-03-20"
date-modified: last-modified
format:
html:
toc: true
toc-depth: 3
toc-location: left
number-sections: true
theme:
- cosmo
- custom.scss
code-copy: true
code-overflow: wrap
highlight-style: github-dark
smooth-scroll: true
self-contained: true
link-external-newwindow: true
include-in-header:
text: |
<style>
:not(pre) > code { background-color: rgba(185,151,91,0.1) !important; color: #8b6914 !important; padding: 0.15em 0.45em !important; border-radius: 3px !important; font-size: 0.87em !important; }
pre code { background-color: transparent !important; color: inherit !important; padding: 0 !important; font-size: inherit !important; border-radius: 0 !important; }
body:not(.floating):not(.docked) .page-columns.toc-left {
grid-template-columns: [screen-start] 1.5em [screen-start-inset] 5fr [page-start] 35px [page-start-inset] minmax(0px,175px) [body-start-outset] 35px [body-start] 1.5em [body-content-start] minmax(450px,calc(1100px - 3em)) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(0px,200px) [page-end-inset] 35px [page-end] 5fr [screen-end-inset] 1.5em [screen-end] !important;
}
</style>
---
# Why This Workflow Exists {#sec-why}
## The Problem
If you've ever done serious academic work --- built lecture slides, drafted a research paper, run a data analysis pipeline --- you know the pain:
- **Context loss between sessions.** You pick up where you left off, but Claude doesn't remember *why* you chose that notation, *what* the instructor approved, or *which* bugs were fixed last time.
- **Quality is inconsistent.** One slide has perfect spacing; the next overflows. One regression table has proper formatting; the next is missing standard errors. Citations compile in Overleaf but break locally.
- **Review is manual and exhausting.** You proofread 140 slides by hand. You re-read your paper for the fifth time looking for the same kinds of errors. You miss a typo in an equation. A student or referee catches it.
- **No one checks the math.** Grammar checkers catch "teh" but not a flipped sign in a decomposition theorem, a misspecified regression, or a broken replication.
This workflow solves all of these problems. You describe what you want --- "translate Lecture 5 to Quarto," "review my paper before submission," or "analyze this dataset and produce publication-ready tables" --- and Claude handles the rest: plans the approach, implements it, runs [specialized reviewers](#sec-system), fixes issues, verifies quality, and presents results. Like a contractor who manages the entire job.
## What Makes Claude Code Different
Claude Code runs on your computer with full access to your file system, terminal, and git. It works as a **CLI tool**, a **VS Code extension**, or through the **Claude Desktop app** --- same capabilities, same configuration, different interface. Here is what that enables:
| Capability | What It Means for You |
|-----------|----------------------|
| Read & edit your files | Surgical edits to `.tex`, `.qmd`, `.R` files in place |
| Run shell commands | Compile LaTeX, run R scripts, render Quarto --- directly |
| Access git history | Commits, PRs, branches --- all from the conversation |
| Persistent memory | CLAUDE.md + MEMORY.md survive across sessions |
| Orchestrator mode | Claude autonomously plans, implements, reviews, fixes, and verifies |
| Multi-agent workflows | 10 specialized agents for proofreading, layout, pedagogy, code review |
| Quality gates | Automated scoring --- nothing ships below 80/100 |
| CLI/headless mode | Run from scripts: `claude -p "compile all lectures"` |
| Browser bridge | Continue sessions on your phone via `/remote-control` |
::: {.callout-tip}
## Case Study: Econ 730 at Emory
This workflow was developed over 6+ sessions building a PhD course on Causal Panel Data. The result: 6 complete lectures (140+ slides each), with Beamer + Quarto versions, interactive Plotly charts, TikZ diagrams, and R replication scripts --- all managed by the orchestrator and reviewed by 10 specialized agents across 5 quality dimensions.
While this case study centers on slides, every component --- agents, orchestrator, quality gates --- works identically for research papers, data analysis, and proposals.
:::
## How It All Works Together
Before diving into setup, here is the key insight: **most of this workflow is automatic**. You describe what you want in plain English, and Claude figures out which tools to use.
### What You Do vs What Happens Automatically
| You Do | Happens Automatically |
|--------|----------------------|
| Describe what you want | Claude selects and runs the right skills |
| Approve plans | Orchestrator coordinates agents |
| Review final output | Hooks fire on events (edit, save, compact) |
| Say "commit" when ready | Rules load based on files you touch |
### Example: "Fix my slides before tomorrow"
```
You: "Review my lecture slides and fix all issues before tomorrow's class"
↓
Claude automatically:
→ Runs /proofread (grammar, typos, consistency)
→ Runs /visual-audit (overflow, layout, spacing)
→ Runs /pedagogy-review (narrative flow, notation clarity)
→ Synthesizes findings into prioritized fix list
→ Applies fixes (critical → major → minor)
→ Re-verifies everything compiles
→ Scores against quality gates
↓
You see: "Done. Fixed 12 issues. Score: 88/100. Ready to commit?"
You: "Yes"
↓
Claude runs /commit (only because you explicitly approved)
```
::: {.panel-tabset}
### Paper Review
```
You: "Review my paper draft and prepare it for submission"
↓
Claude automatically:
→ Runs /review-paper (argument structure, methods, citations)
→ Runs /proofread (grammar, consistency, formatting)
→ Runs /validate-bib (cross-reference citations)
→ Synthesizes findings into prioritized fix list
→ Applies fixes (critical → major → minor)
→ Re-verifies everything compiles
→ Scores against quality gates
↓
You see: "Done. Fixed 18 issues. Score: 91/100. Ready to commit?"
```
### Data Analysis
```
You: "Analyze this dataset and produce publication-ready output"
↓
Claude automatically:
→ Runs /data-analysis (explore, model, tables, figures)
→ Runs /review-r (code quality, reproducibility)
→ Produces R script, tables, and figures
→ Scores against quality gates
↓
You see: "Analysis complete. 3 tables, 4 figures. Score: 85/100."
```
:::
### What You Never Touch Directly
- **Agents** --- Specialized reviewers called by skills, not by you
- **Hooks** --- Fire automatically on events (you never run them)
- **Rules** --- Load automatically based on file paths
### Skills: The Only Commands You Might Type
Skills like `/proofread` or `/compile-latex` can be invoked two ways:
1. **Explicitly** --- You type `/proofread MySlides.tex`
2. **Automatically** --- Claude invokes them when relevant to your request
Most of the time, you just describe what you want and Claude handles the rest. Explicit skill invocation is there when you want precise control.
::: {.callout-important}
## The Bottom Line
**You talk, Claude orchestrates.** The 10 agents, 22 skills, and 18 rules exist so you don't have to think about them. Describe your goal, approve the plan, and let the system work.
:::
::: {.callout-note}
## You Don't Need All of This on Day One
This guide describes the full system --- 10 agents, 22 skills, 18 rules. That is the ceiling, not the floor. **Start with just CLAUDE.md and 2--3 skills** (`/compile-latex`, `/proofread`, `/commit`). Add rules and agents as you discover what you need. The template is designed for progressive adoption: fork it, fill in the placeholders, and start working. Everything else is there when you're ready.
:::
---
# Getting Started {#sec-setup}
Setup takes three steps: install Claude Code, fork the repo, and paste a prompt. Claude handles everything else.
## Prerequisites {#sec-prerequisites}
| Requirement | What It Is | How to Get It |
|-------------|-----------|---------------|
| **Claude Code** | The AI tool that powers this workflow | `curl -fsSL https://claude.ai/install.sh \| bash` ([docs](https://code.claude.com/docs/en/overview)) |
| **Node.js 18+** | Required to run Claude Code | [nodejs.org](https://nodejs.org) |
| **Claude account** | Authentication for Claude | [Claude Pro or Max subscription](https://claude.ai) or [Anthropic API key](https://console.anthropic.com) (pay per token) |
| **git** | Version control (for fork/clone) | Pre-installed on Mac; `brew install git` or [git-scm.com](https://git-scm.com) |
**Optional** (install only what your project uses):
| Tool | Required For | Install |
|------|-------------|---------|
| XeLaTeX | LaTeX slides | [TeX Live](https://tug.org/texlive/) or [MacTeX](https://tug.org/mactex/) |
| [Quarto](https://quarto.org) | Web slides | [quarto.org/docs/get-started](https://quarto.org/docs/get-started/) |
| R | Figures & data analysis | [r-project.org](https://www.r-project.org/) |
| [gh CLI](https://cli.github.com/) | PR workflow | `brew install gh` (macOS) |
::: {.callout-note}
## What Does This Cost?
**Claude Pro or Max subscription**: Includes Claude Code usage with generous limits. Check [claude.ai](https://claude.ai) for current pricing and tier details. Best for most academics.
**API access** (pay per token): For heavy users or CI/CD. Use [effort levels](#sec-effort) and the [multi-model strategy](#multi-model-strategy-cost-vs-quality) to control costs.
Claude Code itself is free and open source. You pay only for the Claude model usage.
:::
::: {.callout-tip}
## Day 1 Checklist
- [ ] Install Claude Code: `curl -fsSL https://claude.ai/install.sh | bash`
- [ ] Fork the repo and clone it locally
- [ ] Run `claude` in the project directory (first run will prompt for authentication)
- [ ] Paste the starter prompt (fill in your project details)
- [ ] Wait for Claude to customize `CLAUDE.md` for your project
- [ ] Approve the configuration plan
- [ ] Ask Claude to do something: *"Review my slides"* or *"Create a lecture on [topic]"*
- [ ] Approve the task plan, watch it work, review the output
That's it. Everything else — agents, hooks, rules — runs automatically in the background.
:::
::: {.callout-tip collapse="true"}
## Trouble on Day 1?
**"command not found: claude":** Install Claude Code first: `curl -fsSL https://claude.ai/install.sh | bash`. Requires Node.js 18+ (`node --version` to check).
**Claude seems to ignore the configuration files:** Make sure you ran `claude` from inside the project directory (not a parent folder). Claude reads `.claude/` and `CLAUDE.md` from the current working directory.
**Hooks not firing (no notifications, no reminders):** Check that Python 3 is installed (`python3 --version`) and hook files are executable (`chmod +x .claude/hooks/*`).
**"What does plan approval look like?"** Claude presents a numbered plan and asks for your input. Say "approved", "looks good", or "revise step 3". That's it --- no special commands needed.
For more, see [Troubleshooting](#troubleshooting) in the Appendix.
:::
## Step 1: Fork & Clone
```bash
# Fork this repo on GitHub (click "Fork" on the repo page), then:
git clone https://github.com/YOUR_USERNAME/claude-code-my-workflow.git my-project
cd my-project
```
Replace `YOUR_USERNAME` with your GitHub username.
## Step 2: Start Claude Code and Paste This Prompt {#sec-first-session}
Open your terminal in the project directory, run `claude`, and paste the following. Fill in the **bolded placeholders** with your project details:
::: {.callout-note collapse="true"}
## Using VS Code or Claude Desktop instead of the terminal?
Everything in this guide works the same in any Claude Code interface. In **VS Code**, open the Claude Code panel (click the Claude icon in the sidebar or press Cmd+Shift+P → "Claude Code: Open"). In **Claude Desktop**, open your project folder and start a local session. Then paste the starter prompt below.
The guide shows terminal commands because they are the most universal way to explain things, but every skill, agent, hook, and rule works identically regardless of which interface you use.
:::
::: {.callout-tip appearance="simple"}
## Starter Prompt
I am starting to work on **[PROJECT NAME]** in this repo. **[Describe your project in 2--3 sentences --- what you're building, who it's for, what tools you use (e.g., LaTeX/Beamer, R, Quarto).]**
I want our collaboration to be structured, precise, and rigorous --- even if it takes more time. When creating visuals, everything must be polished and publication-ready. I don't want to repeat myself, so our workflow should be smart about remembering decisions and learning from corrections.
I've set up the Claude Code academic workflow (forked from `pedrohcgs/claude-code-my-workflow`). The configuration files are already in this repo (`.claude/`, `CLAUDE.md`, templates, scripts). Please read them, understand the workflow, and then **update all configuration files to fit my project** --- fill in placeholders in `CLAUDE.md`, adjust rules if needed, and propose any customizations specific to my use case.
After that, use the plan-first workflow for all non-trivial tasks. Once I approve a plan, switch to contractor mode --- coordinate everything autonomously and only come back to me when there's ambiguity or a decision to make. For our first few sessions, check in with me a bit more often so I can learn how the workflow operates.
Enter plan mode and start by adapting the workflow configuration for this project.
:::
**What this does:** Claude will read `CLAUDE.md` and all the rules, fill in your project name, institution, Beamer environments, CSS classes, and project state table, then propose any rule adjustments for your specific use case. You approve the plan, and Claude handles the rest. From there, you just describe what you want to build.
## Optional: Manual Setup
If you prefer to configure things yourself instead of letting Claude handle it:
::: {.callout-note collapse="true"}
## Manual Configuration Steps (click to expand)
**Customize CLAUDE.md** --- Open `CLAUDE.md` and replace all `[BRACKETED PLACEHOLDERS]`:
1. **Project name and institution**
2. **Folder structure** (adjust to your layout)
3. **Current project state** (your lectures/papers)
4. **Beamer environments** (your custom LaTeX environments)
5. **CSS classes** (your Quarto theme classes)
**Create your knowledge base** --- Open `.claude/rules/knowledge-base-template.md` and fill in:
1. **Notation registry** --- every symbol you use, where it's introduced, and anti-patterns
2. **Applications database** --- datasets, papers, and R packages you reference
3. **Design principles** --- what you've approved and what you've overridden
**Configure permissions** --- Review `.claude/settings.json`. The template includes permissions for git, LaTeX, Quarto, R, and utility scripts. Add any additional tools you use.
**Test it:**
```bash
# In Claude Code, type:
/compile-latex MyFirstLecture
/proofread Slides/MyFirstLecture.tex
python scripts/quality_score.py Slides/MyFirstLecture.tex
```
:::
You don't need to fill everything in upfront. Start with 5--10 notation entries and add more as you develop lectures. The starter prompt will set up the essentials --- you can always refine later.
## Requirements Specification (For Complex Tasks) {#sec-spec-then-plan}
For complex or ambiguous tasks, Claude may ask 3--5 clarifying questions to create a requirements specification before planning. This catches ambiguity early and reduces rework.
::: {.panel-tabset}
### Slide Creation
```
You: "Improve my lecture on hypothesis testing"
Claude: "Let me clarify:
1. What needs improvement (clarity, examples, pacing)?
2. Any student feedback to address?
3. Constraints (time, notation changes)?
4. Update which outputs (slides, web version, both)?"
[Creates spec with MUST/SHOULD/MAY priorities]
[You approve]
[Then enters plan mode]
```
### Data Analysis
```
You: "Analyze the survey data"
Claude: "Let me clarify:
1. What's the research question?
2. Expected output format (tables, plots, report)?
3. Any replication targets or baselines?
4. Computational constraints (runtime, memory)?"
```
### Code Refactoring
```
You: "Clean up my analysis scripts"
Claude: "Let me clarify:
1. What's the pain point (speed, readability, bugs)?
2. Breaking changes okay or preserve compatibility?
3. Testing infrastructure available?
4. Refactor scope (one file, whole project)?"
```
:::
After 3--5 questions, Claude creates a specification document in `quality_reports/specs/` with:
- **MUST** have (non-negotiable requirements)
- **SHOULD** have (preferred features)
- **MAY** have (optional enhancements)
- **Clarity status** (CLEAR/ASSUMED/BLOCKED for each aspect)
You approve the spec, then Claude plans implementation. This reduces mid-plan pivots by 30--50%.
**Template:** `templates/requirements-spec.md`
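For instance, a finished spec might look like this (the contents below are illustrative, not taken from the actual template):

```markdown
# Spec: Improve Lecture 5 (Hypothesis Testing)

## MUST (non-negotiable)
- Rework the three slides students flagged as confusing (no notation changes)

## SHOULD (preferred)
- Add a worked example after the power-function definition

## MAY (optional)
- Interactive Plotly version of the power curve

## Clarity Status
| Aspect | Status |
|--------|--------|
| Scope | CLEAR |
| Notation freeze | ASSUMED (no reply yet) |
| Deadline | BLOCKED (awaiting instructor) |
```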
---
# The System in Action {#sec-system}
With setup covered, here is what the system actually *does*. This section walks through the three core mechanisms that make the workflow powerful: specialized agents, adversarial QA, and automatic quality scoring.
## Why Specialized Agents Beat One-Size-Fits-All
Consider proofreading a 140-slide lecture deck. You could ask Claude:
> "Review these slides for grammar, layout, math correctness, code quality, and pedagogical flow."
Claude will skim everything and catch some issues. But it will miss:
- The equation on slide 42 where a subscript changed from $m_t^{d=0}$ to $m_t^0$
- The TikZ diagram where two labels overlap at presentation resolution
- The R script that uses `k=10` covariates but the slide says `k=5`
Now compare with specialized agents:
| Agent | Focus | What It Catches |
|-------|-------|-----------------|
| `proofreader` | Grammar only | "principle" vs "principal" |
| `slide-auditor` | Layout only | Text overflow on slide 37 |
| `pedagogy-reviewer` | Flow only | Missing framing sentence before Theorem 3.1 |
| `r-reviewer` | Code only | Missing `set.seed()` |
| `domain-reviewer` | Substance | Slide says 10,000 MC reps, code runs 1,000 |
Each agent reads the same file but examines a different dimension with full attention. The `/slide-excellence` skill runs them all in parallel.
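As a rough sketch of that fan-out (hypothetical Python; `review` stands in for dispatching a real agent, which the repo does via skills, not via this API):

```python
from concurrent.futures import ThreadPoolExecutor

# The five reviewers from the table above. Each reads the same
# file but audits only its own dimension.
AGENTS = ["proofreader", "slide-auditor", "pedagogy-reviewer",
          "r-reviewer", "domain-reviewer"]

def review(agent, path):
    # Hypothetical stand-in: dispatch one read-only reviewer
    # against `path` and return its report.
    return (agent, f"{agent} report for {path}")

def slide_excellence(path):
    """Fan all reviewers out in parallel and collect their reports."""
    with ThreadPoolExecutor(max_workers=len(AGENTS)) as pool:
        futures = [pool.submit(review, a, path) for a in AGENTS]
        return dict(f.result() for f in futures)

reports = slide_excellence("Slides/Lecture5.tex")
```

Because each reviewer is read-only and independent, running them concurrently changes nothing about the results, only the wall-clock time.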
## The Adversarial Pattern: Critic + Fixer {#the-adversarial-pattern-critic-fixer}
The single most powerful pattern in this system is the **adversarial QA loop**:
```
+------------------+
| quarto-critic | "I found 12 issues. 3 Critical."
| (READ-ONLY) |
+--------+---------+
|
+----v----+
| Verdict |
+----+----+
/ \
APPROVED NEEDS WORK
| |
Done +----v---------+
| quarto-fixer | "Fixed 12/12 issues."
| (READ-WRITE) |
+----+---------+
|
+----v----------+
| quarto-critic | "Re-audit: 2 remaining."
| (Round 2) |
+----+----------+
|
... (up to 5 rounds)
```
**Why it works:** The critic can't fix files (read-only), so it has no incentive to downplay issues. The fixer can't approve itself (the critic re-audits). This prevents the common failure of Claude saying "looks good" about its own work.
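A minimal sketch of that loop in Python (the `audit` and `fix` callables are hypothetical stand-ins for dispatching the critic and fixer agents):

```python
def adversarial_qa(audit, fix, max_rounds=5):
    """Alternate a read-only critic and a read-write fixer until the
    critic approves or the round budget is exhausted. Neither side can
    short-circuit the other, which is the point of the pattern."""
    for round_no in range(1, max_rounds + 1):
        issues = audit()          # critic: read-only re-audit
        if not issues:
            return f"APPROVED (round {round_no})"
        fix(issues)               # fixer: read-write pass
    return "NEEDS WORK: max rounds exhausted"

# Toy demo: a fixer that resolves one issue per round, so the
# critic needs several re-audits before it can approve.
remaining = ["placeholder \\cdots", "missing (X) argument", "label overlap"]
verdict = adversarial_qa(
    audit=lambda: list(remaining),
    fix=lambda issues: remaining.pop(),
)
```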
::: {.callout-tip}
## Real Example
In Econ 730 Lecture 6, the critic caught that the Quarto version used `\cdots` (a placeholder) where the Beamer version had the full Hajek weight formula. The fixer replaced it. On re-audit, the critic found 8 more instances of missing `(X)` arguments on outcome models. After 4 rounds, the Quarto slides matched the Beamer source exactly.
:::
## The Orchestrator: Coordinating Agents Automatically
Individual agents are specialists. Skills like `/slide-excellence` and `/qa-quarto` coordinate a few agents for specific tasks. But in day-to-day work, you should not have to think about which agents to run. That is the orchestrator's job.
The **orchestrator protocol** (`.claude/rules/orchestrator-protocol.md`) is an auto-loaded rule that activates after any plan is approved. It implements the plan, runs the verifier, selects review agents based on file types, applies fixes, re-verifies, and scores against quality gates. It loops until the score meets threshold or max rounds are exhausted.
You never invoke the orchestrator manually --- it is the default mode of operation for any non-trivial task. Skills remain available for standalone use (e.g., `/proofread` for a quick grammar check), but the orchestrator handles the full lifecycle automatically. See [Pattern 2](#pattern-2-contractor-mode-orchestrator) for the complete workflow.
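One step worth illustrating is how the orchestrator "selects review agents based on file types." A hedged sketch (the mapping below is hypothetical; the actual rule describes this in prose, not code):

```python
from pathlib import Path

# Hypothetical mapping from file type to review agents.
REVIEWERS_BY_SUFFIX = {
    ".tex": ["proofreader", "slide-auditor", "domain-reviewer"],
    ".qmd": ["proofreader", "quarto-critic", "pedagogy-reviewer"],
    ".R":   ["r-reviewer", "domain-reviewer"],
}

def select_reviewers(changed_files):
    """Union of reviewers over every file type the plan touched."""
    agents = set()
    for f in changed_files:
        agents.update(REVIEWERS_BY_SUFFIX.get(Path(f).suffix, []))
    return sorted(agents)

picked = select_reviewers(["Slides/Lecture5.tex", "Figures/did_plot.R"])
```

A plan touching both slides and figure scripts thus gets the union of both reviewer sets, with no manual agent selection on your part.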
## Quality Scoring: The 80/90/95 System {#sec-quality}
The [quality-gates rule](#sec-blocks) (`quality-gates.md`) defines scoring thresholds that the orchestrator enforces automatically. Every file gets a quality score from 0 to 100:
| Score | Threshold | Meaning | Action |
|-------|-----------|---------|--------|
| **80+** | Commit | Safe to save progress | `git commit` allowed |
| **90+** | PR | Ready for deployment | `gh pr create` encouraged |
| **95+** | Excellence | Exceptional quality | Aspirational target |
| **< 80** | Blocked | Critical issues exist | Must fix before committing |
### How Scores Are Calculated
Points are deducted for issues:
| Issue | Deduction | Why Critical |
|-------|-----------|-------------|
| Equation overflow | -20 | Math cut off = unusable |
| Broken citation | -15 | Academic integrity |
| Equation typo | -10 | Teaches wrong content |
| Text overflow | -5 | Content cut off |
| Label overlap | -5 | Diagram illegible |
| Notation inconsistency | -3 | Student confusion |
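In code terms the scoring is simple subtraction from 100 (an illustrative re-implementation; the repo's `scripts/quality_score.py` may differ in detail):

```python
# Deductions from the table above; a file starts at 100.
DEDUCTIONS = {
    "equation_overflow": 20,
    "broken_citation": 15,
    "equation_typo": 10,
    "text_overflow": 5,
    "label_overlap": 5,
    "notation_inconsistency": 3,
}

def quality_score(issues):
    """Subtract each issue's deduction from 100, flooring at 0."""
    return max(100 - sum(DEDUCTIONS[i] for i in issues), 0)

def gate(score):
    """Map a score onto the 80/90/95 thresholds."""
    if score >= 95:
        return "excellence"
    if score >= 90:
        return "PR-ready"
    if score >= 80:
        return "commit-ok"
    return "blocked"

score = quality_score(["equation_typo", "text_overflow", "notation_inconsistency"])
```

Here one equation typo, one text overflow, and one notation slip cost 18 points together: still committable at 82, but short of PR-ready.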
### Mandatory Verification
The verification protocol (`.claude/rules/verification-protocol.md`) requires that Claude compile, render, or otherwise verify every output before reporting a task as complete. The orchestrator enforces this as an explicit step in its loop (Step 2: VERIFY). This means Claude **cannot** say "done" without actually checking the output.
::: {.callout-warning}
## Don't Skip Verification
In Econ 730, verification caught unverified TikZ diagrams that would have deployed with overlapping labels, broken SVGs in Quarto slides that wouldn't display, and R scripts with missing intercept terms that produced silently wrong estimates.
:::
## Creating Your Own Domain Reviewer {#sec-domain-reviewer}
The template includes `domain-reviewer.md` --- a skeleton for building a substance reviewer specific to your field. For example, in Econ 730, the domain reviewer caught that a slide claimed "$\hat{\beta}$ is consistent under parallel trends" while the cited paper (Callaway and Sant'Anna, 2021) actually requires a *conditional* parallel trends assumption --- a subtle but critical distinction that no grammar or layout checker would flag.
### The 5-Lens Framework
Every domain can benefit from these five review lenses:
| Lens | What It Checks | Example (Economics) | Example (Physics) |
|------|---------------|--------------------|--------------------|
| **Assumption Audit** | Are stated assumptions sufficient? | Is overlap required for ATT? | Is the adiabatic approximation valid here? |
| **Derivation Check** | Does the math check out? | Do decomposition terms sum? | Do the units balance? |
| **Citation Fidelity** | Do slides match cited papers? | Is the theorem from the right paper? | Is the experimental setup correctly described? |
| **Code-Theory Alignment** | Does code implement the formula? | R script matches the slide equation? | Simulation parameters match theory? |
| **Logic Chain** | Does the reasoning flow? | Can a PhD student follow backwards? | Are prerequisites established? |
To customize, open `.claude/agents/domain-reviewer.md` and fill in:
1. Your domain's common assumption types
2. Typical derivation patterns to verify
3. Key papers and their correct attributions
4. Code-theory alignment checks for your tools
5. Logic chain requirements for your audience
---
# The Building Blocks {#sec-blocks}
Understanding the configuration layers helps you customize the workflow and debug when things go wrong. Claude Code's power comes from five configuration layers that work together --- think of them as the operating system for your academic project.
## CLAUDE.md --- Your Project's Constitution
`CLAUDE.md` is the single most important file. Claude reads it at the start of every session. But here is the critical insight: **Claude reliably follows about 100--150 custom instructions.** Your system prompt already uses ~50, leaving roughly 50--100 for your project. CLAUDE.md and always-on rules share this budget.
This means CLAUDE.md should be a **slim constitution** --- short directives and pointers, not comprehensive documentation. Aim for ~120 lines:
- **Core principles** --- 4--5 bullets (plan-first, verify-after, quality gates, LEARN tags)
- **Folder structure** --- where everything lives
- **Commands** --- compilation, deployment, key tools
- **Customization tables** --- Beamer environments, CSS classes
- **Current state** --- what's done, what's in progress
- **Skill quick reference** --- table of available slash commands
Move everything else into `.claude/rules/` files (with path-scoping so they only load when relevant).
```markdown
# CLAUDE.MD --- My Course Development
**Project:** Econ 730 --- Causal Panel Data
**Institution:** Emory University
## Core Principles
1. **Plan-first** — enter plan mode before non-trivial tasks
2. **Verify-after** — compile/render and check before reporting done
3. **Quality gates** — 80 to commit, 90 for PR, 95 for excellence
4. **LEARN tags** — persist corrections in MEMORY.md
5. **Single source of truth** — Beamer is authoritative; derive, don't duplicate
## Quick Reference
| Command | What It Does |
|---------|-------------|
| `/compile-latex [file]` | 3-pass XeLaTeX compilation |
| `/proofread [file]` | Grammar/typo review |
| `/deploy [Lecture]` | Render and deploy to GitHub Pages |
```
::: {.callout-important}
## Keep It Lean
CLAUDE.md loads every session. If it exceeds ~150 lines, Claude starts ignoring rules silently. Put detailed standards in path-scoped rules (`.claude/rules/`) instead --- they only load when Claude works on matching files, so they don't compete for attention.
:::
## Rules --- Domain Knowledge That Auto-Loads {#rules---domain-knowledge-that-auto-loads}
Rules are markdown files in `.claude/rules/` that Claude loads automatically. They encode your project's standards. The key design principle is **path-scoping**: rules with a `paths:` YAML frontmatter only load when Claude works on matching files.
**Always-on rules** (no `paths:` frontmatter) load every session. Keep these few and focused:
```
.claude/rules/
├── plan-first-workflow.md # ~83 lines — plan before you build
├── orchestrator-protocol.md # ~42 lines — contractor mode loop
├── session-logging.md # ~23 lines — three logging triggers
└── meta-governance.md # ~251 lines — template vs working project
```
**Path-scoped rules** load only when relevant:
```
.claude/rules/
├── r-code-conventions.md # paths: ["**/*.R"] — R standards
├── quality-gates.md # paths: ["*.tex", "*.qmd", "*.R"] — scoring
├── verification-protocol.md # paths: ["*.tex", "*.qmd", "docs/"] — verify before done
├── replication-protocol.md # paths: ["scripts/**/*.R"] — replicate first
├── exploration-folder-protocol.md # paths: ["explorations/**"] — sandbox rules
├── orchestrator-research.md # paths: ["scripts/**/*.R", "explorations/**"] — simple loop
└── ...14 path-scoped rules total
```
The first three always-on rules total ~148 lines of actionable instructions. `meta-governance` is a reference document for the template's dual nature (working project vs. public template) and loads passively. Path-scoped rules add rich, domain-specific guidance exactly when Claude needs it.
**Sync vs. translate:** The `beamer-quarto-sync` rule handles incremental edits --- fix a typo in Beamer, same fix goes to Quarto. The `/translate-to-quarto` skill is for full initial translation of a new lecture. Translate once, sync thereafter.
**Why rules matter:** Without them, Claude will use generic defaults. With them, Claude follows *your* standards consistently across sessions.
### Example: Path-Scoped R Code Conventions Rule
```yaml
---
paths:
- "**/*.R"
- "Figures/**/*.R"
- "scripts/**/*.R"
---
```
```markdown
# R Code Standards
## Reproducibility
- set.seed() called ONCE at top (YYYYMMDD format)
- All packages loaded at top via library()
- All paths relative to repository root
## Visual Identity
primary_blue <- "#012169"
primary_gold <- "#f2a900"
```
The `paths:` block means this rule only loads when Claude reads or edits an `.R` file. When Claude works on a `.tex` file, this rule doesn't consume any of the instruction budget.
## Constitutional Governance (Optional)
As your project grows, some decisions become non-negotiable (to maintain quality, reproducibility, or collaboration standards). Others remain flexible.
The `templates/constitutional-governance.md` template helps you distinguish between:
- **Immutable principles** (Articles I-V): Non-negotiable rules that ensure consistency
- **User preferences**: Flexible patterns that can vary by context
### Example Articles You Might Define
- **Article I: Primary Artifact** — Which file is authoritative (e.g., `.tex` vs `.qmd`, `.Rmd` vs `.html`, notebook vs script)
- **Article II: Plan-First Threshold** — When to enter plan mode (e.g., >3 files, >30 min, multi-step workflows)
- **Article III: Quality Gate** — Minimum score to commit (e.g., 80/100, all tests passing)
- **Article IV: Verification Standard** — What must pass before commit (e.g., compile, tests, render)
- **Article V: File Organization** — Where different file types live (prevents scattering)
The template includes examples for LaTeX, R, Python, Jupyter, and multi-language workflows.
Use constitutional governance **after** you've established 3-7 recurring patterns that you want to enforce consistently. Don't create it on day one --- let patterns emerge first, then codify them. Skip it for solo projects with evolving standards, or when you prefer case-by-case decisions.
**Template:** `templates/constitutional-governance.md`
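Once a pattern has recurred enough to codify, an article is just a short markdown entry. A minimal sketch of what one might look like (the wording here is illustrative, not the template's actual text):

```markdown
## Article I: Primary Artifact

The authoritative source for each lecture is the Beamer `.tex` file.
The Quarto `.qmd` version is derived: edit `.tex` first, then sync.

**Rationale:** One source of truth prevents silent divergence between formats.
```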
## Skills --- Reusable Slash Commands {#skills---reusable-slash-commands}
Skills are multi-step workflows invoked with `/command`. Each skill lives in `.claude/skills/[name]/SKILL.md`:
```markdown
---
name: compile-latex
description: Compile LaTeX with 3-pass XeLaTeX + bibtex
argument-hint: "[filename without .tex extension]"
---
# Steps:
1. cd to Slides/
2. Run xelatex pass 1
3. Run bibtex
4. Run xelatex pass 2
5. Run xelatex pass 3
6. Check for errors
7. Report results
```
**Skills you get in the template:**
| Skill | Purpose | When to Use |
|-------|---------|------------|
| `/compile-latex` | Build PDF from .tex | After any Beamer edit |
| `/deploy` | Render Quarto + sync to docs/ | Before pushing to GitHub Pages |
| `/proofread` | Grammar and consistency check | Before every commit |
| `/qa-quarto` | Adversarial Quarto QA | After translating Beamer to Quarto |
| `/slide-excellence` | Full multi-agent review | Before major milestones |
| `/create-lecture` | New lecture from scratch | Starting a new topic |
| `/commit` | Stage, commit, PR, merge | After any completed task |
::: {.callout-note}
## Built-In Skills
Claude Code ships with built-in skills beyond this template's 22: `/batch` orchestrates parallel refactoring across your codebase (using git worktrees for isolation), `/simplify` runs 3-agent code review and applies fixes, and `/debug` helps troubleshoot sessions. These complement the academic skills above.
:::
## Agents --- Specialized Reviewers {#agents---specialized-reviewers}
Agents are the real power of this system. Each agent is an expert in one dimension of quality:
```
.claude/agents/
├── proofreader.md        # Grammar, typos, consistency
├── slide-auditor.md      # Visual layout, overflow, spacing
├── pedagogy-reviewer.md  # Narrative arc, notation clarity, pacing
├── r-reviewer.md         # R code quality and reproducibility
├── tikz-reviewer.md      # TikZ diagram visual quality
├── quarto-critic.md      # Adversarial Quarto vs Beamer comparison
├── quarto-fixer.md       # Applies critic's fixes
├── beamer-translator.md  # Beamer -> Quarto translation
├── verifier.md           # Task completion verification
└── domain-reviewer.md    # YOUR domain-specific substance review
```
### Agent Anatomy
Each agent file has YAML frontmatter + detailed instructions:
```markdown
---
name: proofreader
description: Reviews slides for grammar, typos, and consistency
---
# Proofreader Agent
## Role
You are an expert academic proofreader reviewing lecture slides.
## What to Check
1. Grammar and spelling errors
2. Inconsistent notation
3. Missing or broken citations
4. Content overflow (text exceeding slide bounds)
## Report Format
Save findings to: quality_reports/[FILENAME]_report.md
## Severity Levels
- **Critical:** Math errors, broken citations
- **Major:** Grammar errors, overflow
- **Minor:** Style inconsistencies
```
::: {.callout-note}
## Why Specialized Agents?
A single Claude prompt trying to check grammar, layout, math, and code simultaneously will do a mediocre job at all of them. Specialized agents focus on one dimension and do it thoroughly. The `/slide-excellence` skill runs them all in parallel, then synthesizes results.
Claude Code also offers experimental **Agent Teams** --- multiple independent sessions that coordinate, share findings, and challenge each other's approaches. This is a research preview feature; the orchestrator + subagent pattern described here is more mature for academic workflows.
:::
### Multi-Model Strategy: Cost vs. Quality {#multi-model-strategy-cost-vs-quality}
Not all agents need the same model. Each agent file has a `model:` field in its YAML frontmatter. By default, all agents use `model: inherit` (they use whatever model your main session runs). But you can customize this to optimize cost:
| Task Type | Recommended Model | Why | Examples |
|-----------|-------------------|-----|----------|
| Complex translation | `model: opus` | Needs deep understanding of both formats | beamer-translator, quarto-critic |
| Fast, constrained work | `model: sonnet` | Speed matters more than depth | r-reviewer, quarto-fixer |
| Default | `model: inherit` | Uses whatever the main session runs | proofreader, slide-auditor |
**The principle:** Use Opus for tasks that require holding two large documents in mind simultaneously (translation, adversarial comparison). Use Sonnet for tasks with clear, bounded scope (fix these 12 issues, check this R script). Let everything else inherit.
To change an agent's model, edit its YAML frontmatter:
```yaml
---
name: quarto-critic
model: opus # was: inherit
---
```
::: {.callout-tip}
## Cost Savings
If you configure model-per-agent, a typical Beamer-to-Quarto translation runs the critic on Opus (2--4 rounds) while the fixer runs on Sonnet (same rounds). This can save roughly 40--60% compared to running everything on Opus, with no quality loss on the fixing step.
:::
### Advanced Agent Configuration {#sec-agent-config}
Beyond model selection, agent definitions support several configuration fields:
| Field | Purpose | Example |
|-------|---------|---------|
| `model` | Force a specific model | `haiku`, `sonnet`, `opus` |
| `maxTurns` | Limit agent iterations | `10` (prevents runaway loops) |
| `isolation` | Run in a git worktree | `worktree` (see [Pattern 12](#pattern-12-branch-isolation)) |
| `effort` | Override reasoning effort | `high` |
| `permissionMode` | Restrict permissions | `plan` (read-only agent) |
| `tools` | Whitelist specific tools | `["Read", "Grep", "Glob"]` |
| `disallowedTools` | Blacklist specific tools | `["Write", "Edit"]` |
| `skills` | Make specific skills available | `["compile-latex"]` |
| `background` | Run concurrently | `true` |
Example: a read-only proofreader that can't edit files:
```yaml
---
name: proofreader
model: sonnet
maxTurns: 15
tools: ["Read", "Grep", "Glob"]
---
```
Use `maxTurns` to prevent review agents from looping indefinitely, and `tools` to enforce read-only behavior for agents that should only produce reports.
## Settings --- Permissions and Hooks {#settings---permissions-and-hooks}
`.claude/settings.json` controls what Claude is allowed to do. Here is a simplified excerpt --- the template includes additional permission entries for git, GitHub CLI, PDF tools, and more:
```json
{
"permissions": {
"allow": [
"Bash(git status *)",
"Bash(xelatex *)",
"Bash(Rscript *)",
"Bash(quarto render *)",
"Bash(./scripts/sync_to_docs.sh *)"
]
},
"hooks": {
"Stop": [
{
"hooks": [{
"type": "command",
"command": "python3 \"$CLAUDE_PROJECT_DIR\"/.claude/hooks/log-reminder.py",
"timeout": 10
}]
}
]
}
}
```
**Permission modes.** Claude Code has five permission modes that control how much autonomy Claude gets:
| Mode | Internal Name | Behavior | When to Use |
|------|--------------|----------|-------------|
| **Normal** | `default` | Asks before risky actions | Day-to-day work --- approve each edit |
| **Auto-accept edits** | `acceptEdits` | Auto-approves file edits | Trusted batch operations (rename across 20 files) |
| **Don't ask** | `dontAsk` | Auto-denies tools unless pre-approved in allowlist | Restricted environments where only allowlisted tools run |
| **Plan** | `plan` | Read-only --- no edits allowed | Exploring code, reviewing before acting |
| **Bypass** | `bypassPermissions` | Skips all permission prompts | CI/CD pipelines, headless scripts (still blocks `.git/`) |
Set via CLI flag (`claude --permission-mode plan`), the `/config` command, or `permissions.defaultMode` in `settings.json`.
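For example, to start every session in read-only plan mode, one could set the `permissions.defaultMode` key mentioned above in `.claude/settings.json` (a minimal fragment, merged with your existing settings):

```json
{
  "permissions": {
    "defaultMode": "plan"
  }
}
```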
::: {.callout-warning}
## Plans Directory
By default, Claude saves plans to a global directory (`~/.claude/plans/`), not your project. To keep plans with your project (and in git), add this to `.claude/settings.json`:
```json
{
"plansDirectory": "quality_reports/plans"
}
```
:::
The **Stop hook** runs a fast Python script after every response. No LLM call, no latency. It checks whether the session log is current and reminds Claude to update it if not. Behavioral rules like verification and Beamer-Quarto sync are enforced via auto-loaded rules in `.claude/rules/`, which is the right tool for nuanced judgment that Claude can evaluate in-context.
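The counter logic behind such a Stop hook is simple enough to sketch. This is a minimal illustration, not the template's actual `log-reminder.py` -- the state-file path and threshold are hypothetical:

```python
import sys
from pathlib import Path

STATE = Path("/tmp/claude_log_counter")  # hypothetical state file
THRESHOLD = 5                            # responses allowed before a reminder

def bump_counter(state: Path) -> int:
    """Increment and persist the responses-since-last-log-update counter."""
    count = int(state.read_text()) + 1 if state.exists() else 1
    state.write_text(str(count))
    return count

def check(state: Path = STATE) -> int:
    """Return the exit code a Stop hook would use: 0 = fine, 2 = block."""
    if bump_counter(state) >= THRESHOLD:
        state.write_text("0")  # reset once the reminder fires
        print("Session log looks stale -- update it before stopping.",
              file=sys.stderr)
        return 2
    return 0

# As a hook entry point the script would end with: sys.exit(check())
```

Because it is a plain script with no LLM call, it adds effectively zero latency to each response.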
## Effort Levels --- Cost vs. Thoroughness {#sec-effort}
Claude Code lets you control how deeply it reasons about each task. Higher effort means more "thinking tokens" and better results --- but higher cost.
| Level | Thinking Budget | Academic Use Case | Relative Cost |
|-------|----------------|-------------------|---------------|
| `low` | Minimal | Quick formatting, grep, file renames | $ |
| `medium` | Standard | Most tasks (default for Opus) | $$ |
| `high` | ~10k tokens | Complex derivations, paper reviews | $$$ |
| `max` / ultrathink | ~32k tokens | Deep proofs, multi-step analysis | $$$$ |
**How to set effort:**
- **Per-session:** Type `/effort high` in the chat
- **Per-skill:** Add `effort: high` to skill frontmatter (see [Skill Frontmatter Reference](#skill-frontmatter))
- **Keyboard toggle:** `Option+T` (Mac) / `Alt+T` (Windows/Linux) toggles extended thinking
- **In prompts:** Include "ultrathink" in your prompt to enable extended thinking for that turn. Note: phrases like "think hard" are treated as regular instructions and do *not* allocate extra thinking tokens
- **Environment variable:** `CLAUDE_CODE_EFFORT_LEVEL=high` for all sessions
::: {.callout-tip}
## Composing Effort with Model Choice
Effort levels compose with model selection for fine-grained cost control. For example, `Haiku + high effort` costs less than `Opus + low effort` but may produce comparable results for bounded tasks like formatting. Use `Opus + max` only for tasks that genuinely require deep multi-step reasoning --- complex proofs, intricate data pipeline debugging, or comprehensive paper critique.
:::
## Memory --- Cross-Session Persistence {#memory---cross-session-persistence}
Claude Code has an auto-memory system at `~/.claude/projects/[project]/memory/MEMORY.md`. This file persists across sessions and is loaded into every conversation.
Use it for:
- Key project facts that never change
- Corrections you don't want repeated (`[LEARN:tag]` format)
- Current plan status
```markdown
# Auto Memory
## Key Facts
- Project uses XeLaTeX, not pdflatex
- Bibliography file: Bibliography_base.bib
## Corrections Log
- [LEARN:r-code] Package X drops obs silently when covariate is missing
- [LEARN:citation] Post-LASSO is Belloni (2013), NOT Belloni (2014)
- [LEARN:workflow] Every Beamer edit must auto-sync to Quarto
```
### Plans --- Compression-Resistant Task Memory
While MEMORY.md stores long-lived project facts, **plans** store task-specific strategy. Every non-trivial plan is saved to `quality_reports/plans/` with a timestamp. This means:
- Plans survive auto-compression (they are on disk, not just in context)
- Plans survive session boundaries (readable in any future session)
- Plans create an audit trail of design decisions
See Pattern 1 in [Workflow Patterns](#sec-patterns) for the full protocol.
### Session Logs --- Why-Not-Just-What History (with Automated Reminders)
Git commits record what changed, but not *why*. Session logs fill this gap. Claude writes to `quality_reports/session_logs/` at three points: right after plan approval, incrementally during implementation (as decisions happen), and at session end. This means the log captures reasoning *as it happens*, before auto-compression can discard it.
Because relying on instructions alone is fragile (Claude forgets during long sessions), a **Stop hook** (`.claude/hooks/log-reminder.py`) fires after every response. It tracks how many responses have passed since the session log was last updated. After a threshold, it blocks Claude from stopping until the log is current. This turns a best practice into an enforced behavior.
New sessions can read these logs to understand not just the current state of the project, but the reasoning behind it. See Pattern 1 in [Workflow Patterns](#sec-patterns) for the full protocol.
### How It All Fits Together
With CLAUDE.md, MEMORY.md, plans, and session logs, the system has four persistent memory layers, plus the ephemeral conversation context. Here is what each one does and when it matters:
| Layer | File | Survives Compression? | Updated When | Purpose |
|-------|------|----------------------|--------------|---------|
| Project context | `CLAUDE.md` | Yes (on disk) | Rarely | Project rules, folder structure, commands |
| Corrections | `MEMORY.md` | Yes (on disk) | On `[LEARN]` tag | Prevent repeating past mistakes |
| Task strategy | `quality_reports/plans/` | Yes (on disk) | Once per task | Plan survives planning-to-implementation handoff |
| Decision reasoning | `quality_reports/session_logs/` | Yes (on disk) | Incrementally | Record *why* decisions were made |
| Conversation | Claude's context window | **No** (compressed) | Every response | Current working memory |
The first four layers are your safety net. Anything written to disk survives indefinitely. The conversation context is ephemeral --- auto-compression will eventually discard details. The workflow's design ensures that anything worth keeping is written to one of the four persistent layers before compression can erase it.
### Hooks --- Automated Enforcement {#hooks---automated-enforcement}
The session log reminder above is one example of a broader pattern: using **hooks** to enforce rules that Claude might otherwise forget during long sessions. Rules live in context and can be compressed away. Hooks live in `.claude/settings.json` and fire every time, regardless of context state.
The template includes hooks for logging, notifications, file protection, and context survival:
| Hook | Event | What It Does |
|------|-------|-------------|
| Session log reminder | `Stop` | Reminds about session logs after every response |
| Desktop notification | `Notification` | Desktop alert when Claude needs attention (macOS/Linux) |
| File protection | `PreToolUse[Edit\|Write]` | Blocks accidental edits to bibliography and settings |
| Context state capture | `PreCompact` | Saves plan state before auto-compaction |
| Context restoration | `SessionStart[compact\|resume]` | Restores context after compaction or resume |
| Context monitor | `PostToolUse[Bash\|Task]` | Progressive warnings at 40%/55%/65%/80%/90% context |
| Verification reminder | `PostToolUse[Write\|Edit]` | Reminds to compile/render before marking done |
Verification and Beamer-Quarto sync are enforced via auto-loaded rules, which are the right tool for nuanced judgment. Hooks are reserved for enforcement that *must* survive context compression.
**Hook handler types.** The examples above use `command` hooks (shell scripts), but Claude Code supports four handler types:
| Type | How It Works | Best For |
|------|-------------|----------|
| **`command`** | Runs a shell script. Exit 0 = allow; exit 2 = block (deny a tool call, or keep a `Stop` from completing) | File protection, state capture, notifications |
| **`prompt`** | Injects text into Claude's conversation --- no script needed | Soft reminders: "Check that equations compile before saving" |
| **`http`** | POSTs to an external endpoint | CI/CD integration, logging to external services |
| **`agent`** | Spawns a subagent that can use tools to verify conditions | Complex validation requiring multi-step checks |
**PreToolUse input modification.** PreToolUse hooks can now **modify tool inputs**, not just block or allow. Your hook script can return modified JSON to rewrite parameters before the tool executes --- for example, auto-correcting file paths or enforcing naming conventions.
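The idea can be sketched as follows. The `tool_input`/`file_path` key names and the folder convention are assumptions for illustration -- consult the hooks reference for the exact payload schema:

```python
import json
import sys

def rewrite_payload(payload: dict) -> dict:
    """Rewrite a Write/Edit tool call's target path (key names are assumptions)."""
    tool_input = payload.get("tool_input", {})
    path = tool_input.get("file_path", "")
    # Hypothetical convention: standalone figure scripts live under Figures/
    if path.startswith("fig_") and path.endswith(".R"):
        tool_input["file_path"] = "Figures/" + path
    return payload

# As a hook, the script would read the payload from stdin and emit the
# (possibly modified) JSON on stdout:
#   json.dump(rewrite_payload(json.load(sys.stdin)), sys.stdout)
```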
::: {.callout-tip}
## Hook Design Principle
Use **command hooks** for fast, mechanical checks (file exists? counter threshold?). Use **prompt hooks** for soft guidance that doesn't need a script. Use **rules** for nuanced judgment (did Claude verify correctly?). Avoid prompt hooks that fire on high-frequency events --- the injected text adds up in context.
:::
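A prompt hook needs only a settings entry, no script. A sketch of what one might look like in `.claude/settings.json` (the `matcher` and `prompt` key names are assumptions; check the hooks reference for the exact schema):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [{
          "type": "prompt",
          "prompt": "Check that edited equations still compile before marking done."
        }]
      }
    ]
  }
}
```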
### Context Survival System (Advanced) {#context-survival-system-advanced}
When context compaction happens, Claude loses working memory. The **context survival system** ensures you can recover seamlessly.
#### How It Works
Two hooks work together to preserve and restore state:
```
Session running → context fills up → PreCompact fires
↓
pre-compact.py saves:
• Active plan path
• Current task
• Recent decisions
↓
Auto-compaction happens
↓
SessionStart(compact|resume) fires
↓
post-compact-restore.py:
• Reads saved state
• Prints context summary
• Claude knows where it left off
```
#### What Gets Saved
| State | Location | Purpose |
|-------|----------|---------|
| Plan path | Session cache | So Claude can read the plan file |
| Current task | Session cache | First unchecked `- [ ]` item |
| Recent decisions | Session cache | Last 3 decision-like entries from session log |
| Compaction note | Session log | Timestamp marker for reference |
#### Context Monitoring
The `context-monitor.py` hook tracks approximate context usage and provides progressive warnings:
| Threshold | Message | Purpose |
|-----------|---------|---------|
| 40%, 55%, 65% | Suggest `/learn` | Capture non-obvious discoveries before compaction |
| 80% | Info message | Auto-compact approaching, no rush |
| 90% | Caution | Complete current task with full quality |
Use `/context-status` to check current session health at any time.
Note: the monitor uses tool call count as a proxy for context usage, so warnings may appear earlier or later than actual compaction.
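The threshold logic can be sketched as a simple lookup (a simplification -- the real hook also tracks which warnings have already fired so they are not repeated):

```python
from typing import Optional

# Thresholds checked from most to least severe, mirroring the table above
THRESHOLDS = [
    (90, "Caution: complete the current task with full quality."),
    (80, "Info: auto-compact approaching, no rush."),
    (40, "Consider /learn to capture non-obvious discoveries."),
]

def warning_for(usage_pct: float) -> Optional[str]:
    """Map estimated context usage to the most severe applicable warning."""
    for threshold, message in THRESHOLDS:
        if usage_pct >= threshold:
            return message
    return None
```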
#### Recovery After Compaction
If compaction happens mid-task, Claude will automatically see:
1. **Restoration message** --- what plan was active, what task was in progress
2. **Recovery actions** --- read the plan, check git status, continue
You can also manually point Claude to the right context:
> "We just had compaction. Read `quality_reports/plans/2026-02-06_translate-lecture5.md` and continue from where we left off."
::: {.callout-note}
## 2026 Feature Coverage