Measure the coherence of any text corpus in under 2 minutes.
```shell
curl -fsSL https://raw.githubusercontent.com/usurobor/tsc/main/install.sh | sh
```

TSC uses an LLM to score coherence. Set your provider credentials:
```shell
export LLM_PROVIDER=anthropic              # or: openai
export LLM_MODEL=claude-sonnet-4-20250514  # or: gpt-4o
export LLM_API_KEY=sk-ant-your-key         # your API key
```

Using OpenAI instead?
```shell
export LLM_PROVIDER=openai
export LLM_MODEL=gpt-4o
export LLM_API_KEY=sk-your-key
```

To install from source:

```shell
git clone https://github.com/usurobor/tsc.git && cd tsc
```
Then run a self-measurement against the repo's own `spec` target:

```shell
tsc \
  --target spec \
  --registry targets/registry.tsc \
  --instruction runtime/SELF-MEASURE.md \
  --output report.json
```

To measure your own corpus, create a target manifest (`my-target.tsc`):
```toml
format = "tsc-target/0.1"
name = "my-project"
kind = "aggregate"
description = "My project's documentation surface."

include = [
  "docs/**/*.md",
  "README.md"
]

exclude = [
  "node_modules/**"
]
```

Add it to a registry (`my-registry.tsc`):
```toml
format = "tsc-target-registry/0.1"
default_target = "my-project"

[target.my-project]
manifest = "my-target.tsc"
```

Run it:
```shell
tsc \
  --target my-project \
  --registry my-registry.tsc \
  --instruction runtime/SELF-MEASURE.md \
  --output report.json
```

Note: the `--instruction` file tells the LLM how to score. You can use `runtime/SELF-MEASURE.md` from this repo as a starting point, or write your own.
The report contains triadic scores:
| Axis | What it measures |
|---|---|
| α (pattern) | Internal structural consistency — does repeated sampling yield stable structure? |
| β (relation) | Alignment between parts — do the pieces fit together? |
| γ (process) | Evolution stability — does the system change consistently? |
C_Σ is the aggregate: (s_α · s_β · s_γ)^(1/3). A score ≥ 0.80 means the corpus holds together as one coherent system.
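Because C_Σ is a geometric mean, one weak axis drags the aggregate down sharply. A quick arithmetic check with three hypothetical axis scores (0.85, 0.80, 0.90 are illustrative values, not from any real report):

```shell
# Geometric mean of three hypothetical axis scores: (s_a * s_b * s_g)^(1/3).
python3 -c 'print(round((0.85 * 0.80 * 0.90) ** (1/3), 4))'   # prints 0.849
```

Here all three axes clear 0.80, so the aggregate does too; replacing any one score with, say, 0.50 would pull C_Σ below the threshold.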
```shell
cat report.json | python3 -m json.tool  # pretty-print
```

Further reading:

- Operator manual — configuration, targets, troubleshooting
- Theory — the formal triadic coherence model
- Architecture — how the engine works
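Since the report is plain JSON, it can also feed a script, for example a CI gate on the 0.80 threshold. A sketch assuming the aggregate lives under a top-level `C_sigma` field; that key is a guess, so inspect your actual `report.json` for the real field names:

```shell
# CI gate sketch: exit non-zero unless the aggregate score clears 0.80.
# "C_sigma" is a hypothetical field name, not confirmed by the TSC docs.
python3 -c '
import json, sys

report = json.load(open("report.json"))
score = report.get("C_sigma")  # replace with the real key from your report
if score is None or score < 0.80:
    sys.exit("coherence below threshold or score missing")
print("coherent:", score)
'
```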