MFU Crusades on Bittensor - Miners compete to optimize training code for maximum MFU (Model FLOPs Utilization).
                          Crusades FLOW

MINER                     BLOCKCHAIN                        VALIDATOR
  │                            │                                │
  │  1. Host train.py at URL   │                                │
  │     (Gist, Pastebin, etc.) │                                │
  │                            │                                │
  ├──▶ 2. Submit URL ──────────▶ set_reveal_commitment          │
  │    (timelock encrypted)    │                                │
  │                            │                                │
  │                            │  (wait reveal_blocks)          │
  │                            ▼                                │
  │           3. Decrypted ◀───┤────────────────────────── Read │
  │                            │                                │
  │                            │              4. Download code  │
  │                            │                 from URL       │
  │                            │                                │
  │                            │          5. Runs in Container  │
  │                            │             (X eval runs)      │
  │                            │                                │
  │                            │              6. Calculate MFU  │
  │                            │                 (median score) │
  │                            │                                │
  │                            │                7. Set weights  │
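MFU compares the FLOPs your training loop actually achieves against the accelerator's theoretical peak, and the validator scores the median across its evaluation runs. The sketch below uses the common 6 * params * tokens approximation for one forward plus backward pass; the peak-FLOPS figure, timings, and token counts are illustrative assumptions, not the validator's actual implementation.

```python
from statistics import median

def estimate_mfu(n_params: float, tokens: int, elapsed_s: float, peak_flops: float) -> float:
    """Rough MFU: achieved training FLOPs / theoretical peak over the same wall clock.

    Uses the common ~6 * params * tokens estimate for a forward+backward pass of a
    dense transformer; the validator's exact formula may differ.
    """
    achieved = 6 * n_params * tokens
    return achieved / (elapsed_s * peak_flops)

# Illustrative numbers only: a 3B-parameter model, 5 eval runs,
# and a vendor-quoted bf16 dense peak of ~989 TFLOP/s.
PEAK_BF16_FLOPS = 989e12
per_run_seconds = [2.10, 2.00, 2.20, 2.05, 1.98]   # made-up wall-clock times
scores = [estimate_mfu(3e9, 40_960, t, PEAK_BF16_FLOPS) for t in per_run_seconds]
print(f"median MFU ~ {median(scores):.2%}")        # the median is what gets scored
```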
# Clone and setup
git clone https://github.com/one-covenant/crusades
cd crusades
curl -LsSf https://astral.sh/uv/install.sh | sh
uv sync
# Create .env (for HuggingFace access)
echo "HF_TOKEN=hf_your_token" > .env
# Download model & data for local testing
uv run local_test/setup_benchmark.py
# Test your train.py locally (performance test)
uv run local_test/train.py
# Verify your submission to avoid potential failures during validator checks
uv run local_test/verify.py

Host your `train.py` at any URL that returns raw code:
- GitHub Gist (recommended - use secret gist for privacy)
- Raw GitHub file (use raw.githubusercontent.com)
- Pastebin or any paste service
- Any HTTP/HTTPS URL
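Before submitting, it is worth confirming that the URL serves the raw file rather than a rendered HTML page. A minimal check is sketched below; it uses the third-party `requests` package, `compile()` only parses the source without executing it, and the raw-gist URL is a placeholder.

```python
import requests

def looks_like_raw_python(url: str) -> bool:
    """Return True if `url` serves something that parses as Python source."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    body = resp.text
    if body.lstrip().lower().startswith(("<!doctype", "<html")):
        return False  # got a rendered web page, not the raw file
    try:
        compile(body, "train.py", "exec")  # syntax check only; nothing is executed
    except SyntaxError:
        return False
    return True

# Replace with your own raw URL before submitting.
print(looks_like_raw_python("https://gist.githubusercontent.com/user/gist_id/raw"))
```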
# Submit to mainnet
uv run -m neurons.miner submit "https://gist.github.com/user/gist_id" \
--wallet.name your_wallet \
--wallet.hotkey your_hotkey \
--network finney
# Submit to localnet (testing)
uv run -m neurons.miner submit "https://gist.github.com/user/gist_id" \
--wallet.name your_wallet \
--wallet.hotkey your_hotkey \
--network local

Parameters: `--wallet.name`, `--wallet.hotkey`, `--network` (finney/test/local)
See docs/Validator.md for detailed validator setup.
Your train.py must implement the inner_steps function. Here's the baseline:
from dataclasses import dataclass

import torch
import torch.nn.functional as F


@dataclass
class InnerStepsResult:
    final_logits: torch.Tensor  # Must be 3D: (batch, seq_len-1, vocab) - NOT None
    total_tokens: int           # Total tokens processed across all steps
    final_loss: float           # Loss value from last training step (must be > 0)


def inner_steps(model, data_iterator, optimizer, num_steps, device):
    total_tokens = 0
    final_logits = None
    final_loss = 0.0

    for step in range(num_steps):
        batch = next(data_iterator)
        batch = batch.to(device)

        input_ids = batch[:, :-1]  # All tokens except last
        labels = batch[:, 1:]      # All tokens except first

        outputs = model(input_ids)
        logits = outputs.logits if hasattr(outputs, "logits") else outputs

        loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            labels.reshape(-1),
            ignore_index=-100,
        )
        loss.backward()
        optimizer.step()
        optimizer.zero_grad(set_to_none=True)

        total_tokens += batch.numel()
        final_logits = logits.detach().float()
        final_loss = loss.item()

    return InnerStepsResult(
        final_logits=final_logits,
        total_tokens=total_tokens,
        final_loss=final_loss,
    )
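As a quick sanity check you can drive `inner_steps` with a toy model and random token batches before running the real benchmark. Everything below (the tiny model, vocab size, batch shape) is made up purely to exercise the return contract, and it assumes it runs in the same file as the baseline above.

```python
import torch
from torch import nn

VOCAB, SEQ_LEN, BATCH, STEPS = 128, 16, 4, 3  # toy sizes, not the benchmark's

class TinyLM(nn.Module):
    """Minimal embedding -> linear head stand-in for the benchmark model."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, 32)
        self.head = nn.Linear(32, VOCAB)

    def forward(self, input_ids):
        return self.head(self.embed(input_ids))  # (batch, seq, vocab) logits

def random_batches():
    while True:  # endless iterator of fake token ids
        yield torch.randint(0, VOCAB, (BATCH, SEQ_LEN))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyLM().to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

result = inner_steps(model, random_batches(), optimizer, num_steps=STEPS, device=device)
assert result.final_logits is not None and result.final_logits.dim() == 3
assert result.total_tokens == STEPS * BATCH * SEQ_LEN
assert result.final_loss > 0
print("contract OK:", result.total_tokens, "tokens, loss", round(result.final_loss, 3))
```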
You MUST:
- Use the provided `optimizer` directly (call `optimizer.step()` and `optimizer.zero_grad()`)
- Process ALL tokens in each batch (no truncation)
- Return an actual `final_logits` tensor (not `None`)
- Train all model parameters (don't freeze layers)

You MUST NOT:
- Access optimizer internals (e.g., `optimizer.optimizer`, `optimizer._opt_impl`)
- Truncate or skip parts of input sequences
- Return `None` for `final_logits`
- Import forbidden modules: `gc`, `ctypes`, `subprocess`, `importlib`, `os`, `sys`, `inspect`, `pickle`, `signal`, `threading`, `multiprocessing`, `socket`, `http`, `io`, `ast`, etc.
- Modify torch backend settings (`cudnn.deterministic`, `cudnn.benchmark`, SDP toggles, `set_float32_matmul_precision`)
- Freeze layers or modify `requires_grad` settings
- Report inflated token counts
- Alias the `torch` module (e.g., `import torch as t`); the security scanner only recognizes the literal name `torch` for allowlisted calls like `torch.compile`

Note: The validator skips the `if __name__ == "__main__":` block entirely. Modules like `pathlib` that are forbidden at the top level can still be imported inside `__main__` for local testing.
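Within those rules, one natural starting point is mixed-precision compute plus `torch.compile`, which the note above says is allowlisted when called through the literal `torch` name. The sketch below shows what the inside of the step loop could look like; whether autocast and bf16 are appropriate for the benchmark hardware is an assumption to confirm with `local_test/verify.py` before submitting.

```python
import torch
import torch.nn.functional as F

# Applied once, before the step loop; the scanner recognizes the literal `torch` name.
# model = torch.compile(model)

def training_step(model, batch, optimizer):
    """One step of the baseline loop with bf16 autocast around the forward pass.

    Sketch only: keep the rest of inner_steps (token counting, final_logits,
    final_loss) exactly as in the baseline so the return contract still holds.
    """
    input_ids, labels = batch[:, :-1], batch[:, 1:]
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        outputs = model(input_ids)
        logits = outputs.logits if hasattr(outputs, "logits") else outputs
        loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            labels.reshape(-1),
            ignore_index=-100,
        )
    loss.backward()
    optimizer.step()
    optimizer.zero_grad(set_to_none=True)
    return logits, loss
```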
Key settings in `hparams/hparams.json`:

| Setting | Default | Description |
|---|---|---|
| `netuid` | 3 | Subnet ID |
| `evaluation_runs` | 5 | Runs per submission (median taken) |
| `eval_steps` | 5 | Training steps per evaluation |
| `benchmark_model_name` | `Qwen/Qwen2.5-3B` | Model for evaluation |
| `benchmark_batch_size` | 8 | Batch size for evaluation |
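If you want local testing to mirror the validator's configuration, you can read the same file. A minimal sketch, assuming `hparams/hparams.json` is a flat JSON object containing the keys above:

```python
import json

with open("hparams/hparams.json") as f:
    hparams = json.load(f)

# Keys assumed from the table above; check the real file for the full set.
print("model:     ", hparams["benchmark_model_name"])
print("batch size:", hparams["benchmark_batch_size"])
print("eval steps:", hparams["eval_steps"])
print("eval runs: ", hparams["evaluation_runs"])
```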
Monitor Crusades activity in real time with the terminal dashboard.
# Connect to the official Crusades API
uv run -m crusades.tui --url 69.19.137.219:8080

- Leaderboard with MFU scores
- Recent submissions and their status
- MFU history chart
- Validator status
- View submission code (after evaluation)