[Non-record] MLA + SmearGate + BigramHash + SWA — pre-quant 1.2838 bpb #354
Open
Skrisps26 wants to merge 4 commits into openai:main
Conversation
Contributor
Pull request overview
Adds a new non-record entry to the 16MB track, capturing an experiment that stacks MLA attention + SmearGate MLP + BigramHash embeddings + SWA, and includes the training/eval code snapshot plus the reported metrics and artifacts.
Changes:
- Adds a self-contained train_gpt.py implementing MLA/SmearGate/BigramHash, SWA, the Muon optimizer, and mixed int5/int6 (+fp16) quantized serialization (a sketch of the serialization idea follows this list).
- Adds record metadata (submission.json) and a writeup (README.md) describing results and reproduction.
- Adds a training log artifact (currently UUID-named).
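For context on what "mixed int5/int6 (+fp16) quantized serialization" typically involves, here is a minimal sketch of symmetric low-bit quantization plus dense bit packing. This is an illustrative assumption, not the PR's actual code; the helper names `quantize_symmetric` and `pack_bits` are hypothetical.

```python
import numpy as np

def quantize_symmetric(w: np.ndarray, bits: int):
    # Hypothetical sketch of symmetric per-tensor quantization; the scale
    # would be stored alongside the payload (e.g. in fp16) for dequantization.
    qmax = 2 ** (bits - 1) - 1                 # 15 for int5, 31 for int6
    scale = float(np.abs(w).max()) / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def pack_bits(q: np.ndarray, bits: int) -> bytes:
    # Pack the small signed ints densely so an int5 tensor really costs
    # about 5 bits/param on disk (Python-int bitstream: simple, not fast).
    offset = 2 ** (bits - 1)                   # shift to unsigned before packing
    stream = 0
    for i, v in enumerate(q.ravel().tolist()):
        stream |= (v + offset) << (i * bits)
    return stream.to_bytes((q.size * bits + 7) // 8, "little")

w = np.random.randn(8, 8).astype(np.float32)
q, scale = quantize_symmetric(w, bits=5)
print(len(pack_bits(q, bits=5)), "bytes vs", w.nbytes, "bytes fp32")  # 40 vs 256
```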
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| records/track_non_record_16mb/2026-03-21_MLA_SmearGate_BigramHash/train_gpt.py | New training script implementing the stacked architecture + quantized artifact roundtrip. |
| records/track_non_record_16mb/2026-03-21_MLA_SmearGate_BigramHash/submission.json | Submission metrics/size metadata for the run. |
| records/track_non_record_16mb/2026-03-21_MLA_SmearGate_BigramHash/README.md | Human-readable summary of configuration, results, and run command. |
| records/track_non_record_16mb/2026-03-21_MLA_SmearGate_BigramHash/0a10b225-50af-46ef-8fb9-5183fe30fb70.txt | Captured training output/log for the run (currently not named train.log). |
- …ash/train_gpt.py (Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>)
- …ash/README.md (Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>)
- …ash/submission.json (Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>)
MLA + SmearGate + BigramHash + SWA
Summary
Non-record submission demonstrating a stacked architecture combining:
- MLA (multi-head latent attention)
- SmearGate MLP
- BigramHash embeddings
- SWA (sliding-window attention)
Results
- Pre-quantization bpb: 1.2838
- Tokens seen: ~3.7B within the fixed 10-minute window (see Key Finding below)
Key Finding
MLA attention, while parameter-efficient, adds significant compute overhead
per step (~83ms vs ~43ms for the baseline). In a fixed 10-minute window on
8xH100s this reduces token throughput from ~7.2B (baseline) to ~3.7B —
roughly half the training data. The pre-quantization bpb of 1.2838 suggests
the architecture itself is competitive; the bottleneck is throughput, not
capacity.
Replacing MLA with standard GQA would recover the full step budget (~11,500
steps at ~52ms/step) and likely push final bpb below 1.15.
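As a quick back-of-envelope check on those numbers, using only the step times quoted above:

```python
budget_s = 10 * 60  # fixed 10-minute training window
for name, step_ms in [("baseline", 43), ("MLA stack", 83), ("GQA estimate", 52)]:
    steps = budget_s / (step_ms / 1000)        # steps that fit in the window
    print(f"{name}: ~{steps:,.0f} steps")
# baseline: ~13,953 steps; MLA stack: ~7,229 steps; GQA estimate: ~11,538 steps
# 7,229 / 13,953 ≈ 0.52, consistent with "roughly half the training data"
# (~3.7B vs ~7.2B tokens) and with the ~11,500-step GQA projection.
```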
Architecture
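The full layer definitions live in train_gpt.py. As an illustration of the two less standard ingredients, here is a minimal sketch of (a) a bigram-hash auxiliary embedding and (b) a sliding-window attention mask. This is an assumed reading of "BigramHash" and "SWA" with hypothetical names, not the code under review; MLA and SmearGate internals are omitted.

```python
import torch
import torch.nn.functional as F

# (a) BigramHash (assumed): hash each (prev, cur) token pair into a small
# auxiliary embedding table and add it to the ordinary token embedding.
class BigramHashEmbedding(torch.nn.Module):
    def __init__(self, vocab_size: int, n_buckets: int, d_model: int):
        super().__init__()
        self.tok = torch.nn.Embedding(vocab_size, d_model)
        self.bigram = torch.nn.Embedding(n_buckets, d_model)
        self.n_buckets = n_buckets

    def forward(self, ids: torch.Tensor) -> torch.Tensor:  # ids: (B, T)
        prev = F.pad(ids, (1, 0), value=0)[:, :-1]          # shift right by one
        h = (prev * 1_000_003 + ids) % self.n_buckets       # cheap pair hash
        return self.tok(ids) + self.bigram(h)

# (b) SWA: a banded causal mask limits each query to the last `window` keys.
def sliding_window_mask(T: int, window: int) -> torch.Tensor:
    i = torch.arange(T)
    return (i[None, :] <= i[:, None]) & (i[:, None] - i[None, :] < window)

x = BigramHashEmbedding(50257, 4096, 64)(torch.randint(0, 50257, (2, 16)))
mask = sliding_window_mask(16, window=8)  # usable as a boolean attn_mask in SDPA
```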
Run Command
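The exact command is not preserved in this page. A typical launch on a single 8xH100 node would look like the following, assuming the script uses the standard torchrun launcher; the invocation is illustrative:

```bash
torchrun --standalone --nproc_per_node=8 \
  records/track_non_record_16mb/2026-03-21_MLA_SmearGate_BigramHash/train_gpt.py
```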
Files
- train_gpt.py — training script
- train.log — full training log
- submission.json — metadata