---
title: MiniGPT-from-Scratch
emoji: 🚀
colorFrom: indigo
colorTo: purple
sdk: gradio
sdk_version: 5.16.2
app_file: app.py
pinned: false
---
A from-scratch implementation of a decoder-only Transformer (GPT) language model, built in 30 days. This project is designed as a deep dive into LLM engineering, following the GPT-2 architecture.

## Features
- Custom BPE Tokenizer: trained from scratch on the dataset using Hugging Face `tokenizers`.
- GPT Architecture: multi-head causal self-attention, GELU MLPs, pre-norm LayerNorm, and residual connections.
- Optimized Training: Supports PyTorch AMP (Mixed Precision), Cosine Decay with Warmup, and weight tying.
- Interactive Demo: Built-in Gradio web app for real-time text generation.
- Evaluation: Integrated perplexity calculation for model benchmarking.
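The architecture points above can be sketched as a single pre-norm Transformer block in PyTorch. This is an illustrative sketch, not the project's actual `src/model` code; class names, sizes, and the `block_size` parameter are assumptions:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalSelfAttention(nn.Module):
    """Multi-head self-attention with a causal (lower-triangular) mask."""
    def __init__(self, d_model: int, n_heads: int, block_size: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)   # fused Q, K, V projection
        self.proj = nn.Linear(d_model, d_model)
        # Mask keeps each position from attending to future positions.
        mask = torch.tril(torch.ones(block_size, block_size))
        self.register_buffer("mask", mask.view(1, 1, block_size, block_size))

    def forward(self, x):
        B, T, C = x.shape
        q, k, v = self.qkv(x).split(C, dim=2)
        # Reshape to (B, n_heads, T, head_dim) for per-head attention.
        q = q.view(B, T, self.n_heads, C // self.n_heads).transpose(1, 2)
        k = k.view(B, T, self.n_heads, C // self.n_heads).transpose(1, 2)
        v = v.view(B, T, self.n_heads, C // self.n_heads).transpose(1, 2)
        att = (q @ k.transpose(-2, -1)) / math.sqrt(k.size(-1))
        att = att.masked_fill(self.mask[:, :, :T, :T] == 0, float("-inf"))
        att = F.softmax(att, dim=-1)
        y = (att @ v).transpose(1, 2).contiguous().view(B, T, C)
        return self.proj(y)

class Block(nn.Module):
    """Pre-norm block: LayerNorm before each sublayer, residual add after."""
    def __init__(self, d_model: int, n_heads: int, block_size: int):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = CausalSelfAttention(d_model, n_heads, block_size)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x):
        x = x + self.attn(self.ln1(x))  # residual connection around attention
        x = x + self.mlp(self.ln2(x))   # residual connection around the MLP
        return x

x = torch.randn(2, 16, 256)             # (batch, seq_len, d_model)
y = Block(256, 4, block_size=64)(x)     # output keeps the input shape
```

A full GPT stacks several such blocks between a token/position embedding and a tied output head.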
## Tech Stack

- Language: Python 3.10+
- Deep Learning: PyTorch
- Tokenization: Hugging Face Tokenizers
- Interface: Gradio
- Data: FineWeb-Edu (Sample)
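The tokenizer is built with Hugging Face `tokenizers`, but the core BPE idea it implements — repeatedly merging the most frequent adjacent symbol pair — fits in a few lines of plain Python (a toy sketch on a toy corpus, not the project's training script):

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Count adjacent symbol pairs and return the most frequent one."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get)

def merge(tokens, pair, new_symbol):
    """Replace every occurrence of `pair` with the merged symbol."""
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            out.append(new_symbol)
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

tokens = list("low lower lowest")   # start from individual characters
for _ in range(3):                  # three merge steps
    pair = most_frequent_pair(tokens)
    tokens = merge(tokens, pair, pair[0] + pair[1])

print(tokens)  # 'l'+'o' -> 'lo', then 'lo'+'w' -> 'low', etc.
```

A real BPE run performs thousands of such merges over the whole corpus; the learned merge table is what `src/tokenizer/train_tokenizer.py` produces via the `tokenizers` library.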
## Project Structure

```text
MiniGPT/
├── data/            # Raw and processed datasets
├── notebooks/       # Colab/Kaggle training templates
├── src/
│   ├── datasets/    # Data loading and preprocessing logic
│   ├── model/       # Transformer architecture (GPT, Attention, Blocks)
│   ├── tokenizer/   # BPE training and wrapper
│   ├── train/       # Training loop with AMP and validation
│   └── app.py       # Gradio Web Demo
├── checkpoints/     # Saved model weights (.pt)
└── requirements.txt # Project dependencies
```
## Quick Start

```bash
git clone https://github.com/mrshibly/MiniGPT-from-Scratch.git
cd MiniGPT-from-Scratch
pip install -r requirements.txt
```

Download and clean the data, train the tokenizer, then tokenize the corpus:

```bash
python src/datasets/download_fineweb.py
python src/datasets/clean_text.py
python src/tokenizer/train_tokenizer.py
python src/datasets/prepare_data.py
```

To train locally (CPU/GPU):

```bash
python src/train/train.py
```

Note: for the full 50M-parameter training run, use the Colab Template.
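The training loop uses cosine decay with warmup. The schedule has a simple closed form — linear ramp to the peak learning rate, then a cosine glide down to a floor. A sketch with illustrative hyperparameters (the project's actual values may differ):

```python
import math

def lr_at_step(step, max_lr=3e-4, min_lr=3e-5, warmup_steps=100, max_steps=1000):
    """Linear warmup to max_lr, then cosine decay to min_lr.

    All hyperparameter values here are illustrative defaults.
    """
    if step < warmup_steps:
        return max_lr * (step + 1) / warmup_steps      # linear warmup
    if step >= max_steps:
        return min_lr                                  # hold the floor
    progress = (step - warmup_steps) / (max_steps - warmup_steps)
    coeff = 0.5 * (1.0 + math.cos(math.pi * progress))  # 1 -> 0 over decay
    return min_lr + coeff * (max_lr - min_lr)

# Warmup peaks at max_lr, the midpoint sits halfway down, the end hits min_lr.
print(lr_at_step(99), lr_at_step(550), lr_at_step(1000))
```

In the loop, the computed rate is written into each optimizer parameter group before the step; AMP wraps the forward pass in autocast and scales the loss before backward.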
Once you have a checkpoint in `checkpoints/ckpt.pt`, launch the demo:

```bash
python src/app.py
```

## Model Configurations

| Config | Params | Layers | Heads | d_model |
|---|---|---|---|---|
| Tiny | ~7M | 4 | 4 | 256 |
| Standard | ~50M | 6 | 8 | 512 |
## Results

- Dataset: 500 MB FineWeb-Edu sample
- Parameters: 27.54 Million
- Validation Loss: 4.8415
- Perplexity: 126.65
- Status: generating coherent, English-like sentences.
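Perplexity and validation loss are two views of the same number: perplexity is the exponential of the mean per-token cross-entropy (in nats), so the figures above are mutually consistent:

```python
import math

def perplexity(mean_nll: float) -> float:
    """Perplexity = exp(mean per-token negative log-likelihood in nats)."""
    return math.exp(mean_nll)

# exp(4.8415) lands within rounding of the reported perplexity of 126.65.
print(round(perplexity(4.8415), 1))
```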
## License

MIT