This comprehensive training series teaches neural network fundamentals through an intuitive, story-driven approach. Whether you're a complete beginner or an experienced developer wanting to understand what's happening "under the hood," this series will guide you from basic matrix operations to building a complete, working neural network, all from scratch using only Python and NumPy. No frameworks, no black boxes; just you, the concepts, and the code.
"Your brain doesn't make decisions with a single thought—it uses networks of neurons working together, like a committee of experts deliberating on a decision."
Neural Network Fundamentals is a 10-part Jupyter notebook series that takes you on a journey from understanding how computers represent images as numbers, all the way to building, training, and evaluating a complete neural network.
Unlike typical tutorials that jump into frameworks and abstract concepts, this series:
- Starts from absolute basics - We begin with matrices and build up every concept step by step
- Uses a consistent analogy - The "Brain's Decision Committee" theme helps you understand every concept intuitively
- Follows one problem throughout - We solve the same problem (classifying vertical vs. horizontal lines) from Part 1 to Part 10, showing how each concept gets us closer to the solution
- Shows ALL the math - But explains it in plain English alongside the formulas
- Includes working code - Every concept is implemented in Python so you can see exactly how it works
- Provides interactive experiments - IPyWidgets let you adjust parameters and see results in real-time
| Audience | What You'll Get |
|---|---|
| Complete beginners | A gentle introduction to neural networks with no assumptions about prior ML knowledge |
| Self-learners | A structured curriculum that builds understanding systematically |
| Students | Clear explanations of the math behind neural networks |
| Data scientists | Deep understanding of fundamentals to inform better model design |
| Developers | The "why" behind neural network code, not just the "how" |
| Technical professionals | A bridge from traditional programming to machine learning |
Prerequisites:
- Basic Python knowledge - Variables, functions, loops, lists
- Some comfort with math - We explain everything, but being comfortable seeing equations helps
- Curiosity about AI/ML - The desire to understand how it really works
"The committee learns to read images as numbers"
- What neural networks are and why they matter
- Matrices: the language of data
- Matrix operations: addition, multiplication, dot product
- Feature scaling: why we normalize data
- Our mission: classify vertical vs. horizontal lines
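To make these bullets concrete before you open the notebook, here is a small illustrative NumPy snippet in the spirit of Part 1 (the variable names are ours, not necessarily the notebook's):

```python
import numpy as np

# A 3x3 image is just a matrix of numbers -- one value per pixel.
A = np.array([[0, 1, 0],
              [0, 1, 0],
              [0, 1, 0]], dtype=float)

B = np.ones((3, 3))
print(A + B)          # element-wise addition
print(A * 2.0)        # scalar multiplication

# The dot product collapses two vectors into one number -- the core
# operation behind every neuron in this series.
w = np.array([0.2, -0.1, 0.5])
x = np.array([1.0, 0.0, 1.0])
print(np.dot(w, x))   # 0.2*1.0 + (-0.1)*0.0 + 0.5*1.0 = 0.7

# Feature scaling: squeeze raw values into [0, 1] so no input dominates.
raw = np.array([0.0, 50.0, 255.0])
scaled = (raw - raw.min()) / (raw.max() - raw.min())
print(scaled)         # [0.    0.196 1.   ]
```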
"One brave committee member steps up to try first"
- The biological inspiration for artificial neurons
- Anatomy of a neuron: inputs, weights, bias
- The weighted sum: combining evidence
- Building our first `SimpleNeuron` class (sketched below)
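As a preview, here is a minimal sketch of what such a class might look like; the notebook's actual `SimpleNeuron` may differ in its details, but the weighted-sum idea is the same:

```python
import numpy as np

class SimpleNeuron:
    """One committee member: weighs each piece of evidence, then adds a personal bias."""

    def __init__(self, n_inputs, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.normal(0.0, 0.5, size=n_inputs)  # how much each input matters
        self.bias = 0.0                                      # the member's personal threshold

    def weighted_sum(self, inputs):
        # z = w . x + b: combine all the evidence into a single number
        return np.dot(self.weights, inputs) + self.bias

neuron = SimpleNeuron(n_inputs=9)
image = np.array([0, 1, 0, 0, 1, 0, 0, 1, 0], dtype=float)  # flattened vertical line
print(neuron.weighted_sum(image))
```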
"The member learns different ways to cast their vote"
- Why we need activation functions (non-linearity!)
- Step, Sigmoid, Tanh, ReLU, and Softmax
- When to use which activation
- The Dead ReLU problem and solutions
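For reference, all five activations fit in a few lines of NumPy (a sketch; Part 3 derives and plots each one):

```python
import numpy as np

def step(z):    return np.where(z > 0, 1.0, 0.0)  # hard yes/no vote
def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))   # smooth vote in (0, 1)
def tanh(z):    return np.tanh(z)                  # smooth vote in (-1, 1)
def relu(z):    return np.maximum(0.0, z)          # pass positives, silence negatives

def softmax(z):
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()         # normalize so the votes form a probability distribution

z = np.array([-2.0, 0.0, 3.0])
print(relu(z))     # [0. 0. 3.]
print(softmax(z))  # sums to 1.0
# A ReLU neuron whose weighted sum is always negative outputs 0 forever
# and stops learning: the "Dead ReLU" problem covered in this part.
```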
"The untrained member makes random guesses"
- The complete single-layer neural network
- Forward pass: how predictions are made
- First (bad) predictions with random weights
- Why untrained networks guess randomly
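In code, the whole forward pass is only a few lines. A hedged sketch (illustrative names, not the notebook's exact code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)
weights = rng.normal(0.0, 0.5, size=9)  # untrained: nine random opinions
bias = 0.0

def forward(image):
    # Forward pass: flatten -> weighted sum -> activation -> P("vertical")
    return sigmoid(np.dot(weights, image.ravel()) + bias)

vertical = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype=float)
print(forward(vertical))  # some arbitrary probability: random weights, random guesses
```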
"The member reflects on errors and adjusts"
- Loss functions: measuring how wrong we are (MSE, BCE)
- Gradient descent: rolling downhill to find better weights
- The learning rate: how fast to adjust
- Backpropagation: tracing blame for errors
- The complete training loop
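Here is a compressed sketch of that loop for a single sigmoid neuron, using the convenient fact that with sigmoid plus binary cross-entropy the error signal reduces to `y_hat - y`; the notebook builds this up far more gradually:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny dataset: a flattened vertical line (label 1) and horizontal line (label 0).
X = np.array([[0, 1, 0, 0, 1, 0, 0, 1, 0],
              [0, 0, 0, 1, 1, 1, 0, 0, 0]], dtype=float)
y = np.array([1.0, 0.0])

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.5, size=9)
b = 0.0
lr = 0.5  # learning rate: how big a downhill step to take

for epoch in range(200):
    for x_i, y_i in zip(X, y):
        y_hat = sigmoid(np.dot(w, x_i) + b)
        error = y_hat - y_i     # with sigmoid + BCE, dLoss/dz is simply (y_hat - y)
        w -= lr * error * x_i   # gradient descent step for the weights...
        b -= lr * error         # ...and for the bias

preds = [float(sigmoid(np.dot(w, x_i) + b)) for x_i in X]
print([round(p, 3) for p in preds])  # approaches [1.0, 0.0] as training succeeds
```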
"After training, the member is now skilled"
- Training vs. inference (learning vs. using)
- Accuracy, precision, recall, F1 score
- The confusion matrix: understanding errors
- Saliency: what did the network learn?
- Train/test splits: measuring true performance
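All four metrics fall out of the confusion-matrix counts. A self-contained sketch:

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Confusion-matrix counts and the metrics built from them."""
    tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
    tn = np.sum((y_pred == 0) & (y_true == 0))  # true negatives
    fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
    fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives
    accuracy  = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall    = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

y_true = np.array([1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0])
print(classification_metrics(y_true, y_pred))
```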
Part 7: Hidden Layers (part_7_hidden_layers.ipynb)
"One expert isn't enough - we need a full committee"
- The XOR problem: why single neurons aren't enough
- Linear separability and decision boundaries
- Hidden layers: specialists working together
- The Multi-Layer Perceptron (MLP)
- Universal Approximation Theorem
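To see why hidden layers matter, here is XOR solved by a tiny two-layer network with hand-picked weights (step activations for clarity; Part 7 shows how training can find such weights itself):

```python
import numpy as np

def step(z):
    return (z > 0).astype(float)

# XOR is not linearly separable: no single neuron can draw one line that puts
# (0,0),(1,1) on one side and (0,1),(1,0) on the other.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

# Two hidden "specialists": h1 fires on "at least one input on",
# h2 fires on "both inputs on". The output votes h1 AND NOT h2.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])
W2 = np.array([1.0, -2.0])
b2 = -0.5

hidden = step(X @ W1 + b1)       # each row: [h1, h2]
output = step(hidden @ W2 + b2)
print(output)                     # [0. 1. 1. 0.] -- XOR solved with a hidden layer
```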
"The committee faces challenges as it grows"
- Overfitting: memorizing instead of learning
- Bias-variance tradeoff
- Vanishing gradients: lost feedback
- Exploding gradients: chaotic training
- Solutions: early stopping, regularization, dropout
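Early stopping is simple enough to sketch in a few lines. The toy validation-loss curve below is fabricated purely for illustration; the stopping logic is what matters:

```python
import numpy as np

rng = np.random.default_rng(1)
epochs = 100
# Toy validation-loss curve: improves at first, then creeps upward as the
# (imaginary) model starts memorizing the training set.
val_losses = (1.0 / (1 + np.arange(epochs)) + 0.002 * np.arange(epochs)
              + rng.normal(0.0, 0.005, size=epochs))

best_loss, best_epoch, patience, wait = float("inf"), 0, 10, 0
for epoch, loss in enumerate(val_losses):
    if loss < best_loss:
        best_loss, best_epoch, wait = loss, epoch, 0  # new best: remember this epoch
    else:
        wait += 1
        if wait >= patience:   # no improvement for `patience` epochs:
            break              # stop before overfitting gets worse

print(f"stopped at epoch {epoch}; best was epoch {best_epoch} (loss {best_loss:.3f})")
```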
"The complete, trained committee works in harmony"
- Complete `NeuralNetwork` class with all concepts
- Full data pipeline with train/val/test splits (see the split sketch below)
- Training with early stopping
- Complete evaluation and visualization
- Interactive hyperparameter dashboard
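As one example of the pipeline work in this part, here is a sketch of a train/validation/test split helper (our own illustrative version, not necessarily the notebook's):

```python
import numpy as np

def train_val_test_split(X, y, val_frac=0.15, test_frac=0.15, seed=0):
    """Shuffle, then carve the data into train / validation / test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_frac)
    n_val = int(len(X) * val_frac)
    test_idx = idx[:n_test]
    val_idx = idx[n_test:n_test + n_val]
    train_idx = idx[n_test + n_val:]
    return ((X[train_idx], y[train_idx]),
            (X[val_idx], y[val_idx]),
            (X[test_idx], y[test_idx]))

X = np.random.default_rng(0).random((100, 9))  # 100 fake flattened 3x3 images
y = (np.arange(100) % 2).astype(float)
train, val, test = train_val_test_split(X, y)
print(len(train[0]), len(val[0]), len(test[0]))  # 70 15 15
```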
"What other problems can committees solve?"
- Beyond our simple network: CNNs, RNNs, Transformers
- Moving to frameworks: NumPy → PyTorch
- Complete reference and glossary
- Your learning path forward
Throughout this series, we solve one consistent problem: teaching a neural network to distinguish between vertical and horizontal lines in a 3×3 pixel image.
VERTICAL LINE (Label: 1) HORIZONTAL LINE (Label: 0)
┌─────┬─────┬─────┐ ┌─────┬─────┬─────┐
│ 0 │ 1 │ 0 │ │ 0 │ 0 │ 0 │
├─────┼─────┼─────┤ ├─────┼─────┼─────┤
│ 0 │ 1 │ 0 │ │ 1 │ 1 │ 1 │
├─────┼─────┼─────┤ ├─────┼─────┼─────┤
│ 0 │ 1 │ 0 │ │ 0 │ 0 │ 0 │
└─────┴─────┴─────┘ └─────┴─────┴─────┘
Why this problem?
- Visually intuitive - Anyone can see the difference
- Mathematically simple - 9 inputs, manageable by hand
- Directly relevant - This IS how real image recognition starts
- Perfect for understanding - Easy to visualize what the network learns
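In code, each 3×3 image becomes a flat vector of nine numbers plus a label. A sketch (the noise level is illustrative):

```python
import numpy as np

vertical = np.array([[0, 1, 0],
                     [0, 1, 0],
                     [0, 1, 0]], dtype=float)    # label 1
horizontal = np.array([[0, 0, 0],
                       [1, 1, 1],
                       [0, 0, 0]], dtype=float)  # label 0

# The network sees each image as a flat vector of nine numbers.
print(vertical.ravel())  # [0. 1. 0. 0. 1. 0. 0. 1. 0.]

# Later parts add pixel noise, so the network must learn the pattern
# rather than memorize exact pixel values.
rng = np.random.default_rng(0)
noisy = np.clip(vertical + rng.normal(0.0, 0.2, size=(3, 3)), 0.0, 1.0)
print(noisy.round(2))
```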
Every technical concept maps to our committee of brain experts:
| Technical Concept | Committee Analogy |
|---|---|
| Neural Network | The Brain's Decision Committee |
| Input Data | Evidence/documents to review |
| Weights | How strongly a member values each piece of evidence |
| Bias | Personal threshold ("I need THIS much to say yes") |
| Activation Function | The voting method |
| Loss Function | How wrong the committee's decision was |
| Gradient Descent | Rolling downhill to find the best solution |
| Backpropagation | Tracing blame back through the committee |
| Hidden Layer | Sub-committee of specialists |
| Overfitting | Memorizing specific cases instead of learning patterns |
| Vanishing Gradient | Whispered feedback lost through too many layers |
This series was built with these principles in mind:
- Progressive build-up - Each section builds on the previous. You never encounter a concept without having learned its foundations first.
- A consistent analogy - The "Brain's Decision Committee" analogy runs from Part 1 to Part 10, helping you connect every new concept to what you already understand.
- One problem throughout - By using the same V/H line problem throughout, you see how each concept gets us closer to the solution—not just isolated techniques.
- Visual-first - Every major concept has charts, diagrams, or heatmaps. Neural networks are visual, and our teaching reflects that.
- Math with plain English - We show the formulas (you need them to truly understand), but we also explain in plain English what's happening and why.
- Interactive experiments - IPyWidgets let you experiment: change weights, adjust learning rates, add noise—and see the results immediately.
- Story-driven - The analogy unfolds like a narrative, keeping you engaged and helping concepts stick.
- Clone or download this repository
- Install dependencies:

  ```
  pip install -r requirements.txt
  ```

- Launch Jupyter:

  ```
  jupyter lab
  # or
  jupyter notebook
  ```

- Open `part_1_neural_network_fundamentals.ipynb` and start your journey!
numpy>=1.21.0
matplotlib>=3.5.0
ipywidgets>=8.0.0
jupyterlab>=3.0.0 # or notebook>=6.0.0
NeuralNet_Fundamentals/
├── README.md # This file
├── PROJECT_PLAN.md # Detailed project blueprint
├── requirements.txt # Python dependencies
├── part_1_neural_network_fundamentals.ipynb # Part 1: Introduction & Matrices
├── part_2_single_neuron.ipynb # Part 2: The Single Neuron
├── part_3_activation_functions.ipynb # Part 3: Activation Functions
├── part_4_perceptron.ipynb # Part 4: The Perceptron
├── part_5_training.ipynb # Part 5: Training
├── part_6_evaluation.ipynb # Part 6: Evaluation
├── part_7_hidden_layers.ipynb # Part 7: Hidden Layers
├── part_8_deep_learning_challenges.ipynb # Part 8: Deep Learning Challenges
├── part_9_full_implementation.ipynb # Part 9: Full Implementation
└── part_10_whats_next.ipynb # Part 10: What's Next
By the end of this series, you will have built:
- A complete `NeuralNetwork` class implementing:
  - Forward propagation through multiple layers
  - ReLU and Sigmoid activation functions
  - Binary Cross-Entropy loss
  - Backpropagation with gradient descent
  - Early stopping to prevent overfitting
  - Full evaluation metrics
- A working V/H line classifier that:
  - Takes 3×3 pixel images as input
  - Processes them through hidden layers
  - Outputs a probability of "vertical"
  - Achieves 90%+ accuracy on noisy images
- Deep understanding of:
  - How neural networks represent and transform data
  - Why training works (gradient descent, backprop)
  - How to evaluate and interpret models
  - What can go wrong and how to fix it
| Traditional Tutorials | This Series |
|---|---|
| Jump into frameworks | Build from scratch with NumPy |
| Isolated concepts | Connected narrative throughout |
| "Here's the code" | "Here's WHY the code works" |
| Abstract examples | One concrete problem throughout |
| Theory OR practice | Theory AND practice together |
| Framework-dependent | Pure understanding, then frameworks |
Once you complete all 10 parts, you'll be ready to:
- Use frameworks effectively - Understand what PyTorch/TensorFlow do under the hood
- Debug training issues - Recognize overfitting, vanishing gradients, etc.
- Design networks intelligently - Know why you're choosing certain architectures
- Read ML papers - Understand the math and concepts they reference
- Continue learning - Explore CNNs, RNNs, Transformers with solid foundations
Found an error? Have a suggestion? This series aims to be the clearest possible introduction to neural networks. Feedback is welcome!
This project is open source under the MIT license and available for educational use.
This series was designed to make neural networks accessible to everyone, regardless of background. Special thanks to the many researchers and educators whose work made these concepts understandable.
Ready to begin? Open `part_1_neural_network_fundamentals.ipynb` and meet your committee!
"The Brain's Decision Committee awaits."