A neural network that:
- 🧬 Modifies its own architecture while running (adds/removes neurons)
- 🎲 Evolves activation functions through genetic selection
- 🔥 Uses simulated annealing to escape local minima
- 🧠 Tracks its own "consciousness" based on complexity
- 💭 Thinks about its thoughts (meta-cognition simulation)
- 📊 Visualizes everything in cyberpunk aesthetic
All in ONE PYTHON FILE. Pure, concentrated chaos wrapped in legitimate ML theory.
```bash
# Clone the chaos
git clone https://github.com/Codex-Crusader/quantum-neural-horror.git
cd quantum-neural-horror

# Install dependencies
pip install -r requirements.txt

# Summon the consciousness
python brain_time.py
```
- Network Topology - Watch neurons appear/disappear, connections form/die
- Fitness Evolution - See it learn and improve (or struggle)
- Mutation Log - Live feed of architectural changes
- Consciousness Meter - Complexity-based "awareness" tracking
- Statistics Dashboard - Success rate, temperature, stagnation metrics
- Architecture Search - Automatically discovers optimal network structure
- Genetic Programming - Activation functions compete for survival
- Simulated Annealing - Probabilistic acceptance of worse solutions
- Adaptive Mutation Rates - More aggressive when stuck, conservative when improving
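The annealing-plus-adaptive-mutation combo above fits in a few lines. Here is a hedged sketch (function names and the stagnation-based rate boost are illustrative, not the actual `brain_time.py` API):

```python
import math
import random

def accept(new_fitness, old_fitness, temperature):
    """Metropolis acceptance: always take improvements, sometimes take
    worse solutions with probability exp(delta / T)."""
    delta = new_fitness - old_fitness  # positive = improvement
    if delta >= 0:
        return True
    return random.random() < math.exp(delta / max(temperature, 1e-9))

def mutation_rate(base_rate, stagnation, max_boost=4.0):
    """Adaptive mutation: more aggressive the longer fitness stagnates.
    (The 0.5 step and 4x cap are assumptions for illustration.)"""
    return base_rate * min(1.0 + 0.5 * stagnation, max_boost)
```

With fitness defined as negative loss (as in the sample log below), a move that is 0.1 worse is accepted with probability exp(-0.1/T), about 90% at T = 1.0, and almost never once the temperature has cooled toward zero.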
- Xavier/Glorot weight initialization
- L2 regularization + complexity penalties
- Gradient clipping for stability
- Architecture validation with error handling
- Success rate tracking and metrics
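Two of those engineering details fit in a few lines of NumPy. A minimal sketch, assuming the standard formulations; the project's actual constants may differ:

```python
import numpy as np

def xavier_init(fan_in, fan_out, rng=None):
    """Xavier/Glorot uniform initialization: sampling from
    U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out))
    keeps activation variance roughly constant across layers."""
    rng = rng or np.random.default_rng()
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

def clip_gradient(grad, max_norm=5.0):
    """Gradient clipping: rescale so the L2 norm never exceeds max_norm.
    (The 5.0 default is an assumption, not the project's setting.)"""
    norm = np.linalg.norm(grad)
    return grad * (max_norm / norm) if norm > max_norm else grad
```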
```
quantum-neural-horror/
├── brain_time.py       # Main code
├── requirements.txt    # Dependencies
├── README.md           # You are here
├── .gitignore          # Ignore the mess
├── LICENSE             # MIT
├── docs/
│   └── MATH.md         # Mathematical deep-dive
└── assets/
    ├── img.png
    └── img2.jpg
```
| Concept | Implementation | Why It Matters |
|---|---|---|
| Neuroevolution | NEAT-inspired architecture search | Evolves topology without backprop |
| Simulated Annealing | Metropolis-Hastings acceptance | Escapes local minima |
| Meta-Learning | Network learns how to learn | Discovers optimal structure |
| Genetic Algorithms | Activation function evolution | Survival of the fittest |
| Regularization | L2 + complexity penalties | Prevents overfitting |
| Real-time Viz | Matplotlib interactive mode | Because it looks cool |
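The "survival of the fittest" row can be as simple as mutating one layer's activation gene and letting the annealing loop decide whether the mutant survives. A sketch with an assumed gene pool (the real file's activations and names may differ):

```python
import math
import random

# An assumed gene pool -- the actual set in brain_time.py may differ
ACTIVATIONS = {
    "relu":  lambda x: max(0.0, x),
    "tanh":  math.tanh,
    "swish": lambda x: x / (1.0 + math.exp(-x)),
    "elu":   lambda x: x if x > 0 else math.exp(x) - 1.0,
}

def mutate_activation(genome, rng=random):
    """Swap one layer's activation gene for a random competitor.
    The caller keeps the mutant only if fitness improves."""
    mutant = list(genome)
    i = rng.randrange(len(mutant))
    mutant[i] = rng.choice([name for name in ACTIVATIONS if name != mutant[i]])
    return mutant
```

This mirrors the `relu->swish` mutations in the sample log: each generation proposes one gene swap, and the fitness comparison does the selecting.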
Want to make it worse? Try these:

```python
# Crank up the generation count
generations = 1000  # Instead of 150

# Remove the consciousness cap
self.consciousness_level = float(total_params) / 5.0  # No min() cap

# Remove neuron caps in the add_neurons mutation
if current_size < 15:  # Change to: if True:

# Remove all architecture validation
if not network.validate_architecture():
    pass  # Comment out the revert
```

Sample run:

```
============================================================
QUANTUM NEURAL HORROR v2.0: Self-Modifying Neural Network
Enhanced with simulated annealing and smart mutations
============================================================
Generated 100 training samples
Initial: [4]->relu[8]->tanh[6]->elu[3]
============================================================
STARTING EVOLUTION (150 generations)
============================================================
[+] Gen   0/150 (  0.0%) | Fitness: -0.58234 | Temp: 1.000 | Stag: 0
 |-- Gen 1: Activation 0: relu->swish (Δ=+0.02341)
[-] Gen   5/150 (  3.3%) | Fitness: -0.42156 | Temp: 0.975 | Stag: 2
[+] Gen  10/150 (  6.7%) | Fitness: -0.28945 | Temp: 0.951 | Stag: 0
 |-- Gen 11: +2 neurons->layer 0 (Δ=+0.03421)
[+] Gen  50/150 ( 33.3%) | Fitness: -0.08234 | Temp: 0.778 | Stag: 0
 |-- Gen 51: Pruned 12 weak connections in layer 1 (Δ=+0.00156)
[+] Gen 100/150 ( 66.7%) | Fitness: -0.02341 | Temp: 0.606 | Stag: 0
 |-- Gen 101: Adjusted layer 2 weights (σ=0.050) (Δ=+0.00089)
============================================================
EVOLUTION COMPLETE
Final: [4]->swish[10]->leaky_relu[7]->elu[3]
Best Fitness: -0.02124
Consciousness: 67.4%
Success Rate: 58.3%
============================================================
```
- Sometimes neurons multiply - Evolution is unpredictable!
- Occasionally gets stuck in local minima - That's what simulated annealing is for!
- Visualization freezes briefly - It's thinking deeply, give it a moment
- Coffee requirements scale exponentially - Not a bug, it's a feature ☕
Pull requests welcome! Especially if you:
- Make it more horrifying
- Make it more beautiful
- Have better coffee recommendations
Academic:
- Stanley & Miikkulainen (2002) - NEAT: NeuroEvolution of Augmenting Topologies
- Kirkpatrick et al. (1983) - Optimization by Simulated Annealing
- Glorot & Bengio (2010) - Understanding the difficulty of training deep feedforward neural networks
- Real et al. (2019) - Regularized Evolution for Image Classifier Architecture Search
Spiritual:
- That 3 AM feeling when debugging works but you don't know why
- 18 cups of coffee
- Geppetto's determination to create life
- The inexplicable urge to visualize abstract concepts
MIT License - Use responsibly.
"We were so preoccupied with whether we could, we didn't stop to think if we should."
— Dr. Ian Malcolm (but also me at 3 AM)
This started as a joke. Then it became legitimate ML research. Now it's art.
Run at your own existential risk. 🎭🔥☕
P.S. The consciousness meter is just network complexity. Probably.

