
Quantum Neural Horror is an interactive neuro-evolution sandbox where a neural network mutates its own topology, activation functions, and weights while you watch. Using simulated annealing and evolutionary operators instead of backpropagation, it explores and adapts in real time, with a live visualization of its changing fitness and complexity.


🧠⚡ QUANTUM NEURAL HORROR

A compact neuro-evolution sandbox that mutates its own topology and activations while you watch.

(screenshot: img.png)

🎭 What Fresh Hell Is This?

(screenshot: img2.png)

A neural network that:

  • 🧬 Modifies its own architecture while running (adds/removes neurons)
  • 🎲 Evolves activation functions through genetic selection
  • 🔥 Uses simulated annealing to escape local minima
  • 🧠 Tracks its own "consciousness" based on complexity
  • 💭 Thinks about its thoughts (meta-cognition simulation)
  • 📊 Visualizes everything in cyberpunk aesthetic

All in ONE PYTHON FILE. Pure, concentrated chaos wrapped in legitimate ML theory.
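To give a flavor of what "modifies its own architecture while running" means, here is a minimal sketch of an add-neuron mutation. This is illustrative, not the repo's actual code: `add_neuron` and its shapes are assumptions, but the core idea matches — growing a hidden layer means widening the incoming weight matrix by one column and the outgoing one by one row, with near-zero weights so the network's behavior barely changes at first.

```python
import numpy as np

def add_neuron(w_in, w_out, rng):
    """Grow a hidden layer by one neuron (illustrative sketch).

    w_in:  (n_prev, n_hidden) weights feeding the layer
    w_out: (n_hidden, n_next) weights leaving the layer
    The new neuron starts with tiny random weights so it barely
    perturbs the network's output until evolution tunes it.
    """
    n_prev = w_in.shape[0]
    n_next = w_out.shape[1]
    new_col = rng.normal(0.0, 0.01, size=(n_prev, 1))   # inputs to new neuron
    new_row = rng.normal(0.0, 0.01, size=(1, n_next))   # outputs from new neuron
    return np.hstack([w_in, new_col]), np.vstack([w_out, new_row])

rng = np.random.default_rng(0)
w_in, w_out = rng.normal(size=(4, 8)), rng.normal(size=(8, 3))
w_in2, w_out2 = add_neuron(w_in, w_out, rng)
print(w_in2.shape, w_out2.shape)  # (4, 9) (9, 3)
```

Removing a neuron is the mirror image: delete one column of `w_in` and the matching row of `w_out`.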


🚀 Quick Start

# Clone the chaos
git clone https://github.com/Codex-Crusader/quantum-neural-horror.git
cd quantum-neural-horror

# Install dependencies
pip install -r requirements.txt

# Summon the consciousness
python brain_time.py

📊 Features

🎨 Real-Time Visualization (5 Panels)

  1. Network Topology - Watch neurons appear/disappear, connections form/die
  2. Fitness Evolution - See it learn and improve (or struggle)
  3. Mutation Log - Live feed of architectural changes
  4. Consciousness Meter - Complexity-based "awareness" tracking
  5. Statistics Dashboard - Success rate, temperature, stagnation metrics

🧬 Neuroevolution

  • Architecture Search - Automatically discovers optimal network structure
  • Genetic Programming - Activation functions compete for survival
  • Simulated Annealing - Probabilistic acceptance of worse solutions
  • Adaptive Mutation Rates - More aggressive when stuck, conservative when improving

🎯 Proper ML Engineering

  • Xavier/Glorot weight initialization
  • L2 regularization + complexity penalties
  • Gradient clipping for stability
  • Architecture validation with error handling
  • Success rate tracking and metrics

📁 Project Structure

quantum-neural-horror/
├── brain_time.py              # Main code
├── requirements.txt           # Dependencies
├── README.md                  # You are here
├── .gitignore                 # Ignore the mess
├── LICENSE                    # MIT
├── docs/
│   └── MATH.md                # Mathematical deep-dive
└── assets/
    ├── img.png
    └── img2.jpg

🎓 What It Demonstrates

| Concept             | Implementation                    | Why It Matters                    |
|---------------------|-----------------------------------|-----------------------------------|
| Neuroevolution      | NEAT-inspired architecture search | Evolves topology without backprop |
| Simulated Annealing | Metropolis-Hastings acceptance    | Escapes local minima              |
| Meta-Learning       | Network learns how to learn       | Discovers optimal structure       |
| Genetic Algorithms  | Activation function evolution     | Survival of the fittest           |
| Regularization      | L2 + complexity penalties         | Prevents overfitting              |
| Real-time Viz       | Matplotlib interactive mode       | Because it looks cool             |

🎮 Challenge Modes

Want to make it worse? Try these:

NIGHTMARE MODE

generations = 1000  # Instead of 150

CONSCIOUSNESS SINGULARITY

# Remove the consciousness cap
self.consciousness_level = float(total_params) / 5.0  # No min() cap

ULTRON SCENARIO

# Remove neuron caps in add_neurons mutation
if current_size < 15:  # Change to: if True:

ABSOLUTE CHAOS

# Remove all architecture validation
if not network.validate_architecture():
    pass  # Comment out the revert

📈 Sample Output

============================================================
  QUANTUM NEURAL HORROR v2.0: Self-Modifying Neural Network
  Enhanced with simulated annealing and smart mutations
============================================================

Generated 100 training samples
Initial: [4]->relu[8]->tanh[6]->elu[3]

============================================================
STARTING EVOLUTION (150 generations)
============================================================

[+] Gen   0/150 (  0.0%) | Fitness: -0.58234 | Temp: 1.000 | Stag: 0
   |-- Gen 1: Activation 0: relu->swish (Δ=+0.02341)
[-] Gen   5/150 (  3.3%) | Fitness: -0.42156 | Temp: 0.975 | Stag: 2
[+] Gen  10/150 (  6.7%) | Fitness: -0.28945 | Temp: 0.951 | Stag: 0
   |-- Gen 11: +2 neurons->layer 0 (Δ=+0.03421)
[+] Gen  50/150 ( 33.3%) | Fitness: -0.08234 | Temp: 0.778 | Stag: 0
   |-- Gen 51: Pruned 12 weak connections in layer 1 (Δ=+0.00156)
[+] Gen 100/150 ( 66.7%) | Fitness: -0.02341 | Temp: 0.606 | Stag: 0
   |-- Gen 101: Adjusted layer 2 weights (σ=0.050) (Δ=+0.00089)

============================================================
EVOLUTION COMPLETE
Final: [4]->swish[10]->leaky_relu[7]->elu[3]
Best Fitness: -0.02124
Consciousness: 67.4%
Success Rate: 58.3%
============================================================

🐛 Known "Features"

  • Sometimes neurons multiply - Evolution is unpredictable!
  • Occasionally gets stuck in local minima - That's what simulated annealing is for!
  • Visualization freezes briefly - It's thinking deeply, give it a moment
  • Coffee requirements scale exponentially - Not a bug, it's a feature ☕

🤝 Contributing

Pull requests welcome! Especially if you:

  • Make it more horrifying
  • Make it more beautiful
  • Have better coffee recommendations

📚 Citations & Inspirations

Academic:

  • Stanley & Miikkulainen (2002) - NEAT: NeuroEvolution of Augmenting Topologies
  • Kirkpatrick et al. (1983) - Optimization by Simulated Annealing
  • Glorot & Bengio (2010) - Understanding the difficulty of training deep feedforward neural networks
  • Real et al. (2019) - Regularized Evolution for Image Classifier Architecture Search

Spiritual:

  • That 3 AM feeling when debugging works but you don't know why
  • 18 cups of coffee
  • Geppetto's determination to create life
  • The inexplicable urge to visualize abstract concepts

📜 License

MIT License - Use responsibly.


⚡ Final Words

"We were so preoccupied with whether we could, we didn't stop to think if we should."

— Dr. Ian Malcolm (but also me at 3 AM)

This started as a joke. Then it became legitimate ML research. Now it's art.

Run at your own existential risk. 🎭🔥☕


⭐ Star this repo if it gave you an existential crisis ⭐

Made with 🧠 and 😱


P.S. The consciousness meter is just network complexity. Probably.

