
ML Robustness & Security Research Laboratory

Stress-testing ML models against data drift, adversarial mutations & silent failure



⚙️ Project Note: This project is a special test case created to demonstrate the capabilities of the LazyDeve Agent. Project planning, the codebase, documentation, and feature implementation were generated, structured, and committed under the automated supervision of the LazyDeve agent.

https://github.com/Lev-AI/LazyDeve-Agent


🛡️ Concept: Security Research Lab

"Security is not a product, but a process. And AI is the new perimeter."

Modern cybersecurity relies heavily on Machine Learning. However, hackers know the main weakness of these models: they learn from static patterns. By exploiting this, attackers craft "Adversarial Examples" — malicious traffic disguised as normal behavior.

This project is a Red Teaming laboratory designed to demonstrate Model Evasion. It proves that high accuracy on a test set is meaningless if the model is brittle to statistical manipulation.

This project simulates the cyber warfare loop:

  1. Build Defense: Train a baseline classifier (Random Forest) on network traffic.
  2. Simulate Attack: Use the Mutation Engine to inject noise and corrupt features, mimicking how attackers hide their tracks.
  3. Expose Failure: Visualize how "state-of-the-art" models degrade from ~90% to ~50% accuracy under attack.
  4. Detect Breach: Use advanced Drift Detection (Evidently AI) to catch attacks that bypass the model's logic.
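The four steps above can be sketched end-to-end in a few lines. This is a minimal illustration using scikit-learn on synthetic data, not the repository's actual pipeline (which lives in `src/` and may differ in models, features, and attack parameters):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 1. Build Defense: train a baseline Random Forest on "clean" traffic features.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# 2. Simulate Attack: inject heavy Gaussian noise to mimic feature manipulation.
X_attacked = X_test + rng.normal(0, 3.0, size=X_test.shape)

# 3. Expose Failure: compare clean vs. attacked accuracy.
clean_acc = accuracy_score(y_test, model.predict(X_test))
attacked_acc = accuracy_score(y_test, model.predict(X_attacked))
print(f"clean: {clean_acc:.2f}, under attack: {attacked_acc:.2f}")
```

Step 4 (drift detection) then compares the statistical distributions of the clean and attacked feature sets rather than relying on the model's own predictions.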

🎯 Key Features

| Feature | Description |
|---|---|
| 🛡️ Baseline Training | Train RandomForest or XGBoost classifiers. All metrics (accuracy, F1, precision) are automatically logged to MLflow. |
| ⚔️ Mutation Engine | Three attack modes: Noise (injection), Zeroing (sensor failure/evasion), Swap (protocol mismatch), with adjustable intensity. |
| 🚨 Drift Detection | Integration of Evidently AI (DataDriftPreset) to generate professional HTML reports on statistical data shifts. |
| 📉 Explainability (X-Ray) | Uses SHAP (TreeExplainer) to X-ray the model, revealing exactly which features drove the AI's decision. |
| 📊 Interactive Dashboard | Full Streamlit UI: Data & Baseline → Attack Lab → X-Ray → Drift Monitor. Run experiments without writing code. |
| 📋 Automated Reports | Automatic generation of experiment artifacts and datasets. |
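The three attack modes of the Mutation Engine can be sketched as a single function. The name `mutate` and its signature are hypothetical (the real `mutation_engine.py` may use different names and semantics); the sketch only illustrates the idea behind each mode:

```python
import numpy as np

def mutate(X, mode="noise", intensity=0.5, rng=None):
    """Apply one of three illustrative attack modes to a feature matrix."""
    if rng is None:
        rng = np.random.default_rng(0)
    X = X.copy()
    n_cols = X.shape[1]
    if mode == "noise":
        # Inject Gaussian noise scaled by intensity and each feature's spread.
        X = X + rng.normal(0, intensity * X.std(axis=0), size=X.shape)
    elif mode == "zeroing":
        # Sensor failure / evasion: zero out a random fraction of cells.
        mask = rng.random(X.shape) < intensity
        X[mask] = 0.0
    elif mode == "swap":
        # Protocol mismatch: swap the values of two randomly chosen columns.
        i, j = rng.choice(n_cols, size=2, replace=False)
        X[:, [i, j]] = X[:, [j, i]]
    else:
        raise ValueError(f"unknown mode: {mode}")
    return X
```

Because each mode preserves the matrix shape, mutated data can be fed straight back into the trained classifier to measure robustness.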

🚀 Quick Start

1. Clone & Install

```shell
git clone https://github.com/Lev-AI/lazydeve_test_open_the_black_box.git
cd lazydeve_test_open_the_black_box
pip install -r requirements.txt
```

2. Generate Synthetic Data

```shell
python src/generate_synthetic.py
```

Creates data/synthetic_data.csv — a balanced dataset for experiments.
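For reference, a balanced dataset of this kind can be produced with scikit-learn. This is a hypothetical stand-in for `src/generate_synthetic.py`; the actual script's column names and feature logic may differ:

```python
import os
import pandas as pd
from sklearn.datasets import make_classification

# Balanced two-class dataset: equal class weights, fixed seed for reproducibility.
X, y = make_classification(
    n_samples=2000,
    n_features=10,
    weights=[0.5, 0.5],
    random_state=42,
)
df = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(X.shape[1])])
df["label"] = y

os.makedirs("data", exist_ok=True)
df.to_csv("data/synthetic_data.csv", index=False)
```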

3. Launch Dashboard

```shell
streamlit run src/dashboard.py
```

Windows users can simply use the one-click launcher:

```shell
run_dashboard.bat
```

🏗️ Project Architecture (LazyDeve Structure)

```
enter-the-black-box/
│
├── src/
│   ├── data_loader.py
│   ├── baseline_model.py
│   ├── mutation_engine.py
│   ├── drift_detector.py
│   ├── robustness_eval.py
│   ├── explainability.py
│   ├── report_generator.py
│   ├── dashboard.py
│   └── generate_synthetic.py
│
├── notebooks/
├── data/
├── docs/
├── mlruns/
├── run_dashboard.bat
└── README.md
```

👤 Credits

Created & Engineered by LazyDeve Agent 🤖
Under the supervision of Kapitan Lev ⚓

AI-powered Cyber Analyst & ML Researcher
License: MIT — free for education and research.

