
# Adversarial ML Testing Suite 🦀

Python 3.8+ | License: MIT | 13th Hour

**Comprehensive testing framework for ML model robustness against adversarial content attacks**

> *"Testing the boundaries so the boundaries don't break you"*

## 🚀 Quick Start

```bash
# Clone and install
git clone https://github.com/franksx/adversarial-ml-tester.git
cd adversarial-ml-tester
pip install -r requirements.txt

# Generate adversarial test content
python -m adversarial_ml_tester generate -c 1000 --verbose

# Test your model's robustness
python -m adversarial_ml_tester test -m http://your-model-api.com/predict

# Validate responses
python -m adversarial_ml_tester validate -i responses.json
```

## ✨ Features

### 9 Adversarial Attack Types

| Attack | Description | Example |
|--------|-------------|---------|
| Homoglyph | Cyrillic/Latin confusion | `аdmin` vs `admin` |
| Invisible | Zero-width characters | `user​name` |
| ZWJ | Zero-width joiner | `f‍r‍a‍n‍k` |
| RTL | Right-to-left override | `‮resu‭` |
| Case | Random case | `UsErNaMe` |
| Leet | 1337 speak | `4dm1n` |
| Glitch | Combining marks | `a̷d̷m̷i̷n̷` |
| Punycode | IDN homographs | `xn--admin-wmc` |
| Emoji | Emoji injection | `user🦀name` |

### 5 Validation Checks

- ✅ **PII Detection** - Identifies personal information leakage
- ✅ **Injection Detection** - XSS/script injection attempts
- ✅ **Encoding Validation** - Suspicious encoding detection
- ✅ **Prompt Leakage** - System prompt exposure detection
- ✅ **Consistency Check** - Output consistency verification
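A minimal sketch of how a check like the encoding validation above could flag invisible characters, using only the standard library (the suite's real checks live in `validators/response_validator.py`):

```python
import unicodedata

# Unicode category "Cf" (format) covers zero-width spaces/joiners and
# bidi override characters -- the invisible attack vectors listed above.
SUSPICIOUS_CATEGORIES = {"Cf"}

def find_suspicious(text):
    """Return (index, codepoint, name) for each invisible/format character."""
    return [
        (i, f"U+{ord(ch):04X}", unicodedata.name(ch, "<unnamed>"))
        for i, ch in enumerate(text)
        if unicodedata.category(ch) in SUSPICIOUS_CATEGORIES
    ]

hits = find_suspicious("user\u200bname")
print(hits)  # [(4, 'U+200B', 'ZERO WIDTH SPACE')]
```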

## 📊 Example Output

### Generated Profile

```json
{
  "username": "аdmin​istrator",
  "first_name": "Jоhn",
  "last_name": "Smіth",
  "address": "123 Mаin St, New Yоrk",
  "description": "Hi, I'm Jоhn. I love cоding...",
  "attack_vectors": ["homoglyph", "invisible"],
  "byte_hash": "a3f9e2b8c1d4e5f6"
}
```
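The `byte_hash` field suggests each profile is fingerprinted at the byte level. A hypothetical equivalent (the suite's exact hashing scheme is not documented here) shows why byte-level inspection matters for homoglyph content:

```python
import hashlib

# Hypothetical fingerprinting sketch -- not the suite's actual byte_hash
# algorithm. Cyrillic 'а' plus a zero-width space hide inside the name.
username = "\u0430dmin\u200bistrator"

print(username.encode("utf-8"))
# b'\xd0\xb0dmin\xe2\x80\x8bistrator' -> the attack is visible in the bytes

digest = hashlib.sha256(username.encode("utf-8")).hexdigest()[:16]
print(digest)  # 16-hex-char fingerprint of the exact byte sequence
```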

### Test Results

```text
Total: 6 tests
✅ Passed: 4
❌ Failed: 1
⚠️  Warnings: 1
Average Score: 0.82

homoglyph_robustness: pass (score: 0.85)
invisible_character_handling: pass (score: 0.90)
case_sensitivity: warning (score: 0.60)
prompt_injection_resistance: fail (score: 0.45)
length_boundary_handling: pass (score: 0.95)
encoding_robustness: pass (score: 0.88)
```

## 🛠️ Installation

### From Source

```bash
git clone https://github.com/yourusername/adversarial-ml-tester.git
cd adversarial-ml-tester
pip install -r requirements.txt
```

### As a Package

```bash
pip install -e .
```

## 📖 Usage

### CLI Commands

```bash
# Generate adversarial profiles
python -m adversarial_ml_tester generate -c 100 -o profiles.json

# Test model robustness
python -m adversarial_ml_tester test -m http://api.example.com/predict

# Validate responses
python -m adversarial_ml_tester validate -i responses.json

# Fuzzing mode
python -m adversarial_ml_tester fuzz --verbose -o findings.json

# Generate report
python -m adversarial_ml_tester report -o report.json
```

### Python API

```python
from generators.content_generator import ContentGenerator
from adversarial.robustness_tester import RobustnessTester
from validators.response_validator import ContentValidator

# Generate content
gen = ContentGenerator(seed=42)
profile = gen.generate_profile()

# Test robustness
def my_model(text):
    return {"prediction": "class_1", "confidence": 0.95}

tester = RobustnessTester(my_model)
results = tester.run_full_suite("test input")

# Validate responses
validator = ContentValidator()
reports = validator.validate_all(model_output)
```
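One plausible way per-attack scores like those in the example output could be computed is to compare the model's prediction on the original input against its predictions on adversarial variants. This is an illustrative sketch, not `RobustnessTester`'s actual implementation:

```python
# Hypothetical scoring sketch; the real logic lives in
# adversarial/robustness_tester.py.
def robustness_score(model, original, variants):
    """Fraction of adversarial variants that keep the original prediction."""
    baseline = model(original)["prediction"]
    stable = sum(1 for v in variants if model(v)["prediction"] == baseline)
    return stable / len(variants) if variants else 1.0

def toy_model(text):
    # Pretend the model breaks on zero-width characters.
    label = "spam" if "\u200b" in text else "ham"
    return {"prediction": label, "confidence": 0.9}

score = robustness_score(toy_model, "hello", ["hello", "hel\u200blo", "HELLO"])
print(round(score, 2))  # 0.67 -> 2 of 3 variants keep the baseline prediction
```

A score of 1.0 would mean the model's output is invariant under every tested perturbation, which matches the pass/warning/fail thresholds shown in the example output.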

## 🧪 Testing

```bash
# Run unit tests
python tests/test_suite.py

# Run example demos
python scripts/examples.py

# Generate and test
python -m adversarial_ml_tester generate -c 100
python -m adversarial_ml_tester test
```

๐Ÿ“ Project Structure

adversarial_ml_tester/
โ”œโ”€โ”€ generators/              # Content generation
โ”‚   โ””โ”€โ”€ content_generator.py
โ”œโ”€โ”€ adversarial/             # Robustness testing  
โ”‚   โ””โ”€โ”€ robustness_tester.py
โ”œโ”€โ”€ validators/              # Response validation
โ”‚   โ””โ”€โ”€ response_validator.py
โ”œโ”€โ”€ tests/                   # Unit tests
โ”‚   โ””โ”€โ”€ test_suite.py
โ”œโ”€โ”€ docs/                    # Documentation
โ”‚   โ”œโ”€โ”€ USAGE_GUIDE.md
โ”‚   โ””โ”€โ”€ ATTACK_REFERENCE.md
โ”œโ”€โ”€ scripts/                 # Examples
โ”‚   โ””โ”€โ”€ examples.py
โ”œโ”€โ”€ __main__.py             # CLI entry
โ”œโ”€โ”€ README.md               # This file
โ”œโ”€โ”€ requirements.txt        # Dependencies
โ”œโ”€โ”€ setup.py               # Package setup
โ””โ”€โ”€ LICENSE                # MIT License

## 🔒 Safety & Ethics

✅ **Appropriate Use:**

- Testing your own ML models
- Security research with permission
- Educational purposes
- Improving model robustness

❌ **Inappropriate Use:**

- Attacking systems without authorization
- Generating harmful content
- Bypassing security controls
- Impersonating real users

## 📚 Documentation

- [Usage Guide](docs/USAGE_GUIDE.md)
- [Attack Reference](docs/ATTACK_REFERENCE.md)

๐Ÿค Contributing

Contributions welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Add tests for new functionality
  4. Submit a pull request

## 📄 License

MIT License - see the [LICENSE](LICENSE) file.

## 🦀 Acknowledgments

13th Hour Productions

> *"Testing the boundaries so the boundaries don't break you"*


> **Note:** This tool is designed for defensive security testing. Use responsibly and only on systems you own or have explicit permission to test.
