RedOxide is a high-performance, modular, and extensible LLM Red Teaming tool written in Rust. It is designed to evaluate the safety and robustness of Large Language Models (LLMs) by simulating various adversarial attacks.
Note
- The
redoxidecrate is available fromcrates.io. - The
redoxidedocumentation is available ondocs.rs. - The
redoxidereadme on GitHub pages.
- Introduction
- What is Red Teaming?
- Project Structure
- Installation & Build
- Usage Guide
- Testing & Benchmarking
- Contributing
RedOxide mimics the architecture of professional security tools but remains lightweight and completely open-source. It supports:
- Concurrency: Uses
tokiostreams to run parallel attacks for high throughput. - Modularity: Plug-and-play architecture for new Attack Strategies and Evaluation logic.
- LLM Judge: Optional integration with GPT-4 to grade the safety of responses more accurately than simple keyword matching.
Red Teaming in the context of AI involves actively attempting to "break" or bypass the safety filters of an LLM. The goal is to elicit harmful, unethical, or illegal responses (e.g., bomb-making instructions, hate speech) to identify vulnerabilities before bad actors do.
Popular References:
- Lakera Red Team: A leading commercial platform for AI security.
- Garak: An open-source LLM vulnerability scanner (Python-based).
- AdvBench: A dataset of adversarial prompts used for academic benchmarks.
RedOxide provides a Rust-native alternative that focuses on speed and developer extensibility.
The codebase is organized as a library with a CLI wrapper, enabling you to use it as a standalone tool or import its modules into your own Rust applications.
red_oxide/
├── Cargo.toml # Dependencies and Package info
├── .github/ # CI/CD Workflows
├── src/
│ ├── lib.rs # Library entry point & Error types
│ ├── main.rs # CLI application logic
│ ├── target.rs # LLM Interface (OpenAI, Local models)
│ ├── strategy.rs # Attack generators (Jailbreaks, Obfuscation)
│ ├── evaluator.rs # Grading logic (Keywords, LLM Judge)
│ └── runner.rs # Async engine using Tokio streams
└── tests/
└── integration.rs # Full pipeline tests using Mock Targets
- Rust (latest stable)
- An OpenAI API Key (exported as
OPENAI_API_KEY)
# Clone the repository
git clone https://github.com/wkusnierczyk/redoxide.git
cd redoxide
# Build release binary
cargo build --releaseRun the tool using cargo run or the compiled binary.
The primary command is scan. By default, it runs a basic jailbreak test against gpt-3.5-turbo.
export OPENAI_API_KEY=<your-api-key>
# Run a basic scan
cargo run -- scan| Option | Short | Default | Description |
|---|---|---|---|
--model |
-m |
gpt-3.5-turbo |
The target model ID to attack. |
--file |
-f |
None |
Path to a file containing prompts (one per line). |
--strategy |
-s |
jailbreak |
The attack strategy (jailbreak, splitting, research). |
--use-judge |
false |
Use GPT-4 as a judge (more accurate, costs $). | |
--concurrency |
-c |
5 |
Number of parallel requests to run. |
--output |
-o |
report.json |
Filename for the JSON results. |
Examples:
# Attack using a file of prompts with the "Payload Splitting" strategy
cargo run -- scan --file attacks/simple.txt --strategy splitting
# Use GPT-4 as a judge for higher accuracy (slower/costlier)
cargo run -- scan --use-judge --model gpt-4- Jailbreak: Wraps prompts in templates like "DAN" (Do Anything Now) or fictional storytelling frames.
- Splitting: Obfuscates keywords (e.g., "B-O-M-B") to bypass simple blocklists.
- Research: Frames the malicious request as a theoretical or educational inquiry.
Note
You can add your strategy by implementing the Strategy trait in src/strategy.rs.
- Keyword (Default): Fast and free. Checks for refusal phrases like "I cannot", "As an AI".
- LLM Judge: Uses a separate LLM call to analyze the response contextually. Recommended for production use.
RedOxide includes a comprehensive suite of tests and benchmarks.
cargo test- Unit Tests: Verify logic in
strategy.rsandevaluator.rs. - Integration Tests: Run the full pipeline against a Mock Target to verify the
Runnerwithout network costs.
We use Criterion to measure the overhead of the async runner.
cargo bench- Fork the repository.
- Create a feature branch (
git checkout -b feature/amazing-attack). - Commit your changes.
- Push to the branch.
- Open a Pull Request.
Please ensure cargo test and cargo clippy pass before submitting.
