Security Research Toolkit — Video and image analysis tool for neural inpainting and AI-generated content detection with SORA signature extraction, temporal consistency analysis, CNN artifact detection, CPU/CUDA device selection, multi-format support, and colorama-styled terminal interface
```
__ _______ _ _ _____ _____ _ _
 \ \ / / ____| | | |/ ____| / ____| (_) |
 \ \ /\ / / (___ | | | | (___ | (___ ___ ___ _ _ _ __ _| |_ _ _
 \ \/ \/ / \___ \| | | |\___ \ \___ \ / _ \/ __| | | | '__| | __| | | |
 \ /\ / ____) | |__| |____) | ____) | __/ (__| |_| | | | | |_| |_| |
 \/ \/ |_____/ \____/|_____/ |_____/ \___|\___|\__,_|_| |_|\__|\__, |
                                                                __/ |
                                                               |___/
 _____ _ _______ _ _ _
| __ \ | | |__ __| | | | (_) |
| |__) |___ ___ ___ __ _ _ __ ___| |__ | | ___ ___ | | | ___| |_
| _ // _ \/ __|/ _ \/ _` | '__/ __| '_ \ | |/ _ \ / _ \| | |/ / | __|
| | \ \ __/\__ \ __/ (_| | | | (__| | | | | | (_) | (_) | | <| | |_
|_| \_\___||___/\___|\__,_|_| \___|_| |_| |_|\___/ \___/|_|_|\_\_|\__|
```
Video & image analysis for neural inpainting and AI-generated content detection
Features • Getting Started • Configuration • Usage • Project Structure • FAQ
| Resource | URL |
|---|---|
| Repository | https://github.com/timanmoh/Security-Research-Toolkit |
| Issues | https://github.com/timanmoh/Security-Research-Toolkit/issues |
| OpenAI Sora | https://openai.com/index/sora |
| Reality Defender (Sora detection) | https://www.realitydefender.com/insights/detecting-sora-videos |
## Features

- **Analysis:** SORA signature extraction and temporal consistency analysis
- **Detection:** CNN-based artifact detection and AI-generated content identification
- **Processing:** video/image pipeline with neural inpainting, CPU/CUDA device selection, and multi-format support
- **Interface:** colorama-styled terminal menu
## Getting Started

Prerequisites:

- Python 3.4 or higher
- Optional: NVIDIA GPU with CUDA for accelerated processing
- Optional: PyTorch with CUDA support (auto-detected)
```bash
git clone https://github.com/timanmoh/Security-Research-Toolkit.git
cd Security-Research-Toolkit
pip install -r requirements.txt
python main.py
```

| Package | Version | Purpose |
|---|---|---|
| colorama | ≥0.4.6 | Cross-platform colored console output |
Note: Neural inpainting and detection modules use default implementations. Advanced models (AOT-GAN, LaMa, ProPainter) can be integrated for higher accuracy.
## Configuration

Settings are stored in `settings.json` in the project root. Edit them via the Settings menu entry or the `settings` command.

Example `settings.json`:
```json
{
  "device": "auto",
  "output_dir": "./output",
  "log_level": "info",
  "language": "en"
}
```

| Parameter | Values | Description |
|---|---|---|
| `device` | `auto`, `cpu`, `cuda` | Compute device; `auto` selects the GPU if available |
| `output_dir` | path | Output directory (relative or absolute); created on first save |
| `log_level` | `debug`, `info`, `warning`, `error` | Console verbosity |
| `language` | `en`, `ru` | Interface language |
Tip: Use `device=cpu` for headless servers without a GPU. Use `log_level=debug` when troubleshooting.
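For illustration, here is a minimal sketch of how `utils/settings.py` might merge `settings.json` over the documented defaults. The function name `load_settings` and the drop-invalid-values strategy are assumptions, not the module's actual API:

```python
import json
from pathlib import Path

# Defaults mirroring the documented settings.json keys.
DEFAULTS = {"device": "auto", "output_dir": "./output",
            "log_level": "info", "language": "en"}
ALLOWED = {
    "device": {"auto", "cpu", "cuda"},
    "log_level": {"debug", "info", "warning", "error"},
    "language": {"en", "ru"},
}

def load_settings(path="settings.json"):
    """Merge settings.json over defaults, dropping unknown keys and invalid values."""
    merged = dict(DEFAULTS)
    p = Path(path)
    if p.exists():
        for key, value in json.loads(p.read_text(encoding="utf-8")).items():
            if key not in DEFAULTS:
                continue  # ignore unknown keys
            if key in ALLOWED and value not in ALLOWED[key]:
                continue  # keep the default when the value is invalid
            merged[key] = value
    return merged
```

Because invalid entries fall back to defaults rather than raising, a hand-edited file with a typo still yields a usable configuration.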
## Usage

```bash
python main.py
# or
python main.py --interactive
```

Main menu:
```
┌──────────────────────────────┐
│ [1] Install dependencies     │
│ [2] Start                    │
│ [3] About                    │
│ [4] Settings                 │
└──────────────────────────────┘
Choose [1]-[4]: 2
```
Command menu:

```
> run C:\videos\sample.mp4
> validate ./images/frame_001.png
> gpu
> settings
> help
> quit
```
```bash
python main.py --run <video_or_image_path>
python main.py --validate <path>
python main.py --gpu
```

| Command | Description |
|---|---|
| `run <path>` | Process a video or image (detection + inpainting pipeline) |
| `validate <path>` | Validate an input file and report its type (video/image) |
| `gpu` | Show NVIDIA GPU status (`nvidia-smi`) |
| `install` / `1` | Install dependencies from `requirements.txt` |
| `about` | Show project info from the README |
| `settings` | View/edit settings (`device`, `output_dir`, `log_level`, `language`) |
| `help` | Show the command menu |
| `quit` / `exit` | Exit the application |
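The documented flags map directly onto a small `argparse` setup. This is a sketch of what `main.py` might declare; the flag names match the table above, but the real parser may differ:

```python
import argparse

def build_parser():
    """Declare the documented CLI flags (illustrative, not main.py's actual code)."""
    p = argparse.ArgumentParser(prog="main.py",
                                description="Security Research Toolkit")
    p.add_argument("--run", metavar="PATH", help="process a video or image")
    p.add_argument("--validate", metavar="PATH", help="validate an input file")
    p.add_argument("--gpu", action="store_true", help="show NVIDIA GPU status")
    p.add_argument("--interactive", action="store_true",
                   help="start the interactive menu")
    return p
```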
## Project Structure

```
Security-Research-Toolkit/
├── main.py               # Entry point (CLI, interactive menu)
├── settings.json         # User settings (created on first save)
├── requirements.txt      # Python dependencies
├── README.md
│
├── gui/
│   ├── __init__.py
│   └── main_window.py    # Terminal interface (banner, menu, sections)
│
├── core/
│   ├── __init__.py
│   ├── processor.py      # Video/image processing pipeline
│   ├── inpainting.py     # Neural inpainting (region filling)
│   └── validator.py      # Input path validation
│
├── detection/
│   ├── __init__.py
│   ├── detector.py       # CNN region/artifact detection
│   ├── signature.py      # SORA signature analysis
│   └── temporal.py       # Temporal consistency, interpolation
│
└── utils/
    ├── __init__.py
    ├── settings.py       # Load/save settings.json
    ├── file_handler.py   # File operations, path sanitization
    ├── gpu_manager.py    # GPU info, device suggestion
    └── logger.py         # Logging utilities
```
## FAQ

### What video formats are supported?

MP4, AVI, MKV, MOV, WebM, WMV, and FLV. Image formats: PNG, JPG, JPEG, BMP, WebP, TIFF, TIF.
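A minimal sketch of extension-based validation along the lines of `core/validator.py`; the function name `classify_input` is illustrative:

```python
from pathlib import Path

# Extension sets taken from the supported-format lists above.
VIDEO_EXTS = {".mp4", ".avi", ".mkv", ".mov", ".webm", ".wmv", ".flv"}
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".bmp", ".webp", ".tiff", ".tif"}

def classify_input(path):
    """Return 'video', 'image', or None for an unsupported extension."""
    ext = Path(path).suffix.lower()
    if ext in VIDEO_EXTS:
        return "video"
    if ext in IMAGE_EXTS:
        return "image"
    return None
```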
### Does it work without a GPU?

Yes. The toolkit runs on the CPU by default. Set `device` to `cpu` in settings, or use `auto` to let the app detect CUDA availability. GPU acceleration is optional.
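The device-resolution logic described above can be sketched as follows, assuming a helper like this lives in `utils/gpu_manager.py` (the name `resolve_device` is an assumption):

```python
def resolve_device(setting="auto"):
    """Map the 'device' setting to a concrete compute device.

    'auto' picks CUDA only when PyTorch is installed with CUDA support;
    otherwise the toolkit falls back to CPU, as documented.
    """
    if setting == "cpu":
        return "cpu"
    try:
        import torch  # optional dependency, checked at runtime
        cuda_ok = torch.cuda.is_available()
    except ImportError:
        cuda_ok = False
    if setting == "cuda" and not cuda_ok:
        return "cpu"  # CUDA requested but unavailable: fall back
    return "cuda" if cuda_ok else "cpu"
```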
### What is SORA signature detection?

SORA is OpenAI's text-to-video model. The detection module analyzes temporal consistency, generation artifacts, and feature signatures to identify AI-generated content. The current implementation provides extensible interfaces for integrating custom classifiers.
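To make the temporal-consistency idea concrete, here is a toy statistic (mean absolute inter-frame difference), not the actual implementation in `detection/temporal.py`. AI-generated video often shows unusually uniform inter-frame differences, so a statistic like this is a plausible feature for a downstream classifier:

```python
import numpy as np

def temporal_consistency(frames):
    """Mean absolute difference between consecutive frames (lower = smoother).

    A toy metric for illustration only; a real detector would feed features
    like this into a trained classifier rather than thresholding directly.
    """
    if len(frames) < 2:
        return 0.0
    diffs = [np.abs(a.astype(np.float32) - b.astype(np.float32)).mean()
             for a, b in zip(frames, frames[1:])]
    return float(np.mean(diffs))
```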
### How do I add real neural inpainting?

Extend the base implementation in `core/inpainting.py` with a model such as AOT-GAN, LaMa, or ProPainter. The `inpaint_frame(frame_region, mask_region, device)` function expects NumPy/tensor input and returns the inpainted region.
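A naive placeholder matching the documented signature might look like this; a real integration would forward both arrays through an AOT-GAN / LaMa / ProPainter model on `device` instead of mean-filling:

```python
import numpy as np

def inpaint_frame(frame_region, mask_region, device="cpu"):
    """Fill masked pixels (mask > 0) with the mean of the unmasked pixels.

    Placeholder only: it matches the documented signature so a learned model
    can be dropped in later, but performs no actual neural inpainting.
    """
    frame = frame_region.astype(np.float32).copy()
    mask = mask_region > 0
    if mask.any() and (~mask).any():
        frame[mask] = frame[~mask].mean()
    return frame
```

Keeping the signature stable means `core/processor.py` (or any caller) does not change when the placeholder is swapped for a real model.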
### Settings are not saving. What should I check?

Ensure the project directory is writable; `settings.json` is created in the project root. On Windows, run as administrator if the folder has restricted permissions. Check that only valid keys (`device`, `output_dir`, `log_level`, `language`) and values are used.
### `nvidia-smi` not found / GPU not detected

Install the NVIDIA drivers and ensure `nvidia-smi` is on your PATH (on Linux it is typically in `/usr/bin`). The toolkit falls back to CPU when no GPU is detected. PyTorch CUDA support is optional and checked at runtime.
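The fallback behavior described above can be sketched like this, assuming a helper such as `gpu_status` in `utils/gpu_manager.py` (name and return shape are illustrative):

```python
import shutil
import subprocess

def gpu_status():
    """Return nvidia-smi's name/memory report, or None if no GPU toolchain is found.

    Mirrors the documented fallback: when nvidia-smi is missing from PATH or
    fails, the caller simply continues on CPU.
    """
    exe = shutil.which("nvidia-smi")
    if exe is None:
        return None
    try:
        out = subprocess.run(
            [exe, "--query-gpu=name,memory.total", "--format=csv,noheader"],
            capture_output=True, text=True, timeout=10, check=True,
        )
        return out.stdout.strip()
    except (subprocess.SubprocessError, OSError):
        return None
```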
### Can I use this for production?

This is a research/educational toolkit. Detection and inpainting modules are extensible base implementations. For production use, integrate validated models, add error handling, and perform security audits. See the Disclaimer below.
## Disclaimer

This project is intended exclusively for educational and security research purposes. Use it only on content you own or have explicit permission to analyze. Do not use it to create, distribute, or analyze deepfakes or manipulated media for deceptive purposes. The authors are not responsible for misuse. AI-generated content detection is an evolving field; results may be inaccurate. Always comply with local laws and platform terms of service.
If this project helped your research, consider giving it a star.
ETH: 0x6f1A3c5E9B2a4D6e8C0b3F5a7D9c1E3b5A7f28e1