---
title: 3D guardrails
short_description: 3D content you can trust
emoji: 🛡️
colorFrom: blue
colorTo: indigo
sdk: docker
app_port: 7860
sdk_version: 6.1.0
pinned: true
license: apache-2.0
app_file: demo.py
tags:
---
Run the FastAPI server to scan 3D assets for trust-and-safety risks.
The most expensive part is not the 3D rendering but the LLM analysis; the project benchmarks the 3D rendering part running on a CPU.

Prerequisites:
- Python 3.11
- For OpenAI: `OPENAI_API_KEY` exported in your environment
- For Gemini: `GEMINI_API_KEY` exported in your environment
- For Ollama: a local Ollama server running (default: http://localhost:11434)
Install dependencies and start the development server:

```bash
uv sync
uv run fastapi dev -e dddguardrails.api:app
```

Launch an interactive web interface for testing the 3D guardrails:

```bash
uv run python -m dddguardrails.demo
```

The demo will be available at http://localhost:7860 and provides:
- File upload interface for 3D models
- LLM provider and model selection
- Real-time safety analysis results
- Clear display of risk findings and severity levels
Run the application using Docker for easy deployment with off-screen rendering support.
```bash
docker build -t dddguardrails .
```

```bash
# Using docker run
docker run -p 7860:7860 -e APP_MODE=gradio dddguardrails

# Using docker-compose (recommended)
docker-compose -p local up -d --no-deps --build app
```

Access the Gradio interface at http://localhost:7860
```bash
# Using docker run
docker run -p 8000:8000 -e APP_MODE=fastapi dddguardrails

# Using docker-compose (recommended)
docker-compose up fastapi
```

Access the FastAPI docs at http://localhost:8000/v1/guardrails/docs
Environment variables:

- `APP_MODE`: Set to `gradio` (default) or `fastapi`
- `OPENAI_API_KEY`: Your OpenAI API key (optional)
- `GEMINI_API_KEY`: Your Google Gemini API key (optional)
- `OLLAMA_BASE_URL`: Ollama server URL (default: http://localhost:11434)
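As a minimal sketch of how the mode switch could be handled (the actual entrypoint logic in this repository may differ; `select_app_mode` is a hypothetical helper):

```python
import os

# Hypothetical sketch: pick the app to launch from APP_MODE.
# The real dddguardrails entrypoint may implement this differently.
def select_app_mode(env: dict) -> str:
    mode = env.get("APP_MODE", "gradio").lower()  # gradio is the documented default
    if mode not in {"gradio", "fastapi"}:
        raise ValueError(f"Unsupported APP_MODE: {mode!r}")
    return mode

print(select_app_mode({}))                       # → gradio (default)
print(select_app_mode({"APP_MODE": "fastapi"}))  # → fastapi
```

In a container this would typically be read from `os.environ` at startup, so `docker run -e APP_MODE=fastapi` switches modes without rebuilding the image.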
Avoid Change-Detector Tests: This codebase follows best practices for unit testing. Mock tests that verify implementation details rather than behavior are harmful because they break during refactoring and don't provide real confidence in functionality.
For guidance on writing effective tests, see: Testing on the Toilet: Change-Detector Tests Considered Harmful
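To make the distinction concrete, here is a small illustrative example (the function and names are hypothetical, not taken from this codebase): a behavior test asserts on the observable contract, so it keeps passing when internals are refactored, whereas a change-detector test pins implementation details and breaks on harmless changes.

```python
# Hypothetical example, not from this codebase.
def classify_severity(score: float) -> str:
    """Map a risk score in [0, 1] to a severity label."""
    if score >= 0.8:
        return "high"
    if score >= 0.4:
        return "medium"
    return "low"

# Behavior test: asserts only on inputs and outputs.
# Rewriting classify_severity's internals leaves this test green.
def test_classify_severity_behavior():
    assert classify_severity(0.9) == "high"
    assert classify_severity(0.5) == "medium"
    assert classify_severity(0.1) == "low"

test_classify_severity_behavior()
```

A change-detector version would instead mock and assert on which comparisons or helpers were called internally, which is exactly the pattern the guidance above warns against.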
`POST /v1/guardrails/scan`

- Request format: `multipart/form-data`
- Form fields:
  - `file`: The 3D asset file. Accepts `.glb`, `.gltf`, `.fbx`, `.obj`, `.stl`, `.ply`.
  - `llm_provider`: (optional, default: "ollama") LLM provider to use ("openai", "gemini", "ollama").
  - `model`: (optional) Specific model to use (e.g., "gpt-4o", "gemini-3-flash-preview", "qwen3-vl:235b-cloud").
  - `resolution_width`: (optional) Width of the screenshots used for rendering.
  - `resolution_height`: (optional) Height of the screenshots used for rendering.
  - `risk_categories`: (optional) JSON array of custom `RiskCategory` objects (name and description).
- Returns: `ScanResponse` containing detected categories with severity and rationale.
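For illustration, the form fields above could be assembled like this (a sketch in Python; the field names and endpoint come from this section, while the category content and file name are made-up examples):

```python
import json

# Sketch: assembling the multipart form fields for POST /v1/guardrails/scan.
# The custom category below is a hypothetical example.
risk_categories = [
    {"name": "gore", "description": "Graphic depictions of blood or injury."},
]

form_fields = {
    "llm_provider": "ollama",                       # default provider
    "model": "qwen3-vl:235b-cloud",                 # optional model override
    "resolution_width": "1024",
    "resolution_height": "1024",
    "risk_categories": json.dumps(risk_categories), # JSON array sent as a string field
}

# The file part is attached separately, e.g. with the requests library:
#   requests.post(url, data=form_fields, files={"file": open("asset.glb", "rb")})
print(form_fields["risk_categories"])
```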
- OpenAI support
- Gemini support
- Ollama support
- Groq support
- User specified provider and models support
- Early exit on the first violation (saves tokens)
- Tile rendering for faster violation/non-violation detection
- Docker/docker-compose support with headless rendering
- Improve rendering performance
- Benchmark
- Evals
- Gradio demo
- AWS Bedrock support
- External AI gateways
- Configurable Categories: Allow users to define custom risk categories in their scan requests instead of being limited to the hardcoded ones (weapons, nudity, self-harm, etc.)
- Add multi-modal (image, text, sound, video?)
- Batch API
- Async API
- Streaming API
- Reading content from URLs (presigned URLs, internet content, etc.)
- Backwards compatible API with OpenAI and AWS Bedrock Guardrails

