A high-performance, Python-based offscreen 3D camera simulation engine. It renders a virtual scene with a rotating camera using OpenGL and publishes the result as a real-time MJPEG IP camera stream, designed specifically for imaging system simulation experiments and computer vision testing.
The Multi-Camera Simulation Engine provides a lightweight virtual environment for researchers and developers to simulate complex imaging systems. By leveraging thread-safe offscreen rendering and MJPEG streaming, it provides a "virtual" IP camera feed that can be consumed by standard video clients or computer vision pipelines without requiring a physical camera or a visible UI.
This project was conceived to demonstrate the feasibility of high-fidelity imaging system simulation tools, particularly for consumer hardware products. It aligns with advanced industry workflows involving:
- System Design & Validation: Defining simulation capabilities and performance goals to validate against real hardware.
- Multi-Camera R&D: Investigating imaging performance goals and spatial configurations for multi-camera systems (including XR applications).
- Cross-Functional Collaboration: Bridging the gap between Product Design (PD), Electrical Engineering (EE), Camera Hardware, and Software teams through early-stage virtual prototyping.
This repository represents the beginning of a medium-to-large scale open-source project aimed at providing accessible, professional-grade simulation tools for the computer vision and hardware engineering communities.
- Computer Vision Development: Validate tracking and detection algorithms against ground-truth controlled virtual environments.
- Imaging System R&D: Simulate varying camera parameters (FOV, resolution, placement) before physical deployment.
- Robotics & XR Simulation: Provide low-latency visual feedback for autonomous agent training and augmented reality testing.
- Multi-Camera Support (v1.3.1): Simultaneously render and stream from multiple virtual cameras, each with independent configurations and unique MJPEG stream ports.
- Real-time 3D Rendering: High-performance rendering via OpenGL 3.3+ with Phong shading (ambient, diffuse, specular) and directional lighting.
- Thread-Safe Offscreen Pipeline: Uses hidden GLFW windows and custom Framebuffer Objects (FBOs) for background rendering, optimized for multi-threaded Flask environments.
- MJPEG IP Camera Stream: Efficiently broadcasts rendered frames over HTTP, mimicking a real network-attached camera.
- Dynamic Orbit Camera: Configurable virtual camera with adjustable FOV, near/far planes, and procedural orbit logic.
- Procedural Scene Elements: Includes a built-in ground plane with checkerboard textures for spatial reference and a central cube.
- Modular Architecture: Clean separation between core logic, rendering, streaming, and web interfaces for easy extensibility.
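The MJPEG stream mentioned above is simply an HTTP `multipart/x-mixed-replace` response in which every part is one JPEG image. The framing can be sketched as below; function names are illustrative, not the engine's actual API, and `frames` stands for already-encoded JPEG byte strings (the engine itself encodes via OpenCV):

```python
def mjpeg_part(jpeg_bytes: bytes, boundary: bytes = b"frame") -> bytes:
    """Wrap one JPEG image as a single part of a multipart/x-mixed-replace body."""
    return (
        b"--" + boundary + b"\r\n"
        b"Content-Type: image/jpeg\r\n"
        b"Content-Length: " + str(len(jpeg_bytes)).encode() + b"\r\n\r\n"
        + jpeg_bytes + b"\r\n"
    )

def mjpeg_generator(frames):
    """Yield multipart chunks for a sequence of already-encoded JPEG frames."""
    for jpeg in frames:
        yield mjpeg_part(jpeg)
```

In a Flask app, such a generator would be wrapped in a `Response` with mimetype `multipart/x-mixed-replace; boundary=frame`, which is what lets standard video clients treat the endpoint as an IP camera.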
```
Multi-Camera-Simulation-Engine/
├── config/
│   └── settings.json
├── core/
│   ├── app.py
│   └── state.py
├── doc/
│   ├── image003_1.png
│   ├── image003_2.png
│   └── video003_01.mp4
├── doc.me/
├── effects/
│   └── image_effects.py
├── render/
│   ├── camera.py
│   └── renderer.py
├── stream/
│   └── mjpeg_stream.py
├── utils/
│   └── helpers.py
├── web/
│   ├── routes.py
│   ├── static/
│   └── templates/
│       └── index.html
├── LICENSE
├── main.py
├── README.md
├── repo_info.txt
└── requirements.txt
```
- Python 3.8+
- OpenGL 3.3+ compatible graphics hardware
- GLFW library (usually installed via the `glfw` Python package, but system-level OpenGL drivers are still required)
```bash
# Clone the repository
git clone https://github.com/your-username/Multi-Camera-Simulation-Engine.git
cd Multi-Camera-Simulation-Engine

# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate   # Linux/macOS
# or
venv\Scripts\activate      # Windows

# Install dependencies
pip install -r requirements.txt
```

Launch the engine with a single command:

```bash
python main.py
```

Once running, the engine starts a local Flask server:
- Web Dashboard: http://localhost:5000
- Camera 0 Stream: http://localhost:5001/cam
- Camera 1 Stream: http://localhost:5002/cam
(Note: Each additional camera is assigned a unique port starting from 5001, while the main dashboard remains on 5000.)
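Under this scheme, camera N streams on port 5001 + N. A client can derive stream URLs with a trivial helper (the name and defaults here are illustrative):

```python
def stream_url(camera_index: int, host: str = "localhost",
               base_port: int = 5001) -> str:
    """Return the MJPEG stream URL for a camera; one port per camera from base_port up."""
    return f"http://{host}:{base_port + camera_index}/cam"
```

For example, `stream_url(0)` yields `http://localhost:5001/cam` and `stream_url(1)` yields `http://localhost:5002/cam`, matching the endpoints listed above.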
When you access the direct stream or the web dashboard, you will see the real-time offscreen rendered view. Version 1.3.1 introduces multi-camera support, allowing for independent streams from multiple virtual cameras:
(Note: Example of multiple independent camera feeds served via MJPEG over HTTP)
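The rotating view in these feeds comes from the orbit camera, whose procedural orbit logic boils down to parametric circular motion around the scene origin. A minimal sketch, with illustrative names (the engine's actual implementation lives in `render/camera.py`):

```python
import math

def orbit_position(t: float, radius: float = 5.0, height: float = 2.0,
                   speed: float = 0.5) -> tuple:
    """Camera position at time t, orbiting the origin on a circle at fixed height.

    `speed` is in radians per second; the camera is assumed to look at the origin.
    """
    angle = speed * t
    return (radius * math.cos(angle), height, radius * math.sin(angle))
```

Each frame, the renderer would evaluate the position at the current time and rebuild the view matrix toward the scene center.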
The Multi-Camera Simulation Engine is built around a modular architecture to ensure scalability:
- `MultiCamSimApp` (`core/app.py`): The central orchestrator. It initializes the `AppState`, `Renderer`, and `MjpegStreamer` and registers the web routes.
- `AppState` (`core/state.py`): Manages global application state and loads configuration from `settings.json`.
- `Renderer` (`render/renderer.py`): The heart of the 3D visualization. It handles:
  - GLFW Context Management: Initializes a hidden GLFW window and manages the OpenGL context, which is crucial for thread-safe rendering in a multi-threaded Flask environment.
  - Shader Program: Compiles GLSL vertex and fragment shaders for Phong shading, enabling realistic lighting (ambient, diffuse, specular) and object coloring (including a checkerboard ground plane).
  - Geometry Buffers: Sets up Vertex Array Objects (VAOs), Vertex Buffer Objects (VBOs), and Element Buffer Objects (EBOs) for rendering 3D objects.
  - Framebuffer Object (FBO): Renders directly to an off-screen framebuffer, allowing the rendered image to be read back into a NumPy array without displaying a window.
- `MjpegStreamer` (`stream/mjpeg_stream.py`): Responsible for encoding rendered frames into MJPEG format and serving them over HTTP using OpenCV and a Flask `Response`.
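One detail worth noting about the FBO readback step: OpenGL's image origin is the bottom-left corner, so the raw buffer returned by `glReadPixels` must be reshaped and flipped vertically before it looks right to OpenCV or a browser. A sketch of that post-readback conversion with NumPy (the GL call itself is omitted; `raw` stands for the bytes returned by `glReadPixels`, and the function name is illustrative):

```python
import numpy as np

def fbo_bytes_to_frame(raw: bytes, width: int, height: int) -> np.ndarray:
    """Convert raw RGB bytes from an OpenGL framebuffer into a top-down image array."""
    frame = np.frombuffer(raw, dtype=np.uint8).reshape(height, width, 3)
    return frame[::-1]  # reverse row order: bottom-up GL rows -> top-down image
```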
- Basic Scene: The 3D scene is currently limited to a cube and a ground plane.
- Passive Interface: The web dashboard is read-only; no interactive controls for the camera are available yet.
- Model Loading: Support for loading complex 3D models (OBJ, GLTF).
- Interactive UI: Real-time controls for camera movement, FOV, and lighting parameters via the web dashboard.
- Post-Processing: Integration of image effects (noise, lens distortion, color correction).
- Plugin System: Formal plugin system for adding new rendering effects or camera types.
- REST API: Programmatic control of simulation parameters via a dedicated API.
Contributions are welcome, whether that means extending multi-camera support, improving the UI, or implementing new post-processing effects.
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Distributed under the MIT License. See LICENSE for more information.
Sayed Ahmadreza Razian, PhD
- LinkedIn: https://www.linkedin.com/in/ahmadrezarazian/
- Google Scholar: https://scholar.google.com/citations?user=Dh9Iy2YAAAAJ
- Email: AhmadrezaRazian@gmail.com
Feel free to contact me for collaboration or questions.
Developed for advanced imaging system simulation and computer vision research.
