The product is openly accessible at http://137.229.25.190:5000/; the entry point is the "Wildfire Management System" button.
This project is a Flask + Socket.IO web product for frame-by-frame analysis and visualization of wildfire videos (fire and smoke). It supports two processing pipelines:
- Stable Camera Analysis: fuses classical pixel segmentation with YOLO-OBB detection to estimate fire area, track fire spread direction, and infer smoke/wind direction via optical flow.
- YOLO Detection View: a lightweight pipeline that runs YOLO-OBB inference and overlays oriented bounding boxes (OBB) for visualization.
On the /upload page, users upload a video and configure key parameters (e.g., resize factor, sliding-window sizes, whether to enable stable-camera analysis, whether to enable YOLO detection, etc.).
To convert pixel area into real-world area (m²), the system reads drone/camera metadata (e.g., sensor width and focal length) or uses user-provided camera parameters, and combines them with objectDistance (distance to target) to estimate ground sampling distance (GSD).
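As a minimal sketch of the pixel-to-m² conversion described above (function names here are illustrative, not the project's actual API), GSD can be estimated from the pinhole-camera model, and each pixel then covers a GSD × GSD patch of ground:

```python
def estimate_gsd(sensor_width_mm, focal_length_mm, object_distance_m, image_width_px):
    """Ground sampling distance in metres per pixel (pinhole-camera model)."""
    return (sensor_width_mm * object_distance_m) / (focal_length_mm * image_width_px)

def pixel_area_to_m2(pixel_area_px, gsd_m_per_px):
    """Each pixel covers a gsd x gsd square on the ground."""
    return pixel_area_px * gsd_m_per_px ** 2
```

For example, a 13.2 mm sensor with an 8.8 mm lens at 100 m altitude and 4000 px frame width gives a GSD of 0.0375 m/px, so a 1000-pixel fire region corresponds to roughly 1.4 m².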
This pipeline processes the video frame-by-frame:
- Read frames and resize them by size_factor to reduce compute and stabilize throughput.
The fire mask is built by combining two sources:
- Classical fire pixel segmentation (fire_pixel_segmentation()): applies a set of heuristic rules in YCrCb and RGB spaces (channel thresholds and relationships) to extract candidate fire pixels, followed by morphological cleanup.
- YOLO-OBB fire mask (run_yolo_fire_mask()): runs OBB detection and fills each detected OBB polygon to create a binary fire mask.
- Fusion strategy: the two masks are blended using a weight alpha, then thresholded into a final, more robust binary fire mask. This typically improves robustness by combining deep-model localization with traditional segmentation detail.
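The fusion step can be sketched as follows, using NumPy in place of the project's OpenCV calls (the function name and default threshold are assumptions, not the actual implementation):

```python
import numpy as np

def fuse_fire_masks(seg_mask, yolo_mask, alpha=0.5, threshold=127):
    """Blend two uint8 binary masks (0/255) with weight alpha, then re-threshold.

    seg_mask:  mask from classical pixel segmentation
    yolo_mask: mask filled from detected OBB polygons
    """
    blended = alpha * seg_mask.astype(np.float32) + (1.0 - alpha) * yolo_mask.astype(np.float32)
    return np.where(blended > threshold, 255, 0).astype(np.uint8)
```

With alpha=0.5 and threshold=127, a pixel marked fire by either source survives the fusion, while both sources agreeing produces the strongest response before thresholding.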
- Extract contours from the fire mask and compute the fire pixel area per frame.
- Convert pixel area to real-world area (m²) using GSD (computed from sensor width, focal length, object distance, and image width).
- Track centroid motion of the fire region over time; apply a sliding window (mFrames) to smooth the direction estimate and draw a spread-direction arrow. The pipeline outputs an estimated spread-direction angle.
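A minimal sketch of this windowed direction estimate (the class name is hypothetical; the actual project logic lives in fire_flow.py): keep the last mFrames centroids in a deque and take the angle from the oldest to the newest, which naturally smooths per-frame jitter.

```python
import math
from collections import deque

class SpreadDirectionTracker:
    """Smooth the fire spread direction over a sliding window of centroids."""

    def __init__(self, m_frames=10):
        self.centroids = deque(maxlen=m_frames)  # sliding window (mFrames)

    def update(self, cx, cy):
        """Add the latest fire-region centroid; return angle in degrees, or None."""
        self.centroids.append((cx, cy))
        if len(self.centroids) < 2:
            return None
        (x0, y0), (x1, y1) = self.centroids[0], self.centroids[-1]
        # Image coordinates grow downward in y, so negate dy for a standard angle.
        return math.degrees(math.atan2(-(y1 - y0), x1 - x0)) % 360.0
```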
- Generate a smoke mask with smoke_pixel_segmentation() using HSV-based thresholding (plus inversion and morphology to isolate smoke candidates).
- Estimate motion with Farneback optical flow (cv2.calcOpticalFlowFarneback) in the smoke region, and draw local flow arrows.
- Aggregate flow directions into a per-frame "smoke/wind direction angle," then smooth with a sliding window (nFrames) for a stable directional estimate.
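The aggregation step above can be sketched as a circular mean over the flow vectors inside the smoke mask (the function name is an assumption; the flow field itself would come from cv2.calcOpticalFlowFarneback, which returns an H×W×2 array of displacements):

```python
import numpy as np

def aggregate_flow_direction(flow, smoke_mask):
    """Circular-mean direction (degrees) of flow vectors inside the smoke mask.

    flow: HxWx2 array of (dx, dy) displacements per pixel
    smoke_mask: HxW mask; nonzero pixels are treated as smoke
    """
    dx = flow[..., 0][smoke_mask > 0]
    dy = flow[..., 1][smoke_mask > 0]
    if dx.size == 0:
        return None  # no smoke pixels this frame
    angles = np.arctan2(-dy, dx)  # negate dy: image y-axis points downward
    # Circular mean avoids the 359-vs-1 degree wraparound problem.
    mean = np.arctan2(np.sin(angles).mean(), np.cos(angles).mean())
    return float(np.degrees(mean) % 360.0)
```

Per-frame angles from this function would then be fed into the nFrames sliding window for the stable estimate.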
Each frame is assembled into a 2×2 panel (e.g., original frame / fire mask / smoke mask / overlay visualization), JPEG-encoded and Base64-packed, then pushed to the frontend via the stable_update Socket.IO event.
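The panel assembly and Base64 packing might look like the following sketch (NumPy tiling plus stdlib base64; the JPEG bytes themselves would come from cv2.imencode(".jpg", panel), and the helper names here are illustrative):

```python
import base64
import numpy as np

def make_panel(original, fire_mask, smoke_mask, overlay):
    """Tile four same-sized HxWx3 frames into one 2x2 panel."""
    top = np.hstack([original, fire_mask])
    bottom = np.hstack([smoke_mask, overlay])
    return np.vstack([top, bottom])

def encode_frame(jpeg_bytes):
    """Base64-pack already-JPEG-encoded bytes for a Socket.IO payload."""
    return base64.b64encode(jpeg_bytes).decode("ascii")
```

The encoded string would then be emitted with something like socketio.emit("stable_update", {"image": encode_frame(buf.tobytes())}).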
After processing finishes, the system aggregates and produces summary figures:
- Fire area (m²) over time
- Fire area growth rate over time
- Polar distribution of smoke/wind directions
- Fire spread path (trajectory of centroid points)
These plots are produced by analysis.py::graph(), encoded as Base64 PNG, and sent to the frontend via the analysis event.
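One of these plot-to-Base64 round trips might be sketched as follows (a hypothetical stand-in for analysis.py::graph(), assuming Matplotlib with the headless Agg backend):

```python
import base64
import io

import matplotlib
matplotlib.use("Agg")  # headless backend for server-side rendering
import matplotlib.pyplot as plt

def plot_fire_area_b64(times, areas_m2):
    """Render a fire-area-over-time figure and return it as a Base64 PNG string."""
    fig, ax = plt.subplots(figsize=(5, 3))
    ax.plot(times, areas_m2, marker="o")
    ax.set_xlabel("Frame")
    ax.set_ylabel("Fire area (m²)")
    buf = io.BytesIO()
    fig.savefig(buf, format="png", bbox_inches="tight")
    plt.close(fig)  # free the figure; long videos would otherwise leak memory
    return base64.b64encode(buf.getvalue()).decode("ascii")
```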
This pipeline is optimized for “detection-only” visualization:
- Run YOLO-OBB on each frame and read results[0].obb.xywhr (center x/y, width, height, rotation),
- Convert each OBB to a polygon and draw it on the frame,
- Overlay the number of detections.
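The OBB-to-polygon step can be sketched with basic rotation math (the function name is illustrative; in the project the resulting points would be rounded to int and drawn with cv2.polylines):

```python
import math

def obb_to_polygon(cx, cy, w, h, angle_rad):
    """Four corner points of an oriented box given (center, size, rotation)."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    half = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    # Rotate each half-extent corner about the origin, then translate to the center.
    return [(cx + x * c - y * s, cy + x * s + y * c) for x, y in half]
```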
Processed frames are JPEG-encoded + Base64 and emitted to the frontend via the yolo_update Socket.IO event.
The website code files are under srcVikas/code_web:
- app.py: web routes, video upload, thread orchestration, Socket.IO streaming (main entrypoint)
- yolo_detection.py: YOLO-OBB model loading and inference; supports OBB overlay and OBB-derived binary masks
- fire_flow.py: fire pixel segmentation, real-world area estimation, and fire spread tracking
- smoke_flow.py: smoke segmentation and optical-flow-based direction estimation
- analysis.py: aggregated analytics plots (area trends, growth rate, direction distribution, spread path)
- templates/index.html etc.: frontend pages (upload, model, team pages, and visualization UI)
- Real-time visualizations: original frames, fire/smoke masks, overlay views, YOLO OBB detections
- Quantitative analysis:
- Fire area (m²) over time
- Fire area growth rate
- Smoke/wind direction distribution
- Fire spread path and direction angle estimates
- Run app.py to start the website, then click the "Wildfire Management System" button to navigate to /upload and submit a video for analysis.
- Socket.IO streams frames and analytics plots to the frontend in real time.
- The stable-camera pipeline assumes limited camera motion. If the camera shakes strongly, consider video stabilization or camera-motion compensation; otherwise, optical flow and trajectory estimates can be degraded.
- If stable analysis and YOLO detection run concurrently, ensure YOLO inference is thread-safe (e.g., use an inference lock or one model instance per thread) to avoid concurrency issues during initialization/fusion.
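The inference-lock option mentioned above could be as simple as this sketch (the lock and wrapper are assumptions about how one might serialize access, not the project's actual code):

```python
import threading

_yolo_lock = threading.Lock()  # shared across all worker threads

def safe_predict(model, frame):
    """Serialize YOLO inference so concurrent pipelines don't race on one model."""
    with _yolo_lock:
        return model(frame)
```

The alternative, one model instance per thread, trades memory for throughput, since each thread can then run inference without waiting on the lock.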