
Real Time Video Surveillance And Triggering System

There is a critical need for an automated surveillance solution that can continuously and accurately monitor environments for signs of emergencies and provide real-time alerts to enable rapid intervention. Traditional systems relying on human operators are often slow and error-prone, which can result in severe consequences, including loss of life and extensive property damage. Our project aims to develop a real-time emergency surveillance system that leverages computer vision to detect and respond to critical situations such as fires, violence, and medical emergencies.


Project Demonstration Link:

GDrive Link to Demo:

Dataset link

https://www.kaggle.com/datasets/mohamedmustafa/real-life-violence-situations-dataset

STEPS to run the project:

STEP 01:

Clone the repository

git clone https://github.com/Hirak010/Real-time-surveillance-detection.git

STEP 02:

Create an environment & activate

conda create -n env python=3.11 -y
conda activate env

STEP 03:

Install the requirements

pip install -r requirements.txt

STEP 04:

To run the webcam app

python alert.py

Technical Aspects

Human Fall Detection

Methodology


  • The system tracks two reference points: the body centre of gravity (body C.G.) and the foot centre of gravity (foot C.G.).
  • Only the X-axis distance between the two points is used as the basis of judgment.
  • If this distance exceeds 90 pixels (height × 0.75), the person is flagged as having fallen.
  • The system also keeps a count of how many times the person has fallen.
  • After a fall, the system indicates when the person gets back up (this is only displayed after a fall).
  • A per-area fall counter is incremented by 1 for each fall; the area is determined by where the person's feet are.
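The fall rule above can be sketched as a small function. This is an illustrative sketch, not the project's actual code: the coordinate names and the 120-pixel example height are assumptions, and the 0.75 factor and 90-pixel figure come from the description above.

```python
# Sketch of the fall-detection rule: a fall is flagged when the X-axis
# distance between the body C.G. and the foot C.G. exceeds 0.75 x the
# person's height (e.g. 90 px for a 120 px tall figure).

def is_fallen(body_cg_x, foot_cg_x, height_px):
    """Return True when the horizontal C.G. gap indicates a fall."""
    return abs(body_cg_x - foot_cg_x) > 0.75 * height_px
```

A person lying down has a large horizontal offset between body and foot centres of gravity, while a standing person keeps the two roughly aligned, which is why the X-axis difference alone is a workable signal.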

Violence Detection

Methodology


  • A dataset containing 1000 videos each for the violence and non-violence categories was chosen.
  • A model was trained on this dataset using MobileNetV2.
  • Real-time video footage is given as input, and the output is obtained as classified image frames.
  • The model uses the MobileNetV2 architecture.
  • MobileNetV2 is a convolutional neural network that is 53 layers deep.
  • It provides real-time classification capabilities under the computing constraints of devices like smartphones.
  • It utilizes an inverted residual structure where the input and output of the residual blocks are thin bottleneck layers.
  • It uses lightweight depthwise convolutions to filter features in the expansion layer.
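Since the trained model scores each frame of the incoming footage, the per-frame scores must be pooled into a clip-level verdict. A minimal sketch of one way to do this, assuming the MobileNetV2 head yields a violence probability per frame; the function name and mean-pooling strategy are illustrative assumptions, not the repository's actual code:

```python
# Pool per-frame violence probabilities (assumed to come from the
# MobileNetV2 classifier) into a single label for the clip.

def classify_clip(frame_probs, threshold=0.5):
    """Label the clip 'violence' if the mean per-frame probability
    exceeds the threshold, else 'non-violence'."""
    mean_prob = sum(frame_probs) / len(frame_probs)
    return "violence" if mean_prob > threshold else "non-violence"
```

Averaging over frames smooths out spurious single-frame detections before an alert is raised.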

Fire Detection

Methodology


  • Loading the Pre-Trained Model: The cv2.CascadeClassifier is used to load a pre-trained fire detection model from an XML file. The file contains data from a model trained on images with and without fire, allowing it to detect fire patterns in new images.
  • How Cascade Classifier Works: The model processes video frames by scaling the image and sliding a window across different regions. Features like edges and textures are extracted from each window and compared against the patterns in the pre-trained model. The classifier uses a cascading process, quickly eliminating areas without fire and focusing on regions that potentially contain fire.
  • Real-Time Detection: The system captures frames from a video feed, applies the trained model to detect fire, and triggers an alarm sound if fire is detected.
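The cascading process described above can be illustrated with a toy example. This is not OpenCV's implementation; the colour-based stages below are hypothetical stand-ins for the real classifier's learned feature tests, chosen only to show how cheap stages reject non-fire regions early.

```python
# Toy cascade: each stage is a cheap test on a candidate region; the
# region is rejected as soon as any stage fails, so most non-fire
# areas exit early and only promising regions reach the later stages.

def cascade_detect(pixel, stages):
    for stage in stages:
        if not stage(pixel):
            return False   # rejected early, cheaply
    return True            # survived all stages: candidate fire region

# Hypothetical fire-like colour stages for an (R, G, B) pixel
stages = [
    lambda px: px[0] > 150,           # stage 1: strong red channel
    lambda px: px[0] > px[1] > px[2], # stage 2: red > green > blue ordering
]

# In OpenCV the real cascade is exposed as (the XML file name here is
# an assumption, not the repository's actual path):
#   cascade = cv2.CascadeClassifier("fire_detection.xml")
#   regions = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```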

Authors:

Hirakjyoti Medhi, Biswajit Bera, Ashutosh Kumar and Roshan Jha
Email: hirak170802@gmail.com
