A real-time computer vision system built with Python and OpenCV, designed to detect unauthorized motion and track targets in a designated perimeter. This project simulates foundational surveillance and perimeter security protocols commonly used in defense systems.
- Adaptive Background Modeling: Utilizes the `MOG2` algorithm to continuously learn the environment and filter out minor lighting changes or shadows.
- Real-time Video Processing: Processes live webcam feeds or recorded surveillance footage.
- Smart Filtering: Ignores minor environmental noise (e.g., wind, small shadows) via morphological operations (dilation) to prevent false alarms.
- Live HUD Overlay: Displays real-time status, timestamps, and target bounding boxes.
- Language: Python 3.x
- Computer Vision: OpenCV (`cv2`)
- Data Handling: NumPy
- Clone this repository:

  ```bash
  git clone https://github.com/YOUR_USERNAME/autonomous-intrusion-detection.git
  cd autonomous-intrusion-detection
  ```

- Install the required dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Run the system:

  ```bash
  python main.py
  ```

During system testing, the following expected behaviors of the motion-based architecture were observed:
- Stationary Target Assimilation: Since the system relies on adaptive background modeling, a target that remains completely motionless for a certain period will be absorbed into the background model and classified as "Safe" (similar to camouflage).
- Camera Motion Sensitivity: The system assumes a static camera feed (e.g., a wall-mounted perimeter camera). Moving the camera itself shifts all pixels, causing the background subtractor to detect massive frame-wide motion (False Positive).
The following enhancements are planned to address these limitations:
- Object Detection Integration: Upgrading the pipeline with a trained AI model (e.g., YOLO) to identify specific object classes (human, vehicle) regardless of their movement or camera shake.
- Optical Flow / Stabilization: Implementing camera motion compensation to allow deployment on moving platforms like UAVs (Drones) or UGVs.