This repository contains implementations of various computer vision algorithms. Each algorithm has its own dedicated directory containing input materials, scripts, and output results. Below is a brief description of each implemented algorithm.
For each algorithm, navigate to its corresponding directory:
- input/: Contains input materials required for the algorithm.
- output/: Contains final and intermediate results produced by the algorithm.
- Implements the Harris corner detection algorithm to detect interest points in an image.
- Matches corresponding points between two images.
- Results are stored in the output/ directory, including an image visualizing matched points.
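The Harris response can be sketched in plain NumPy as below. This is a minimal illustration with hypothetical function names, not the repository's actual implementation; window sums are computed with an integral image:

```python
import numpy as np

def harris_response(img, k=0.04, window=3):
    """Harris response R = det(M) - k * trace(M)^2 from the structure tensor M."""
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # Windowed sum via an integral image (edge-padded to keep the shape)
        pad = window // 2
        c = np.cumsum(np.cumsum(np.pad(a, pad, mode="edge"), 0), 1)
        c = np.pad(c, ((1, 0), (1, 0)))
        return (c[window:, window:] - c[:-window, window:]
                - c[window:, :-window] + c[:-window, :-window])

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    return Sxx * Syy - Sxy * Sxy - k * (Sxx + Syy) ** 2
```

Interest points would then be extracted by thresholding the response map and applying non-maximum suppression.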
- Computes the perspective transformation of a given logo.
- Input and output images are stored in the respective directories.
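A perspective transformation is fully determined by a 3x3 homography, which can be estimated from four point correspondences. The sketch below (function names are my own, not the repository's code) uses the direct linear transform:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate 3x3 H with dst ~ H @ src using the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (last right-singular vector)
    H = np.linalg.svd(np.array(A, float))[2][-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pt):
    """Map a 2-D point through H, dividing out the homogeneous scale."""
    x, y, w = H @ [pt[0], pt[1], 1.0]
    return x / w, y / w
```

With H estimated from the logo's four corners and their target positions, every pixel can be warped the same way (e.g. with cv2.warpPerspective in an OpenCV-based pipeline).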
- Implements homography estimation and RANSAC to match scenes between two images.
- Helps in object detection and scene alignment.
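The RANSAC loop can be sketched as follows: fit a homography to random 4-point samples, score each candidate by its inlier count, and refit on the best inlier set. This is an illustrative NumPy version with assumed parameter values, not the repository's implementation:

```python
import numpy as np

def fit_homography(src, dst):
    """DLT estimate of H (dst ~ H @ src) from >= 4 correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A += [[-x, -y, -1, 0, 0, 0, u * x, u * y, u],
              [0, 0, 0, -x, -y, -1, v * x, v * y, v]]
    H = np.linalg.svd(np.array(A, float))[2][-1].reshape(3, 3)
    return H / H[2, 2]

def ransac_homography(src, dst, iters=500, thresh=3.0, seed=0):
    """Repeatedly fit H to random 4-point samples; keep the largest inlier set."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        # Reprojection error of every correspondence under this candidate H
        p = np.c_[src, np.ones(len(src))] @ H.T
        err = np.linalg.norm(p[:, :2] / p[:, 2:3] - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best.sum():
            best = inliers
    # Refit on all inliers for the final, outlier-free estimate
    return fit_homography(src[best], dst[best]), best
```

Mismatched feature pairs get rejected as outliers, which is what makes the scene alignment robust.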
- Computes epipolar lines between two images.
- Useful in stereo vision and depth estimation.
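Given a fundamental matrix F, the epipolar line for a point follows directly from l' = F x. A minimal sketch (names are illustrative; the repository's code is not shown here):

```python
import numpy as np

def epipolar_line(F, pt):
    """Epipolar line l' = F @ x in the second image for point x in the first."""
    a, b, c = F @ [pt[0], pt[1], 1.0]
    n = np.hypot(a, b)  # normalize so |a*x + b*y + c| is a point-line distance
    return a / n, b / n, c / n

def point_line_distance(line, pt):
    a, b, c = line
    return abs(a * pt[0] + b * pt[1] + c)
```

A matching point in the second image should lie on (or near) this line, which is why epipolar geometry reduces stereo matching to a 1-D search.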
- Processes a hand-held video recording to generate several outputs:
- Extracts the background from the video, creating a separate background-only video.
- Generates a foreground video that highlights the moving objects.
- Creates a panorama image by stitching frames together.
- Stabilizes the video by removing camera shake.
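The background-extraction step can be sketched as a per-pixel temporal median, a common baseline: pixels that stay constant across frames survive, while moving objects are averaged away. Function names here are illustrative, not the repository's actual code:

```python
import numpy as np

def background_median(frames):
    """Per-pixel temporal median: static background survives, movers vanish."""
    return np.median(np.stack(frames), axis=0)

def foreground_mask(frame, background, thresh=25):
    """Flag pixels that differ strongly from the background model as foreground."""
    return np.abs(frame.astype(float) - background) > thresh
```

The foreground video is then just the per-frame mask (or the frame with the mask applied).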
Below are examples of the original and background-extracted videos:
- Uses the Bag of Words (BoW) model to classify and recognize different scenes.
- Extracts feature descriptors and builds a vocabulary for classification.
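The BoW pipeline can be sketched in two steps: cluster all training descriptors into a vocabulary of visual words, then encode each image as a histogram of its nearest words. A minimal NumPy version with illustrative names (the repository likely uses richer descriptors such as SIFT):

```python
import numpy as np

def build_vocabulary(descriptors, k=8, iters=20):
    """Cluster local feature descriptors into k visual words with plain k-means."""
    # Farthest-point initialization keeps the initial centers spread out
    centers = [descriptors[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(descriptors - c, axis=1) for c in centers], axis=0)
        centers.append(descriptors[d.argmax()])
    centers = np.array(centers, float)
    for _ in range(iters):  # Lloyd iterations
        d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = descriptors[labels == j].mean(0)
    return centers

def bow_histogram(descriptors, centers):
    """Encode one image as a normalized histogram of its nearest visual words."""
    d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
    hist = np.bincount(d.argmin(1), minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

The histograms then serve as fixed-length feature vectors for any standard classifier.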
- Detects vanishing points and lines in images to understand scene geometry.
- Useful in applications like architectural analysis and camera calibration.
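In homogeneous coordinates a vanishing point is the common intersection of a set of lines, which reduces to a small least-squares problem. A sketch under that formulation (names are my own):

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (their cross product)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(lines):
    """Least-squares common intersection: the unit v minimizing ||L @ v||."""
    v = np.linalg.svd(np.array(lines, float))[2][-1]
    return v[:2] / v[2]
```

In practice the lines would come from an edge or segment detector, grouped by direction before intersecting each group.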
- Implements a neural network-based approach for scene recognition.
- Trained on different scene categories for classification.
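The training loop of such a classifier can be sketched as a tiny two-layer network trained with gradient descent. This is a toy NumPy stand-in for illustration (the repository's actual architecture and framework are not shown here):

```python
import numpy as np

def train_scene_classifier(X, y, n_classes, hidden=32, lr=0.1, epochs=200, seed=0):
    """Tiny two-layer net (ReLU + softmax) trained by full-batch gradient descent."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.1, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, n_classes)); b2 = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                       # one-hot labels
    for _ in range(epochs):
        H = np.maximum(0, X @ W1 + b1)             # hidden layer (ReLU)
        Z = H @ W2 + b2
        P = np.exp(Z - Z.max(1, keepdims=True))
        P /= P.sum(1, keepdims=True)               # softmax probabilities
        dZ = (P - Y) / len(X)                      # softmax cross-entropy gradient
        dH = (dZ @ W2.T) * (H > 0)                 # backprop through ReLU
        W2 -= lr * H.T @ dZ; b2 -= lr * dZ.sum(0)
        W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)
    return lambda Xq: np.argmax(np.maximum(0, Xq @ W1 + b1) @ W2 + b2, axis=1)
```

A real scene-recognition model would take image features (or raw pixels through convolutional layers) rather than 2-D toy inputs, but the forward/backward structure is the same.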
- Uses Histogram of Oriented Gradients (HOG) to detect faces in images.
- The output includes detected faces with bounding boxes.
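A HOG descriptor is built from per-cell histograms of gradient orientation, weighted by gradient magnitude; a classifier slid over the image then scores each window for a face. The cell-histogram core can be sketched as follows (a simplified version without block normalization; names are illustrative):

```python
import numpy as np

def hog_descriptor(patch, cell=8, bins=9):
    """Per-cell histograms of gradient orientation, weighted by magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180      # unsigned orientations
    h, w = patch.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            hist, _ = np.histogram(ang[i:i+cell, j:j+cell], bins=bins,
                                   range=(0, 180),
                                   weights=mag[i:i+cell, j:j+cell])
            feats.append(hist)
    f = np.concatenate(feats).astype(float)
    return f / (np.linalg.norm(f) + 1e-6)
```

The resulting fixed-length vector is what the face/non-face classifier consumes for each candidate window.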




