This desktop app implements a simplified version of audio fingerprinting, similar to the technology used by music recognition apps such as Shazam, to identify songs from short audio samples. The project builds a small database of songs, each separated into music and vocal tracks, and generates a spectrogram for the first 30 seconds of each audio file. Key features are extracted from these spectrograms, and perceptual hash functions are applied to produce condensed fingerprints. Given a new audio sample, the application generates its spectrogram and fingerprint, then compares them against the database to find the closest matches using a similarity index, all presented in a user-friendly PyQt5 GUI. The app also lets users create a weighted average of two audio files and search for matches, demonstrating the robustness of the fingerprinting technique.
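The pipeline above (spectrogram, perceptual hash, similarity index, weighted mixing) can be sketched as follows. This is a minimal illustration, not the project's actual implementation: the function names, the average-hash scheme, and the `hash_size` parameter are assumptions standing in for whatever feature extraction and hashing the app really uses.

```python
import numpy as np
from scipy import signal


def spectrogram_features(samples, sr, duration_s=30):
    """Spectrogram of the first `duration_s` seconds of a track."""
    clip = samples[: sr * duration_s]
    _, _, spec = signal.spectrogram(clip, fs=sr)
    return spec


def perceptual_hash(spec, hash_size=16):
    """Toy average-hash over a downsampled log-spectrogram.

    A stand-in for the project's perceptual hash: sample the
    log-spectrogram on a hash_size x hash_size grid, then threshold
    each cell against the grid mean to get a boolean fingerprint.
    """
    log_spec = np.log1p(spec)
    rows = np.linspace(0, log_spec.shape[0] - 1, hash_size).astype(int)
    cols = np.linspace(0, log_spec.shape[1] - 1, hash_size).astype(int)
    small = log_spec[np.ix_(rows, cols)]
    return (small > small.mean()).flatten()


def similarity(h1, h2):
    """Similarity index: 1 minus the normalized Hamming distance."""
    return 1.0 - np.count_nonzero(h1 != h2) / h1.size


def weighted_mix(a, b, w):
    """Weighted average of two equal-length tracks, with w in [0, 1]."""
    return w * a + (1.0 - w) * b
```

Matching a query then amounts to hashing its spectrogram and ranking all database fingerprints by `similarity`, highest first.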
demo.mp4
When a user uploads vocal-only files (i.e. no instruments), the results do not include any instrument-only files from the database.

When a user uploads files that contain no vocals, none of the results is a vocals-only file.

This project was supervised by Dr. Tamer Basha & Eng. Omar, who provided invaluable guidance and expertise throughout its development, as part of the Digital Signal Processing course at the Faculty of Engineering, Cairo University.



