Sports Image Classification (100 Classes)

Project Overview

This project focuses on multi-class image classification of sports images using Deep Learning.
The dataset consists of 100 different sports categories, and multiple CNN-based models were trained and evaluated to compare performance.

The objectives were to:

  • Build baseline CNN models
  • Apply regularization techniques
  • Use transfer learning (MobileNetV2)
  • Compare test performance of all models

Dataset

  • 100 sports categories
  • Images resized to 224×224
  • Loaded using image_dataset_from_directory
  • Label mode: categorical
  • Batch size: 32
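The loading step above can be sketched with the Keras utility named in the list; the function name and directory layout here are assumptions, not taken from the repository:

```python
import tensorflow as tf

def load_split(root, image_size=(224, 224), batch_size=32, shuffle=True):
    """Load one dataset split from class subfolders under `root`.

    Each subdirectory of `root` is treated as one sports class;
    label_mode="categorical" yields one-hot labels for a softmax head.
    """
    return tf.keras.utils.image_dataset_from_directory(
        root,
        label_mode="categorical",  # one-hot vectors, matching the softmax output
        image_size=image_size,     # images resized to 224x224 on load
        batch_size=batch_size,
        shuffle=shuffle,           # keep False for test data so labels stay aligned
    )

# Hypothetical layout: train_ds = load_split("data/train")
#                      test_ds  = load_split("data/test", shuffle=False)
```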

Models Implemented

🔹 Model 1 – Basic CNN

  • 3 Conv2D layers
  • AveragePooling
  • Dense layers
  • Softmax output (100 classes)
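A minimal sketch of such a baseline; the filter counts and dense width are assumptions, since the README does not state exact layer sizes:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_basic_cnn(num_classes=100, input_shape=(224, 224, 3)):
    """Baseline: 3 Conv2D blocks with average pooling, then dense layers."""
    return tf.keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255),             # scale pixels to [0, 1]
        layers.Conv2D(32, 3, activation="relu"),
        layers.AveragePooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.AveragePooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.AveragePooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),    # assumed width
        layers.Dense(num_classes, activation="softmax"),
    ])
```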

🔹 Model 2 – CNN with Regularization

  • L2 Regularization
  • Dropout (0.30)
  • Dense layers
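One way Model 2's additions could look; only the dropout rate (0.30) comes from the list above, and the L2 strength is an assumption:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_regularized_cnn(num_classes=100, input_shape=(224, 224, 3)):
    """Baseline CNN shape plus L2 weight penalties and Dropout(0.30)."""
    l2 = regularizers.l2(1e-4)  # assumed strength; not stated in the README
    return tf.keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255),
        layers.Conv2D(32, 3, activation="relu", kernel_regularizer=l2),
        layers.AveragePooling2D(),
        layers.Conv2D(64, 3, activation="relu", kernel_regularizer=l2),
        layers.AveragePooling2D(),
        layers.Conv2D(128, 3, activation="relu", kernel_regularizer=l2),
        layers.AveragePooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu", kernel_regularizer=l2),
        layers.Dropout(0.30),  # rate taken from the model description
        layers.Dense(num_classes, activation="softmax"),
    ])
```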

🔹 Model 3 – CNN with EarlyStopping + Dropout

  • Regularization
  • EarlyStopping
  • Dropout
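The EarlyStopping piece is a standard Keras callback; the patience value here is an assumption:

```python
import tensorflow as tf

# Stop training once validation loss stops improving and keep the best weights.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=3,                  # assumed; the README does not state a value
    restore_best_weights=True,
)

# Typical usage (datasets and epoch count are placeholders):
# model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=[early_stop])
```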

🔹 Model 4 – Transfer Learning (MobileNetV2)

  • Pretrained on ImageNet
  • Base model frozen
  • GlobalAveragePooling
  • Dense(256) + Dropout(0.5)
  • Softmax output
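The five bullets above map directly onto a Keras transfer-learning head. A sketch, with the `weights` argument parameterized so the structure can be checked without downloading the ImageNet weights:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_mobilenet_classifier(num_classes=100, input_shape=(224, 224, 3),
                               weights="imagenet"):
    """Frozen MobileNetV2 base + GlobalAveragePooling + Dense(256) + Dropout(0.5)."""
    base = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights=weights)
    base.trainable = False  # freeze the pretrained convolutional base

    inputs = tf.keras.Input(shape=input_shape)
    # MobileNetV2 expects inputs scaled to [-1, 1], not a simple 1/255 rescale.
    x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
    x = base(x, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)
```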

Test Accuracy Comparison

Model                                      Test Accuracy
Simple CNN                                 3.8%
CNN with L2                                4.2%
CNN with L2 & Dropout (Early Stopping)     5.4%
MobileNetV2                                89.4%

Observations

  1. Baseline CNN performed better than expected, achieving ~21% accuracy.
  2. Adding L2 Regularization and Dropout did not significantly improve test accuracy.
  3. Model 3 showed lower performance, likely due to underfitting.
  4. MobileNetV2 initially produced extremely low accuracy (1.8%), indicating:
    • Possible preprocessing mismatch
    • Dataset pipeline inconsistency
    • Class order mismatch between training and testing

Training Behavior Insights

  • Training accuracy was significantly higher than validation accuracy.
  • Validation loss increased during training for scratch CNN models.
  • This indicates strong overfitting.

Key Debugging Findings

  • MobileNetV2 requires preprocess_input() instead of rescaling.
  • Train and test datasets must undergo identical preprocessing.
  • Class folder order must match exactly in train and test directories.
  • Shuffle should be disabled for test dataset.
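The findings above condense into one pipeline rule: every split must go through an identical path. A sketch, with hypothetical directory paths, assuming the Keras utilities used elsewhere in the project:

```python
import tensorflow as tf

# MobileNetV2's own preprocessing: scales pixels to [-1, 1].
preprocess = tf.keras.applications.mobilenet_v2.preprocess_input

def make_split(root, shuffle):
    """Build one split with the exact same preprocessing as every other split."""
    ds = tf.keras.utils.image_dataset_from_directory(
        root,
        label_mode="categorical",
        image_size=(224, 224),
        batch_size=32,
        shuffle=shuffle,  # must be False for test data so labels stay aligned
    )
    # Class order is inferred alphabetically from folder names, so train and
    # test directories must contain identical class folders.
    return ds.map(lambda x, y: (preprocess(x), y))

# train_ds = make_split("data/train", shuffle=True)   # hypothetical paths
# test_ds  = make_split("data/test",  shuffle=False)
```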

Inferences

  1. Training CNN from scratch on 100 classes is challenging.
  2. Regularization helps stabilize training but does not guarantee a performance boost.
  3. Transfer learning is powerful but highly sensitive to preprocessing consistency.
  4. Dataset pipeline correctness is as important as model architecture.
  5. Overfitting was a major issue in scratch models.

Future Improvements

  • Fix MobileNetV2 preprocessing pipeline
  • Fine-tune upper layers of MobileNet
  • Use EfficientNetB0 or ResNet50
  • Apply stronger data augmentation
  • Implement learning rate scheduling
  • Perform per-class accuracy analysis
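Of these, learning-rate scheduling is the most mechanical to add; one common option, with assumed values:

```python
import tensorflow as tf

# Halve the learning rate whenever validation loss plateaus for 2 epochs.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss",
    factor=0.5,      # assumed decay factor
    patience=2,      # assumed patience
    min_lr=1e-6,
)

# Typical usage (datasets and epoch count are placeholders):
# model.fit(train_ds, validation_data=val_ds, epochs=30, callbacks=[reduce_lr])
```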

Tech Stack

  • Python
  • TensorFlow / Keras
  • MobileNetV2 (ImageNet pretrained)
  • Matplotlib
  • Scikit-learn

Conclusion

This project demonstrates:

  • Comparative analysis of CNN architectures
  • Impact of regularization
  • Importance of transfer learning
  • Critical role of preprocessing consistency in deep learning pipelines

The experiment highlights that debugging and data pipeline validation are essential components of machine learning workflows.


Author

Ayush Tandon
B.Tech – Mathematics & Computing
