Skip to content

dinraj910/AI-Real-time-Face-Emotion-Detection

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

1 Commit
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 

Repository files navigation

🎯 Overview

A cutting-edge AI-powered emotion detection system that brings human emotion understanding to your fingertips. This intelligent application combines the power of deep learning, computer vision, and modern web technologies to create an interactive platform for real-time facial emotion analysis.

🤔 What is This?

A state-of-the-art deep learning web application that detects and classifies human emotions from facial expressions in real-time. Built with advanced Convolutional Neural Networks and trained on the comprehensive FER2013 dataset containing over 35,000 labeled facial images, this application achieves impressive accuracy in recognizing 7 distinct emotional states. The system processes images through sophisticated computer vision algorithms, identifying faces and analyzing subtle facial features to determine the predominant emotion with confidence scores.

Perfect for: Psychology research, customer sentiment analysis, interactive applications, educational demonstrations, mental health monitoring, user experience testing, and showcasing AI/ML expertise in professional portfolios.

💡 Why This Project?

  • 🧠 Learn Deep Learning: Hands-on implementation of CNN architecture demonstrating practical neural network design, training, and deployment workflows
  • 🎭 Real-World Application: Emotion detection powers critical applications in healthcare, security, customer service, marketing analytics, and human-computer interaction
  • Modern Tech Stack: Built with industry-standard tools including TensorFlow, Streamlit, and OpenCV, showcasing full-stack ML engineering capabilities
  • 🚀 Interactive UI: Intuitive Streamlit interface enabling both technical and non-technical users to experience AI-powered emotion recognition instantly
  • 📊 Production Ready: Robust model trained and validated on 35,887 diverse facial images ensuring reliable performance across different demographics
  • 🔬 Research Foundation: Grounded in academic research on facial expression recognition with proven methodologies from computer vision literature

✨ Features

Feature Description Status
🎭 7 Emotion Classes Angry, Disgust, Fear, Happy, Sad, Surprise, Neutral
📸 Real-Time Detection Live camera feed emotion recognition
🖼️ Image Upload Analyze emotions from uploaded photos
👥 Multiple Faces Detect emotions in multiple faces simultaneously
📊 Confidence Scores Display prediction confidence percentages
🎨 Visual Bounding Boxes Highlight detected faces with annotations
Fast Inference Optimized model for quick predictions
🌐 Web Interface User-friendly Streamlit dashboard
💾 Pre-trained Model Ready-to-use trained CNN model included
🔄 Real-time Processing Instant emotion feedback

🏗️ Architecture

🔍 Click to view System Architecture
┌─────────────────────────────────────────────────────────────────────┐
│                        EMOTION DETECTION SYSTEM                      │
└─────────────────────────────────────────────────────────────────────┘

┌──────────────┐         ┌──────────────┐         ┌──────────────┐
│   INPUT      │         │ PROCESSING   │         │   OUTPUT     │
│   LAYER      │────────▶│   PIPELINE   │────────▶│   LAYER      │
└──────────────┘         └──────────────┘         └──────────────┘
      │                         │                         │
      │                         │                         │
      ▼                         ▼                         ▼
┌─────────┐              ┌─────────┐              ┌─────────┐
│ Camera  │              │  Face   │              │Emotion  │
│   or    │─────────────▶│Detection│─────────────▶│Label +  │
│ Upload  │              │(Haar)   │              │Confidence│
└─────────┘              └─────────┘              └─────────┘
                               │
                               ▼
                        ┌─────────────┐
                        │  Grayscale  │
                        │ Conversion  │
                        └─────────────┘
                               │
                               ▼
                        ┌─────────────┐
                        │   Resize    │
                        │  48x48 px   │
                        └─────────────┘
                               │
                               ▼
                        ┌─────────────┐
                        │ Normalize   │
                        │   [0, 1]    │
                        └─────────────┘
                               │
                               ▼
         ┌─────────────────────────────────────────┐
         │        CNN MODEL ARCHITECTURE            │
         ├─────────────────────────────────────────┤
         │  Conv2D (32) → ReLU → MaxPool           │
         │  Conv2D (64) → ReLU → MaxPool           │
         │  Conv2D (128) → ReLU → MaxPool          │
         │  Flatten                                 │
         │  Dense (128) → ReLU → Dropout           │
         │  Dense (7) → Softmax                    │
         └─────────────────────────────────────────┘
                               │
                               ▼
                    ┌──────────────────┐
                    │  7 Emotion Classes│
                    │   Angry           │
                    │   Disgust         │
                    │   Fear            │
                    │   Happy           │
                    │   Sad             │
                    │   Surprise        │
                    │   Neutral         │
                    └──────────────────┘
🧠 Click to view CNN Model Details

Deep Learning Architecture

Model Type: Convolutional Neural Network (CNN)
Framework: TensorFlow/Keras
Dataset: FER2013 (Facial Expression Recognition 2013)
Training Images: 35,887 grayscale images
Input Size: 48×48 pixels
Output Classes: 7 emotions

Layer Configuration:

  1. Convolutional Layers:

    • 3 Conv2D layers with increasing filters (32 → 64 → 128)
    • ReLU activation for non-linearity
    • MaxPooling for spatial dimension reduction
  2. Regularization:

    • Dropout layers to prevent overfitting
    • Batch normalization for stable training
  3. Dense Layers:

    • Fully connected layer (128 neurons)
    • Output layer with Softmax activation
  4. Optimization:

    • Adam optimizer
    • Categorical cross-entropy loss

📁 Project Structure

📦 Emotion-Detection-CNN
┣ 📂 models/                    # Trained model files
┃ ┣ 📄 emotion_fer2013.keras    # Main trained model
┃ ┣ 📄 emotion_fer2013.h5       # Alternative model format
┃ ┗ 📄 best_model.h5           # Best performing checkpoint
┣ 📂 utils/                     # Utility modules
┃ ┣ 📄 face_detector.py        # Face detection using Haar Cascade
┃ ┗ 📄 predictor.py            # Emotion prediction logic
┣ 📂 notebooks/                 # Jupyter notebooks
┃ ┣ 📓 Emotion_Detection.ipynb # Main training notebook
┃ ┣ 📓 Emotion_Detection_copy.ipynb
┃ ┗ 📄 emotion_detection.py    # Python script version
┣ 📂 screenshots/               # Application screenshots
┃ ┣ 🖼️ 1.png
┃ ┣ 🖼️ 2.png
┃ ┣ 🖼️ 3.png
┃ ┣ 🖼️ 4.png
┃ ┣ 🖼️ 5.png
┃ ┣ 🖼️ 6.png
┃ ┗ 🖼️ 7.png
┣ 📂 assets/                    # Static assets
┣ 📄 app.py                     # Main Streamlit application
┣ 📄 requirements.txt           # Python dependencies
┗ 📄 README.md                  # Project documentation

🚀 Quick Start

📋 Prerequisites

Before you begin, ensure you have the following installed:

  • Python 3.8+ (Recommended: Python 3.12)
  • pip (Python package manager)
  • Webcam (Optional, for live camera feature)
  • Git (For cloning the repository)

⚙️ Installation

Follow these steps to get the project running locally:

Step 1: Clone the Repository
git clone https://github.com/yourusername/emotion-detection-cnn.git
cd emotion-detection-cnn
Step 2: Create Virtual Environment (Recommended)
# Windows
python -m venv venv
venv\Scripts\activate

# macOS/Linux
python3 -m venv venv
source venv/bin/activate
Step 3: Install Dependencies
pip install -r requirements.txt

Dependencies include:

  • streamlit - Web application framework
  • tensorflow - Deep learning framework
  • opencv-python - Computer vision library
  • numpy - Numerical computing
  • pillow - Image processing
Step 4: Run the Application
streamlit run app.py

The app will automatically open in your browser at http://localhost:8501

🎮 Usage

  1. Choose Input Method:

    • 📤 Upload Image: Click "Browse files" to upload a photo
    • 📸 Use Camera: Click "Take a photo" for real-time detection
  2. View Results:

    • See detected faces with bounding boxes
    • View emotion label and confidence score
    • Analyze multiple faces in one image
  3. Supported Emotions:

    • 😠 Angry
    • 🤢 Disgust
    • 😨 Fear
    • 😊 Happy
    • 😢 Sad
    • 😲 Surprise
    • 😐 Neutral

📸 Screenshots

🎨 Application Interface

Home Screen
🏠 Home Screen
Clean and intuitive interface
Upload Feature
📤 Upload Feature
Easy drag-and-drop upload
Happy Detection
😊 Happy Detection
Real-time emotion recognition
Sad Detection
😢 Sad Detection
Accurate emotion classification
Multiple Faces
👥 Multiple Face Detection
Handles multiple faces simultaneously
Camera Feature
📸 Live Camera
Real-time camera feed processing
Results Display
📊 Results Display
Confidence scores and bounding boxes

⚙️ Configuration

Configuration Value Description
MODEL_PATH models/emotion_fer2013.keras Path to trained model
IMG_SIZE 48 Input image dimensions (48×48)
EMOTIONS 7 classes Number of emotion categories
PAGE_LAYOUT centered Streamlit layout mode
FACE_DETECTOR Haar Cascade Face detection algorithm
COLOR_MODE Grayscale Image processing format
NORMALIZATION [0, 1] Pixel value range

🔧 Environment Variables

No environment variables required! The app works out of the box.


🛠️ Tech Stack

Core Technologies

Python
Python
TensorFlow
TensorFlow
Keras
Keras
Streamlit
Streamlit
OpenCV
OpenCV
NumPy
NumPy

Technology Stack Breakdown

Layer Technology Purpose
Frontend Streamlit Web UI & User Interaction
Backend Python Core Application Logic
Deep Learning TensorFlow/Keras Model Training & Inference
Computer Vision OpenCV Face Detection & Image Processing
Numerical Computing NumPy Array Operations & Math
Image Processing Pillow Image Manipulation
Model Format .keras / .h5 Serialized Neural Network
Version Control Git Source Code Management

📊 Performance

🎯 Model Metrics

Metric Value Details
Training Accuracy ~65-70% On FER2013 dataset
Validation Accuracy ~60-65% Generalization performance
Inference Time <100ms Per image prediction
Model Size ~5-10 MB Compressed Keras format
Input Resolution 48×48 Grayscale images
Parameters ~2-3M Trainable parameters
Dataset Size 35,887 Training images
Emotion Classes 7 Output categories

⚡ Speed Benchmarks

Operation Time Hardware
Model Loading ~2-3s CPU (First Load)
Face Detection ~50ms Haar Cascade
Emotion Prediction ~30ms TensorFlow CPU
Total Pipeline ~100ms End-to-End

📈 Class Performance

Emotion Precision Characteristics
😊 Happy High Most recognizable
😢 Sad Medium Good separation
😲 Surprise High Distinct features
😠 Angry Medium Similar to disgust
😐 Neutral Medium Baseline state
😨 Fear Low-Medium Complex expression
🤢 Disgust Low-Medium Least samples

🗺️ Roadmap

🎯 Development Timeline

Version Status Features Timeline
v1.0 ✅ Completed Basic emotion detection with image upload Released
v1.1 ✅ Completed Live camera integration & real-time detection Released
v1.2 🔄 In Progress Model optimization & accuracy improvements Q1 2026
v2.0 📋 Planned Video upload support & continuous tracking Q2 2026
v2.1 📋 Planned Analytics dashboard & emotion history Q3 2026
v3.0 📋 Future Multi-language support & cloud deployment Q4 2026

🚀 Upcoming Features

📋 Version 1.2 - Model Enhancement (In Progress)
  • Improve model accuracy with data augmentation
  • Implement model quantization for faster inference
  • Add ensemble model support
  • Fine-tune hyperparameters
  • Deploy optimized TFLite model
📋 Version 2.0 - Advanced Features (Planned)
  • Video Upload Support: Analyze emotions in video files
  • Real-time Webcam Stream: Continuous emotion tracking
  • Emotion Timeline: Track emotion changes over time
  • Multi-person Analytics: Group emotion analysis
  • Export Results: CSV/JSON report generation
  • Dark Mode UI: Enhanced visual experience
📋 Version 2.1 - Analytics Dashboard (Future)
  • Emotion distribution charts
  • Historical data visualization
  • Comparative analysis tools
  • Custom report generation
  • API endpoint creation
📋 Version 3.0 - Enterprise Features (Long-term)
  • Multi-language support
  • User authentication system
  • Cloud deployment (AWS/Azure/GCP)
  • REST API for integration
  • Database storage for history
  • Advanced emotion sub-categories
  • Age and gender detection
  • Facial landmark detection

💡 Contribution Ideas

Want to contribute? Here are some ideas:

  • 🎨 Improve UI/UX design
  • 📊 Add data visualization features
  • 🧪 Experiment with different model architectures
  • 🌍 Add internationalization
  • 📝 Improve documentation
  • 🐛 Bug fixes and optimization

🤝 Contributing

Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated!

How to Contribute:

  1. Fork the Project

    git clone https://github.com/dinraj910/emotion-detection-cnn.git
  2. Create your Feature Branch

    git checkout -b feature/AmazingFeature
  3. Commit your Changes

    git commit -m 'Add some AmazingFeature'
  4. Push to the Branch

    git push origin feature/AmazingFeature
  5. Open a Pull Request

📝 Contribution Guidelines

  • Write clear, descriptive commit messages
  • Follow PEP 8 style guide for Python code
  • Add comments to explain complex logic
  • Update documentation for new features
  • Test your changes thoroughly
  • Ensure backward compatibility

🐛 Report Bugs

Found a bug? Please open an issue with:

  • Detailed description
  • Steps to reproduce
  • Expected vs actual behavior
  • Screenshots if applicable
  • Environment details

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

MIT License

Copyright (c) 2026 Emotion Detection CNN

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

👨‍💻 Author

DINRAJ K DINESH 👋

AI/ML Engineer | Deep Learning Enthusiast | Python Developer

I'm passionate about building intelligent systems that solve real-world problems using cutting-edge artificial intelligence and machine learning technologies. This project demonstrates my comprehensive expertise in:

  • 🧠 Deep Learning & Neural Networks - Designing and implementing CNN architectures
  • 🐍 Python Programming - Writing clean, efficient, and maintainable code
  • 🎨 Web Application Development - Creating intuitive user interfaces with Streamlit
  • 📊 Computer Vision - Processing and analyzing visual data with OpenCV
  • 🚀 End-to-End ML Deployment - From model training to production deployment

GitHub LinkedIn Portfolio Email Twitter


🙏 Acknowledgments

This project wouldn't be possible without:

  • 📚 FER2013 Dataset: Facial Expression Recognition Challenge dataset
  • 🧠 TensorFlow Team: For the amazing deep learning framework
  • 🎨 Streamlit: For the incredible web app framework
  • 👀 OpenCV: For computer vision capabilities
  • 🌟 Open Source Community: For inspiration and resources
  • 📖 Research Papers: On emotion recognition and CNN architectures
  • 💡 Kaggle Community: For tutorials and insights
  • 🎓 Online Courses: Deep Learning specializations

📚 References & Inspiration


⭐ Show Your Support

If you found this project helpful or interesting:

⭐ Star this repository

It helps others discover this project and motivates me to create more!

Star History Chart

📢 Share this project

Twitter LinkedIn Reddit

💖 Support the Project

Buy Me A Coffee

📞 Contact & Support

Need Help? Have Questions?

💬 Open an Issue: Create New Issue
📧 Email: dinrajdinesh564@gmail.com
💼 LinkedIn: Connect with me
🐦 Twitter: @dinraj910

📚 Documentation & Resources


🌟 Featured In

Platform Status
🚀 Product Hunt Coming Soon
📰 Medium Blog Coming Soon
🎬 YouTube Demo Coming Soon
📱 LinkedIn Post Coming Soon

📈 Project Stats

GitHub Stars GitHub Forks GitHub Watchers GitHub Issues GitHub Pull Requests GitHub Contributors GitHub Last Commit GitHub Repo Size Lines of Code


🏆 Project Achievements

🎯 Real-time emotion detection
🧠 7 emotion classifications
📸 Live camera integration
⚡ <100ms inference time
🎨 Modern UI/UX
📦 Production-ready deployment
📚 Comprehensive documentation
✅ 100% Open Source


Made with ❤️ and 🧠 using Python & Deep Learning

Footer

About

Built with advanced Convolutional Neural Networks and trained on the comprehensive FER2013 dataset containing over 35,000 labeled facial images, this application achieves impressive accuracy in recognizing 7 distinct emotional states.

Resources

Stars

Watchers

Forks

Releases

No releases published

Packages

 
 
 

Contributors