The Automated MOM project demonstrates the power of Natural Language Processing (NLP) and Large Language Models (LLMs) in automating and simplifying the creation and sharing of Minutes of Meeting (MoM). 🎯 This tool transforms audio or video recordings of meetings into detailed and actionable summaries, saving time and effort for organizations.
Not only does it automate transcription, but it also provides editable MoM, giving users the flexibility to refine and finalize summaries as needed. ✨
- Converts recorded meeting files into accurate text conversations.
- Supports a wide range of audio and video formats for seamless integration.
- Processes transcriptions into concise and detailed MoM.
- Provides editable summaries for customization before sharing or archiving.
- Simplifies the process of generating and delegating MoM emails.
- Reduces manual intervention, saving time and minimizing errors.
The project aspires to evolve into a real-time meeting assistant capable of:
- Attending live meetings on behalf of the user. 🤖
- Generating real-time transcriptions and summaries. ⏱️
- Seamlessly integrating with organizational workflows for automated meeting management. 📊
- Programming Languages:
  - Python
  - HTML, CSS, JavaScript
- Key Libraries/Frameworks:
  - Whisper: For high-quality transcription. 🎧
  - PyTorch: Supports the Whisper model implementation. 🔥
  - Flask: For web application development. 🌐
  - FFmpeg: Enables processing of audio/video files. 🎥
  - NLP frameworks: For text processing and summarization. 🧠
- Development Tools: PyCharm, VS Code, GitHub for version control.
Follow these steps to set up and run the project on your local machine.
Before starting, ensure the following are installed on your system:
- Python 3.11 or later
- pip (Python package installer)
- FFmpeg (required for audio processing by Whisper)
- Download and install FFmpeg:
  - Windows: add the downloaded `ffmpeg.exe` to your `PATH` environment variable.
  - Linux (Debian/Ubuntu):
    ```bash
    sudo apt update
    sudo apt install ffmpeg
    ```
  - macOS (Homebrew):
    ```bash
    brew install ffmpeg
    ```
- Fork the Repository:
  Click the "Fork" button at the top right of the repository page.
- Clone the Repository:
  ```bash
  git clone https://github.com/hpriyankaa/Automated-MoM.git
  ```
- Navigate to the Project Directory:
  ```bash
  cd automated-mom
  ```
- Install Dependencies:
  ```bash
  pip install -r requirements.txt
  ```
- Set Up Environment Variables:
  Create a `.env` file in the project directory and add the following:
  ```
  GROQ_API_KEY=your_groq_api_key
  ```
  Replace `your_groq_api_key` with your actual Groq API key; it is used for faster summarization of the transcription.
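Once the `.env` file is in place, the key becomes available through the environment (in the real app, `python-dotenv`'s `load_dotenv()` populates `os.environ` from the file). A minimal sketch — `get_groq_api_key` is an illustrative helper, not part of the project's code:

```python
import os

def get_groq_api_key() -> str:
    """Read the Groq API key from the environment.

    In the actual tool, load_dotenv() from python-dotenv would fill
    os.environ from the .env file before this function is called.
    """
    key = os.environ.get("GROQ_API_KEY", "")
    if not key:
        raise RuntimeError(
            "GROQ_API_KEY is not set - create a .env file as described above"
        )
    return key

# Simulate what load_dotenv() would do, then read the key back.
os.environ["GROQ_API_KEY"] = "your_groq_api_key"
print(get_groq_api_key())
```

Failing fast with a clear message here is friendlier than letting the summarization call fail later with an opaque authentication error.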
- Verify Installation:
  Run the following command to list all installed dependencies:
  ```bash
  python -m pip list
  ```
  Ensure the following packages are listed:
  - openai-whisper
  - python-docx
  - python-dotenv
  - llama-index
  - huggingface-hub
  - nest-asyncio
  - torch
  - ffmpeg-python
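A quick programmatic alternative to eyeballing `pip list`: query each required distribution via the standard library's `importlib.metadata`. The package names are the ones listed above; `missing_packages` is an illustrative helper, not part of the project:

```python
from importlib import metadata

# Distribution names from the project's requirements list.
REQUIRED = [
    "openai-whisper", "python-docx", "python-dotenv", "llama-index",
    "huggingface-hub", "nest-asyncio", "torch", "ffmpeg-python",
]

def missing_packages(names):
    """Return the subset of distribution names that are not installed."""
    missing = []
    for name in names:
        try:
            metadata.version(name)  # raises if the distribution is absent
        except metadata.PackageNotFoundError:
            missing.append(name)
    return missing

if __name__ == "__main__":
    gaps = missing_packages(REQUIRED)
    print("All set!" if not gaps else f"Missing: {', '.join(gaps)}")
```

Note that `importlib.metadata` checks *distribution* names (e.g. `openai-whisper`), which can differ from the import name (`whisper`), so this matches what `pip list` shows.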
- Upload an audio or video file to the tool. 🎵
- The tool transcribes the recording into text conversations. 📄
- Review and edit the transcription (if needed). ✏️
- Generate a summarized MoM based on the transcribed text. 📋
- Customize and export the MoM in your preferred format. 💾
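The workflow above can be sketched end-to-end. Real transcription would call Whisper (e.g. `whisper.load_model(...).transcribe(path)`) and summarization would go through the Groq API; both are stubbed here so the shape of the pipeline — transcript in, editable MoM out — is the focus. All names below are illustrative, not the project's actual modules:

```python
from dataclasses import dataclass, field

@dataclass
class MoM:
    title: str
    summary: str
    action_items: list = field(default_factory=list)

    def to_markdown(self) -> str:
        """Export the MoM in an editable, shareable format."""
        lines = [f"# {self.title}", "", "## Summary", self.summary]
        if self.action_items:
            lines += ["", "## Action Items"]
            lines += [f"- {item}" for item in self.action_items]
        return "\n".join(lines)

def summarize(transcript: str) -> MoM:
    # Stub: the real tool sends the transcript to an LLM and parses
    # the response. Here we fake a one-line summary.
    first_sentence = transcript.split(".")[0].strip()
    return MoM(title="Minutes of Meeting",
               summary=first_sentence,
               action_items=["Review and edit before sharing"])

transcript = "Team agreed to ship v2 on Friday. QA starts Wednesday."
print(summarize(transcript).to_markdown())
```

Keeping the MoM as structured data until the final export step is what makes the "review and edit before sharing" stage cheap to support.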
- Real-Time Transcription: Live transcription during meetings. 🕒
- Cloud Integration: Save and access transcriptions/MoM from the cloud. ☁️
- Multilingual Support: Transcribe and summarize meetings in multiple languages. 🌍
- AI-Powered Insights: Highlight key decisions, action items, and deadlines from conversations. 🎯
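As a taste of what the planned AI-powered insights could build on, even a naive keyword pass can flag candidate action items and deadlines in a transcript — a toy heuristic for illustration, not the LLM-based approach the roadmap describes:

```python
import re

# Toy heuristic: commitments and weekday deadlines. A real insights
# feature would use an LLM rather than a cue-word list.
ACTION_CUES = re.compile(
    r"\b(will|must|should|"
    r"by (?:monday|tuesday|wednesday|thursday|friday))\b",
    re.IGNORECASE,
)

def flag_action_items(transcript: str) -> list[str]:
    """Return sentences that look like commitments or deadlines."""
    sentences = [s.strip() for s in transcript.split(".") if s.strip()]
    return [s for s in sentences if ACTION_CUES.search(s)]

notes = ("Alice will draft the spec. The weather was nice. "
         "Bob should review it by Friday.")
print(flag_action_items(notes))
```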
We welcome contributions from the community! 🧑💻
To contribute:
- Fork the repository.
- Create a new branch:
  ```bash
  git checkout -b feature-name
  ```
- Commit your changes and push:
  ```bash
  git commit -m "Added feature-name"
  git push origin feature-name
  ```
- Submit a pull request. ✅
This project is licensed under the MIT License. See the LICENSE file for details.
For questions, feedback, or support, reach out: