
multimodal-transformers

Here are 2 public repositories matching this topic...

A transformer-based system that generates time-synchronized captions, speaker-attributed transcripts, and abstractive summaries from videos by integrating audio and visual modalities. It fuses CLIP and Whisper embeddings via cross-attention and uses T5-based generation to produce accurate, context-aware outputs.

  • Updated Dec 19, 2025
  • Python
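To illustrate the cross-attention fusion mentioned in the description, here is a minimal NumPy sketch, not the repository's actual code: visual (CLIP-style) embeddings attend over audio (Whisper-style) embeddings. All array names and dimensions are hypothetical placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d_k):
    """Scaled dot-product cross-attention.

    queries:     (T_q, d)  e.g. visual frame embeddings
    keys_values: (T_kv, d) e.g. audio segment embeddings
    Returns fused features of shape (T_q, d).
    """
    scores = queries @ keys_values.T / np.sqrt(d_k)  # (T_q, T_kv)
    weights = softmax(scores, axis=-1)               # rows sum to 1
    return weights @ keys_values                     # audio info per frame

rng = np.random.default_rng(0)
d = 64
visual = rng.standard_normal((10, d))   # hypothetical CLIP frame embeddings
audio = rng.standard_normal((25, d))    # hypothetical Whisper segment embeddings
fused = cross_attention(visual, audio, d)
print(fused.shape)  # one fused vector per visual frame
```

In the described system, the fused sequence would then condition a T5 decoder; in practice the projections for queries, keys, and values are learned, which this sketch omits for brevity.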
