VLDBench: A large-scale benchmark for evaluating Vision-Language Models (VLMs) and Large Language Models (LLMs) on multimodal disinformation detection.
Code and recipes for compiling and analyzing the "Four Shades of Life Sciences" dataset.
Adversarial AI engine that reverse-engineers video content to detect narrative manipulation, emotional coercion, and logical fallacies. Powered by Google Gemini 3.
Source Code for the Bachelor's Project: Hybrid Small Language Models for Accurate Multimodal Disinformation and Misinformation Analysis
Notebooks that analyze daily trends in online news coverage, examining news volume, topic distribution, source reliability, disinformation tactics, check-worthy claims, and visual-text alignment.
Bharat AI is a next-generation misinformation detection framework combining local LLMs and transparent reasoning. It offers multilingual verification, context awareness, and a user-friendly interface for real-time fact-checking across India.
🛡 Seithar Cognitive Defense: detects cognitive exploitation in texts, URLs, and files using pattern matching and optional LLM analysis.