Inspired by spy and detective movies, SentinelIQ is an AI-powered forensics analysis platform for immersive crime scene investigation. Users can upload evidence files, and the tool performs a comprehensive analysis using advanced machine learning techniques. It combines a modern React frontend with a Python FastAPI backend to provide real-time forensic analysis across multiple evidence types.
It also runs as a Claude Desktop MCP server that can analyze crime scenes and produce forensic reports from uploaded evidence.
- Crime Scene Analysis - Uses Ollama's LLaVA model for detailed visual analysis
- Fingerprint Analysis - Predicts blood group from fingerprint images using TensorFlow
- Audio Transcription - Transcribes audio evidence using Faster Whisper
- Suspect Identification - Matches suspects against a global database
- Comprehensive Reporting - Generates integrated forensic reports combining all analyses
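The comprehensive report combines the individual analyses into one document. A minimal sketch of what that aggregation step might look like (the class and field names here are illustrative, not the project's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceFinding:
    """One analysis result (e.g. scene description, blood group, transcript)."""
    evidence_type: str  # e.g. "crime_scene", "fingerprint", "audio", "suspect"
    summary: str

@dataclass
class ForensicReport:
    case_id: str
    findings: list = field(default_factory=list)

    def add(self, finding: EvidenceFinding) -> None:
        self.findings.append(finding)

    def render(self) -> str:
        # Flatten all per-evidence findings into one integrated report.
        lines = [f"Forensic report for case {self.case_id}"]
        for f in self.findings:
            lines.append(f"- [{f.evidence_type}] {f.summary}")
        return "\n".join(lines)

report = ForensicReport(case_id="CASE-001")
report.add(EvidenceFinding("fingerprint", "Predicted blood group: O+"))
report.add(EvidenceFinding("audio", "Transcript: 'meet me at the docks'"))
print(report.render())
```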
- React 19.2.4
- TypeScript 5.9.3
- Vite 8.0.1 (build tool)
- TailwindCSS 4.2.2
- Framer Motion 12.38.0 (animations)
- Lucide React 0.577.0 (icons)
- ESLint 9.39.4
- FastAPI (web framework)
- Uvicorn (ASGI server)
- TensorFlow (neural networks)
- PyTorch & Torchvision (deep learning)
- Faster Whisper (audio transcription)
- Transformers (LLM models)
- Scikit-learn & SciPy (ML utilities)
- Pillow (image processing)
```
ai-forensics-react/
├── src/                        # React frontend source
│   ├── main.tsx                # Entry point
│   ├── App.tsx                 # Main app component with state management
│   ├── App.css
│   ├── index.css
│   ├── components/
│   │   ├── HeroSection.tsx      # Upload interface
│   │   ├── AnalyzingSection.tsx # Live analysis progress
│   │   └── ReportSection.tsx    # Results display
│   └── assets/
├── backend/                    # Python backend
│   ├── app.py                  # FastAPI server (port 5500)
│   ├── image-a.py              # Crime scene & report analysis
│   ├── fingerprint.py          # Blood group prediction
│   ├── audio_agent.py          # Audio transcription
│   ├── suspect-identifying.py  # Suspect matching
│   ├── test_user_code.py
│   ├── requirements.txt        # Python dependencies
│   ├── fingerprint_bloodgroup_classifier_attention.h5  # Trained model
│   └── venv/                   # Virtual environment (to be created)
├── public/                     # Static assets
├── package.json                # Frontend dependencies
├── vite.config.ts              # Vite configuration
├── tsconfig.json               # TypeScript configuration
└── README.md
```
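Each backend module in the tree above handles one evidence type. A hypothetical sketch of how `app.py` might route an upload to the matching module (the dispatch table and function are illustrative, not the project's actual routing logic):

```python
# Hypothetical dispatch table mirroring the backend modules listed above.
ANALYZER_MODULES = {
    "crime_scene": "image-a.py",
    "fingerprint": "fingerprint.py",
    "audio": "audio_agent.py",
    "suspect": "suspect-identifying.py",
}

def module_for(evidence_type: str) -> str:
    """Return the backend module responsible for an evidence type."""
    try:
        return ANALYZER_MODULES[evidence_type]
    except KeyError:
        raise ValueError(f"unsupported evidence type: {evidence_type!r}")

print(module_for("fingerprint"))  # fingerprint.py
```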
- Node.js 16+ and npm
- Python 3.8+
- Git
1. Navigate to the project root:

   ```bash
   cd ai-forensics-react
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Navigate to the backend directory:

   ```bash
   cd backend
   ```

4. Create a Python virtual environment:

   ```bash
   python -m venv venv
   ```

5. Activate the virtual environment:

   Windows (PowerShell):

   ```powershell
   .\venv\Scripts\Activate.ps1
   ```

   Windows (CMD):

   ```cmd
   .\venv\Scripts\activate.bat
   ```

   macOS/Linux:

   ```bash
   source venv/bin/activate
   ```

6. Install Python dependencies:

   ```bash
   pip install -r requirements.txt
   ```
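Once the virtual environment is active, a quick way to confirm the interpreter meets the Python 3.8+ prerequisite (a small stand-alone check, not part of the project):

```python
import sys

def check_python(minimum=(3, 8)):
    """Fail fast if the active interpreter is older than the required version."""
    if sys.version_info[:2] < minimum:
        raise RuntimeError(
            f"Python {minimum[0]}.{minimum[1]}+ required, "
            f"found {sys.version_info.major}.{sys.version_info.minor}"
        )
    return True

print(check_python())  # True on a supported interpreter
```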
Terminal 1 - Start the FastAPI backend:

```bash
cd backend
python app.py
```

(Backend runs on http://localhost:5500)

Terminal 2 - Start the React frontend:

```bash
npm run dev
```

(Frontend runs on http://localhost:5173)

- Access the application: open http://localhost:5173 in your browser.
```bash
npm run dev      # Start Vite development server
npm run build    # Build for production (TypeScript + Vite)
npm run lint     # Run ESLint
npm run preview  # Preview production build
```

This project integrates with Claude Desktop via an MCP (Model Context Protocol) server to perform advanced AI-powered forensic analysis.
Prerequisites:
- Claude Desktop application must be installed.
Configuration:

1. Open Claude Desktop's configuration file, `claude_desktop_config.json` (on Windows: `%APPDATA%\Claude\`; on macOS: `~/Library/Application Support/Claude/`).
2. Locate the `mcpServers` section, or add it if it is missing.
3. Add an entry that launches the project's MCP server so Claude can access the project's context for analysis.
4. Save the configuration file and restart Claude Desktop.

Example JSON configuration (the server name and entry-point script below are illustrative; adjust the path to your checkout and refer to the Claude Desktop documentation for the exact schema):

```json
{
  "mcpServers": {
    "sentineliq": {
      "command": "python",
      "args": ["C:\\binary-frontend\\Binary2\\ai-forensics-react\\backend\\app.py"]
    }
  }
}
```
Once configured, you can leverage Claude's capabilities to interact with and analyze the project's data.