Simple LLM service identification - translate IP:Port to Ollama, vLLM, LiteLLM, or 30+ other AI services in seconds
-
Updated
Feb 28, 2026 - Go
Simple LLM service identification - translate IP:Port to Ollama, vLLM, LiteLLM, or 30+ other AI services in seconds
LLM Attack Testing Toolkit is a structured methodology and mindset framework for testing Large Language Model (LLM) applications against logic abuse, prompt injection, jailbreaks, and workflow manipulation.
Security scanner for local LLMs scanning LLM vulnerabilities including jailbreaks, prompt injection, training data leakage, and adversarial abuse
🔍 Enhance local LLM security by testing for vulnerabilities like prompt injection, model inversion, and data leakage with this robust toolkit.
Add a description, image, and links to the llm-pentesting topic page so that developers can more easily learn about it.
To associate your repository with the llm-pentesting topic, visit your repo's landing page and select "manage topics."