LLM Attack Testing Toolkit is a structured methodology and mindset framework for testing Large Language Model (LLM) applications against logic abuse, prompt injection, jailbreaks, and workflow manipulation.
Topics: offensive-security, security-research, prompt-injection, llm-security, agent-security, ai-red-teaming, adversarial-ai, rag-security, llm-pentesting, tool-injection, jailbreak-testing, logic-abuse, ai-workflow-testing, context-leakage, ai-application-security