yuragi — LLM Confidence Fragility Analyzer. Perturbation-driven hallucination detection with workshop-grade benchmarks on real data (TruthfulQA, n=412, ensemble AUC 0.73; TriviaQA, n=200, confidence-inversion AUC 0.75).
python nlp cli machine-learning evaluation stress-testing psychology uncertainty-quantification ai-safety confidence explainability model-testing confidence-calibration llm prompt-engineering llm-evaluation hallucination-detection perturbation-testing
Updated May 6, 2026 · Python
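
A minimal sketch of the general idea behind perturbation-driven confidence fragility scoring: paraphrase or lightly corrupt a prompt, re-query the model, and treat low agreement across the variants as a hallucination signal. All names below (`perturb`, `fragility_score`, the dummy `answer_fn`) are illustrative assumptions and are not yuragi's actual API or CLI.

```python
"""Illustrative sketch of perturbation-based confidence fragility scoring.
Not yuragi's implementation; a stand-in answer function replaces a real LLM call."""
import random
from collections import Counter
from typing import Callable, List


def perturb(prompt: str, n: int = 5, seed: int = 0) -> List[str]:
    """Generate light surface perturbations of a prompt (here: single word drops)."""
    rng = random.Random(seed)
    words = prompt.split()
    variants = []
    for _ in range(n):
        drop = rng.randrange(len(words)) if len(words) > 1 else -1
        variants.append(" ".join(w for i, w in enumerate(words) if i != drop))
    return variants


def fragility_score(prompt: str, answer_fn: Callable[[str], str], n: int = 5) -> float:
    """1 minus the agreement rate of the modal answer across the original and perturbed prompts.
    Higher scores mean the answer flips under small input changes, i.e. it is fragile."""
    answers = [answer_fn(p) for p in [prompt] + perturb(prompt, n)]
    top_count = Counter(answers).most_common(1)[0][1]
    return 1.0 - top_count / len(answers)


if __name__ == "__main__":
    # Dummy answerer standing in for a real LLM call; it flips based on prompt length.
    flaky = lambda p: "Paris" if len(p) % 2 == 0 else "Lyon"
    print(f"fragility = {fragility_score('What is the capital of France?', flaky):.2f}")
```

In a real pipeline, `answer_fn` would wrap an LLM call, and the fragility score (possibly combined with other signals in an ensemble) would be thresholded or fed to a classifier to flag likely hallucinations, which is how AUC figures like those quoted above would be computed.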