# perturbation-testing

Here is 1 public repository matching this topic...

yuragi: LLM Confidence Fragility Analyzer. Perturbation-driven hallucination detection, evaluated on real benchmarks at workshop scale (TruthfulQA, n=412, ensemble AUC 0.73; TriviaQA, n=200, confidence-inversion AUC 0.75).

  • Updated May 6, 2026
  • Python
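To make the listed approach concrete, the sketch below illustrates the general idea behind perturbation-driven confidence fragility analysis: score the same question under small surface perturbations and treat high variance in the model's confidence as a fragility signal. This is not yuragi's actual API; the `score_confidence` callable, the perturbation rules, and the threshold are hypothetical placeholders.

```python
"""Minimal sketch of perturbation-driven confidence fragility scoring.

NOTE: This is NOT yuragi's API. `score_confidence`, the perturbation rules,
and the threshold below are hypothetical placeholders for illustration.
"""
from statistics import pstdev
from typing import Callable, List


def perturb(question: str) -> List[str]:
    """Produce simple surface-level perturbations of a question (illustrative rules)."""
    return [
        question,                                # original
        question.lower(),                        # case change
        question.rstrip("?") + ", roughly?",     # softened phrasing
        "Quick question: " + question,           # added preamble
    ]


def fragility_score(question: str,
                    score_confidence: Callable[[str], float]) -> float:
    """Standard deviation of the model's confidence across perturbed variants."""
    confidences = [score_confidence(q) for q in perturb(question)]
    return pstdev(confidences)


def looks_hallucinated(question: str,
                       score_confidence: Callable[[str], float],
                       threshold: float = 0.15) -> bool:
    """Flag an answer as suspect when confidence is fragile under perturbation.

    `threshold` is an arbitrary illustrative value, not a benchmarked setting.
    """
    return fragility_score(question, score_confidence) > threshold


if __name__ == "__main__":
    # Stand-in scorer: a real pipeline would query an LLM for a calibrated
    # confidence (e.g. answer log-probability); here we fake one by hashing.
    def fake_scorer(prompt: str) -> float:
        return (hash(prompt) % 100) / 100.0

    q = "What is the boiling point of water at sea level?"
    print("fragility:", round(fragility_score(q, fake_scorer), 3))
    print("flagged:", looks_hallucinated(q, fake_scorer))
```

A real implementation would derive confidence from token log-probabilities or self-evaluation prompts rather than a stub, and would calibrate the threshold against labeled data.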
