Official codebase for the paper
[1] Provable concept learning for interpretable predictions using variational inference,
Taeb A., Ruggeri N., Schnuck C., Yang F.
(arXiv preprint)
We present CLAP, an inherently interpretable prediction model.
Its VAE-based architecture discovers and disentangles relevant concepts, encodes them in the latent space,
and exploits them in a simple, concurrently trained classifier.
The final architecture yields provably interpretable, predictive and minimal concepts that assist practitioners
in making informed predictions.
To start training CLAP on a dataset:
- download the desired dataset and place it in the `./data` directory. Alternatively, change the default data directory specified at `src.data.utils.DATA_DIR`
- run the terminal command for the desired dataset. The datasets available are `MPI`, `Shapes3D`, `SmallNORB`, `ChestXRay`, `PlantVillage` [1].

For example, to train CLAP on the MPI dataset, the terminal command is

```shell
python main.py --dataset MPI
```

More options for training, e.g. latent space dimension and regularization parameters, are specified inside `main.py`.
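The setup steps above can be sketched as a short shell session. The directory layout follows the default `DATA_DIR` described here; the download location and dataset name in the comments are illustrative, and no `main.py` options beyond `--dataset` are shown:

```shell
# Create the default data directory at the repository root
mkdir -p ./data

# After downloading a dataset, move it into place, e.g. (path is illustrative):
# mv ~/Downloads/MPI ./data/MPI

# Then launch training on that dataset:
# python main.py --dataset MPI
```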
