# Single-positive Multi-label Learning with Label Cardinality [TMLR 2025] [[Paper](https://openreview.net/forum?id=XEPPXH2nKu)]
This is the official repository for the paper *Single-positive Multi-label Learning with Label Cardinality*, published at TMLR 2025.
Follow the steps below to prepare datasets and run experiments with our implementation.
We use the same benchmark datasets as prior work. You can follow the dataset download, formatting, and single-positive label generation instructions from the EM repository.
For convenience, the Python scripts required for preprocessing are included in our `preproc` folder (copied from the EM repository).
Once the datasets are prepared, follow the instructions below to run our methods.
To train and evaluate a model, run:
```
python main.py -d {DATASET} -l {LOSS} -g {GPU} -s {PYTORCH_SEED} -m {IC_MODE}
```

- `{DATASET}`: Dataset to use. Default: `pascal`. Options: `pascal`, `coco`, `nuswide`, or `cub`.
- `{LOSS}`: Loss function / method for training. Default: `ic_loss`. Options: `bce`, `an`, `ic_loss`, or `cs_loss`.
- `{GPU}`: GPU index. Default: `0`.
- `{PYTORCH_SEED}`: PyTorch random seed. Default: `0`.
- `{IC_MODE}`: Mode for determining the instance cardinality (only used when `LOSS=ic_loss`). Default: `true`. Options: `true` or `estimate`.
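For context, instance cardinality here refers to the number of positive labels per training example. In the single-positive setting only one positive label is observed per instance, so the true cardinality must either be supplied (`true`) or inferred (`estimate`). The sketch below illustrates this distinction; it is not the paper's implementation, and the dataset-mean estimation heuristic is purely our illustrative assumption:

```python
import numpy as np

# Full multi-label ground truth: rows are instances, columns are classes.
Y_full = np.array([
    [1, 0, 1, 1, 0],
    [0, 1, 0, 0, 0],
    [1, 1, 0, 1, 1],
])

# True instance cardinality: number of positive labels per instance.
true_card = Y_full.sum(axis=1)  # [3, 1, 4]

# Single-positive annotation: keep exactly one observed positive per
# instance, so the observed cardinality is always 1 and is uninformative.
rng = np.random.default_rng(0)
Y_sp = np.zeros_like(Y_full)
for i, row in enumerate(Y_full):
    Y_sp[i, rng.choice(np.flatnonzero(row))] = 1

# Illustrative estimate: assign every instance the dataset-level mean
# cardinality (8/3 here), assuming that statistic is available.
est_card = np.full(len(Y_full), true_card.mean())

print(true_card, est_card)
```

With `-m true` the loss can use the per-instance counts directly; with `-m estimate` it must fall back on an estimated quantity such as the one above.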
Example: train and evaluate on the PASCAL VOC dataset using `ic_loss` with true instance cardinalities:

```
python main.py -d pascal -l ic_loss -m true
```

If you find our work useful in your research, please consider citing our paper:

```bibtex
@article{gharib2025singlepositive,
  title={Single-positive Multi-label Learning with Label Cardinality},
  author={Shayan Gharib and Pierre-Alexandre Murena and Arto Klami},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2025},
  url={https://openreview.net/forum?id=XEPPXH2nKu},
  note={Expert Certification}
}
```

Our code is mainly built upon EM and ROLE, which also serve as baselines in our experiments.
We thank the authors of these works for open-sourcing their implementations, which facilitated the development of our SPMLL methods and ensured fair comparisons.