Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a dominant paradigm for enhancing the reasoning of Large Language Models (LLMs), yet its reliance on external verifiers limits its scalability. Recent findings suggest that RLVR primarily functions by eliciting latent capabilities, motivating the development of verifier-free algorithms. However, in such settings, standard methods like Group Relative Policy Optimization (GRPO) face a critical challenge: destructive gradient variance that often leads to training collapse.
To address this issue, we introduce Verifier-Independent Curriculum Reinforcement Learning (VI-CuRL), a framework that leverages the model's intrinsic confidence to construct a curriculum independent of external verifiers. By prioritizing high-confidence samples, VI-CuRL effectively manages the bias-variance trade-off, specifically targeting the reduction of action and problem variance. We provide a rigorous theoretical analysis, proving that our estimator guarantees asymptotic unbiasedness. Empirically, VI-CuRL promotes stability and consistently outperforms verifier-independent baselines across six challenging benchmarks.
VI-CuRL addresses the high instability of verifier-independent RL by introducing a dynamic, intrinsic curriculum. Instead of treating all training samples equally or relying on external verifiers, VI-CuRL filters samples based on the model's intrinsic confidence (defined as length-normalized negative entropy).
```mermaid
graph TD
    A["Start Training Step"] --> B{"Generate Samples"}
    B --> C["Compute Intrinsic Confidence"]
    C --> D["Calculate Dynamic Threshold τ"]
    D --> E["Filter Low-Confidence Samples"]
    E --> F["Retain Top-β Fraction"]
    F --> G["PPO Update on Filtered Batch"]
    G --> H["Anneal Retention Rate β"]
    H --> A
```
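To make the "Compute Intrinsic Confidence" step concrete, here is a minimal sketch of a length-normalized negative-entropy score computed from per-token next-token distributions. The function name `intrinsic_confidence` and the input format are illustrative, not the repository's actual API.

```python
import math

def intrinsic_confidence(token_dists):
    """Length-normalized negative entropy of a sampled response.

    `token_dists` holds, for each generated token, the next-token
    probability distribution (a sequence of probabilities summing to 1).
    Higher (less negative) values mean the model decoded more confidently.
    """
    total_entropy = 0.0
    for dist in token_dists:
        # Shannon entropy of this decoding step (nats); skip zero-probability entries.
        total_entropy += -sum(p * math.log(p) for p in dist if p > 0.0)
    # Negate and normalize by response length.
    return -total_entropy / len(token_dists)

# A peaked (confident) distribution scores higher than a flat one.
confident = intrinsic_confidence([[0.97, 0.01, 0.01, 0.01]] * 3)
uncertain = intrinsic_confidence([[0.25, 0.25, 0.25, 0.25]] * 3)
assert confident > uncertain
```

For a uniform distribution over a vocabulary of size V, the score is simply -log V, the lowest possible confidence at that length.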
- **Confidence-Based Filtering**: We employ a dynamic quantile threshold $\tau_t$, set at the $(1-\beta_t)$-quantile of the batch's confidence scores, to retain the top-$\beta_t$ fraction of samples with the highest confidence.
- **Curriculum Schedule**: The retention rate $\beta_t$ starts small (focusing on "easy", high-confidence samples) and anneals to 1 (the full distribution) over the course of training.
- **Variance Reduction**: By focusing on high-confidence samples, the algorithm reduces both Action Variance and Problem Variance, preventing the destructive noise that typically causes model collapse in verifier-free settings.
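The filtering and annealing steps above can be sketched as follows. The helper names (`curriculum_filter`, `anneal_beta`) and the linear schedule are assumptions for illustration; the repository may use a different schedule.

```python
def curriculum_filter(batch, confidences, beta_t):
    """Retain the top-beta_t fraction of samples by intrinsic confidence.

    Equivalent to thresholding at the (1 - beta_t)-quantile of the
    batch's confidence scores.
    """
    k = max(1, round(beta_t * len(batch)))
    # Indices ranked by confidence, highest first.
    order = sorted(range(len(batch)), key=lambda i: confidences[i], reverse=True)
    keep = sorted(order[:k])  # preserve original batch order
    return [batch[i] for i in keep]

def anneal_beta(step, total_steps, beta_start=0.2):
    """Linearly anneal the retention rate from beta_start up to 1
    (assumed schedule; the paper may anneal differently)."""
    frac = min(step / total_steps, 1.0)
    return beta_start + (1.0 - beta_start) * frac

# With beta_t = 0.5, only the two most confident of four samples survive.
kept = curriculum_filter(["a", "b", "c", "d"], [0.9, 0.1, 0.8, 0.3], 0.5)
assert kept == ["a", "c"]
```

By the end of training (`anneal_beta(total_steps, total_steps) == 1.0`), the filter passes the full batch through, matching the "full distribution" endpoint of the schedule.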
This codebase is built upon the verl framework.
1. Clone the repository:

   ```bash
   git clone https://github.com/caixq1996/VI-CuRL.git
   cd VI-CuRL
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```
The main training scripts are located in the `examples/` directory. We provide a simplified, English-commented demo script, `examples/run_vicurl_demo.sh`, to get started quickly.
```bash
# Make sure to install dependencies and activate the environment first
bash examples/run_vicurl_demo.sh --exp-name my_demo_run
```

This script includes a minimal configuration for VI-CuRL with the verifier-free (majority-vote) mode enabled by default.
| Argument | Type | Description |
|---|---|---|
| `--use-curl` | bool | Enable or disable the VI-CuRL curriculum. |
| `--verifier-free` | bool | Switch between verifier-independent and oracle-verifier modes. |
| `--vf-mode` | str | Intrinsic reward mode (`majority_vote` or `entropy`). |
| `--model` | str | Specify the base model (default: `Qwen/Qwen2.5-math-1.5B`). |
If you find this work useful in your research, please consider citing:
```bibtex
@article{cai2026vicurl,
  title={VI-CuRL: Stabilizing Verifier-Independent RL Reasoning via Confidence-Guided Variance Reduction},
  author={Cai, Xin-Qiang and Sugiyama, Masashi},
  journal={arXiv preprint arXiv:2602.12579},
  year={2026}
}
```