Aletheia: What Makes RLVR For Code Verifiers Tick?

📅 2026-01-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of effective evaluation mechanisms for non-executable outputs in existing code generation methods, which hinders the application of reinforcement learning verifiers. To this end, we propose Aletheia, a testbed that leverages executable feedback to systematically evaluate key components of the Reinforcement Learning from Verifiable Rewards (RLVR) framework—such as intermediate reasoning traces, negative example learning, and online policy training—across models and under distribution shifts. Our findings reveal that smaller-scale verifiers rely more heavily on online policy training, whereas larger-scale verifiers benefit significantly from reasoning trace training, offering insights for streamlining the RLVR pipeline. The Aletheia platform is publicly released to support controlled and robust evaluation of code verifiers.

📝 Abstract
Multi-domain thinking verifiers trained via Reinforcement Learning from Verifiable Rewards (RLVR) are a prominent fixture of the Large Language Model (LLM) post-training pipeline, owing to their ability to robustly rate and rerank model outputs. However, the adoption of such verifiers for code generation has been comparatively sparse, with execution feedback constituting the dominant signal. Nonetheless, code verifiers remain valuable for judging model outputs in scenarios where execution feedback is hard to obtain, and are a potentially powerful addition to the code generation post-training toolbox. To this end, we create and open-source Aletheia, a controlled testbed that enables execution-grounded evaluation of code verifiers' robustness across disparate policy models and covariate shifts. We examine components of the RLVR-based verifier training recipe widely credited for its success: (1) intermediate thinking traces, (2) learning from negative samples, and (3) on-policy training. While experiments show the optimality of RLVR, we uncover important opportunities to simplify the recipe. In particular, despite code verification exhibiting positive training- and inference-time scaling, on-policy learning stands out as the key component at small verifier sizes, and thinking-based training emerges as the most important component at larger scales.
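The abstract's notion of "execution-grounded evaluation" can be made concrete with a minimal sketch (not code from the paper; all function names, the `solve` entry point, and the accuracy metric are illustrative assumptions): candidate programs are labeled by actually running them against test cases, and a verifier's pass/fail verdicts are then scored against those executable ground-truth labels.

```python
# Minimal sketch of execution-grounded verifier evaluation.
# Hypothetical names throughout: execute_tests, verifier_accuracy,
# and the convention that each candidate defines a function `solve`.

def execute_tests(candidate_src: str, tests: list) -> bool:
    """Ground-truth label: does the candidate pass every test case?"""
    namespace: dict = {}
    try:
        exec(candidate_src, namespace)  # define the candidate's `solve`
        solve = namespace["solve"]
        return all(solve(inp) == out for inp, out in tests)
    except Exception:
        return False  # crashes or missing `solve` count as failures

def verifier_accuracy(verdicts: list, candidates: list, tests: list) -> float:
    """Fraction of candidates where the verifier's pass/fail verdict
    matches the execution-derived label."""
    labels = [execute_tests(src, tests) for src in candidates]
    correct = sum(v == y for v, y in zip(verdicts, labels))
    return correct / len(labels)

# Toy example: two candidate implementations of an addition task.
tests = [((1, 2), 3), ((0, 5), 5)]
good = "def solve(xy):\n    return xy[0] + xy[1]"
bad = "def solve(xy):\n    return xy[0] - xy[1]"
print(verifier_accuracy([True, True], [good, bad], tests))  # → 0.5
```

A testbed like Aletheia can vary which policy model produced `candidates` (and under what distribution shift) while holding this execution-derived labeling fixed, isolating the verifier's robustness from the labeling mechanism.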
Problem

Research questions and friction points this paper is trying to address.

- code verifiers
- Reinforcement Learning from Verifiable Rewards
- execution feedback
- robustness evaluation
- LLM post-training
Innovation

Methods, ideas, or system contributions that make the work stand out.

- RLVR
- code verifiers
- on-policy training
- thinking traces
- covariate shift