🤖 AI Summary
Existing reinforcement learning-based post-training methods that score outputs against rubrics often depend on a single reference answer, making them ill-suited for tasks like virtual try-on that admit diverse valid outputs without a definitive ground truth. To address this, this work proposes Implicit Error Counting (IEC), a reference-free approach that constructs a calibrated reward signal by enumerating errors in model outputs and weighting them by severity across multiple task-relevant dimensions. IEC is the first to introduce error enumeration into reference-agnostic reinforcement learning post-training, combining group calibration with implicit score modeling to substantially improve optimization stability. Evaluated with the newly introduced Cascaded Error Counting (CEC) metric, which tracks human preferences with a 60% top-1 rate, the method outperforms Rubrics as Rewards (RaR) across all metrics on the new Mismatch-DressCode (MDressBench) benchmark and matches or surpasses six baselines on six of eight perceptual metrics across VITON-HD and DressCode.
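The severity-weighted error aggregation described above can be sketched as a small scoring function. This is a minimal illustration only: the severity levels, weight values, and names below are assumptions for exposition, not the paper's implementation.

```python
# Hypothetical severity weights; the paper's actual levels and values may differ.
SEVERITY_WEIGHTS = {"minor": 1.0, "moderate": 2.0, "severe": 4.0}

def error_score(errors):
    """Aggregate enumerated errors into a single penalty.

    `errors` is a list of (aspect, severity) pairs, e.g. as emitted by a
    judge model enumerating what a response gets wrong. Higher = worse.
    """
    return sum(SEVERITY_WEIGHTS[severity] for _, severity in errors)
```

For example, a response with one minor fit error and one severe texture error would receive a penalty of 5.0 under these assumed weights.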
📝 Abstract
Reinforcement learning with verifiable rewards (RLVR) and Rubrics as Rewards (RaR) have driven strong gains in domains with clear correctness signals, and even in subjective domains, by synthesizing evaluation criteria from ideal reference answers. But many real-world tasks admit multiple valid outputs and lack the single ideal answer that rubric generation depends on. We identify this reference-free setting as a gap in current post-training methods and propose Implicit Error Counting (IEC) to fill it. Instead of checking what a response gets right against a rubric, IEC enumerates what it gets wrong, applying severity-weighted scores across task-relevant axes and converting them into calibrated per-aspect rewards. We show that naïve explicit enumeration is too noisy for stable optimization, and that two design choices, implicit score emission and group calibration, are necessary to make error counting a reliable reward. As a case study, we validate IEC on virtual try-on (VTO), a domain that is simultaneously too constrained for holistic scoring and too permissive for rubric-based evaluation: subtle garment errors are unacceptable, yet many output variations are correct. We introduce Cascaded Error Counting (CEC) as an evaluation metric, which tracks human preferences well (60% top-1 vs. 30% others), and curate Mismatch-DressCode (MDressBench), a benchmark with maximal attribute mismatch to stress-test reward designs. On MDressBench, IEC outperforms RaR across all metrics (CEC: 5.31 vs. 5.60 on flat references; 5.20 vs. 5.53 on non-flat). On VITON-HD and DressCode, IEC matches or surpasses six baselines on 6 of 8 perceptual metrics. These results suggest that when ideal answers are unavailable, counting errors provides a stronger signal than constructing rubrics.
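The group-calibration step mentioned in the abstract can be sketched as a within-group normalization of negated error scores, in the spirit of group-relative baselines used in RL post-training. This is a hypothetical reconstruction (the function name and the z-normalization choice are assumptions), not the authors' code.

```python
import statistics

def group_calibrated_rewards(error_scores):
    """Convert per-response error scores into group-relative rewards.

    `error_scores` holds one severity-weighted error count per sampled
    response in a group. Lower error score = better response, so scores
    are negated, then z-normalized within the group so rewards are
    comparable across prompts of differing difficulty.
    """
    negated = [-s for s in error_scores]
    mean = statistics.mean(negated)
    std = statistics.pstdev(negated)
    if std == 0:
        # All responses tied: no learning signal for this group.
        return [0.0 for _ in negated]
    return [(x - mean) / std for x in negated]
```

Normalizing within the group rather than against a fixed scale means the reward depends only on relative quality among sampled responses, which is one plausible way to stabilize optimization when raw error counts are noisy.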