🤖 AI Summary
This work addresses the challenge of modeling ordinal dependencies in ordinal regression and ranking tasks by proposing a novel reinforcement learning framework that integrates regression and Learning-to-Rank. It introduces reinforcement learning to ordinal ranking for the first time, featuring a unified objective function and a rank-aware, verifiable reward mechanism that enables joint optimization of both tasks. To enhance policy exploration, the framework incorporates a Response Mutation Operation (RMO). Experimental results on three benchmark datasets demonstrate significant improvements in both ranking accuracy and regression precision, substantiating the method's effectiveness and novelty.
📝 Abstract
Ordinal regression and ranking are challenging due to inherent ordinal dependencies that conventional methods struggle to model. We propose Ranking-Aware Reinforcement Learning (RARL), a novel RL framework that explicitly learns these relationships. At its core, RARL features a unified objective that synergistically integrates regression and Learning-to-Rank (L2R), enabling mutual improvement between the two tasks. This is driven by a ranking-aware verifiable reward that jointly assesses regression precision and ranking accuracy, facilitating direct model updates via policy optimization. To further enhance training, we introduce Response Mutation Operations (RMO), which inject controlled noise to improve exploration and prevent stagnation at saddle points. The effectiveness of RARL is validated through extensive experiments on three distinct benchmarks.
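To make the ranking-aware verifiable reward concrete, here is a minimal sketch of how a reward could jointly score regression precision and pairwise ranking accuracy. The function name, the linear combination, and the weight `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
def rank_aware_reward(preds, labels, alpha=0.5):
    """Hypothetical reward combining regression precision and ranking accuracy.

    preds, labels: predicted / true ordinal scores for one batch.
    alpha: assumed weight trading off the two terms (not from the paper).
    """
    n = len(preds)
    # Regression term: mean absolute error, normalized into a [0, 1] reward.
    max_err = max(abs(p - y) for p, y in zip(preds, labels)) or 1.0
    reg = 1.0 - sum(abs(p - y) for p, y in zip(preds, labels)) / (n * max_err)
    # Ranking term: fraction of concordant pairs among pairs with distinct labels.
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)
             if labels[i] != labels[j]]
    conc = sum((preds[i] - preds[j]) * (labels[i] - labels[j]) > 0
               for i, j in pairs)
    rank = conc / len(pairs) if pairs else 1.0
    # Unified scalar reward usable for policy optimization.
    return alpha * reg + (1 - alpha) * rank
```

A perfectly ordered, exact prediction earns the maximum reward of 1.0, while an inverted ranking is penalized through the concordance term; this is the sense in which a single verifiable reward can drive both tasks at once.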