🤖 AI Summary
This work addresses the vulnerability of multiple-choice questions (MCQs) in reinforcement learning with verifiable rewards (RLVR) to reward hacking—such as random guessing—which undermines model reasoning. The authors systematically investigate how answer option design influences RLVR performance and propose Iterative Distractor Curation (IDC), a framework that uses structured generation and filtering to proactively construct high-quality distractors. This approach suppresses shortcut reasoning while preserving the contrastive signal essential for deep reasoning. The analysis reveals that mismatched option counts between training and testing degrade performance, whereas strong distractors effectively mitigate random guessing. Across multiple benchmarks, IDC significantly improves both distractor quality and RLVR training efficacy, outperforming both the original MCQ format and existing methods that reformulate questions as open-ended prompts.
📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) significantly enhances the reasoning capabilities of Large Language Models. When applied to RLVR, Multiple-Choice Questions (MCQs) offer a scalable source of verifiable data but risk inducing reward hacking, where models shortcut reasoning via random guessing or simple elimination. Current approaches often mitigate this by converting MCQs to open-ended formats, thereby discarding the contrastive signal provided by expert-designed distractors. In this work, we systematically investigate the impact of option design on RLVR. Our analysis highlights two primary insights: (1) Mismatches in option counts between training and testing degrade performance. (2) Strong distractors effectively mitigate random guessing, enabling effective RLVR training even with 2-way questions. Motivated by these findings, we propose Iterative Distractor Curation (IDC), a framework that actively constructs high-quality distractors to block elimination shortcuts and promote deep reasoning. Experiments on various benchmarks demonstrate that our method effectively enhances distractor quality and yields significant gains in RLVR training compared to the original data.
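The abstract describes IDC only at a high level: iteratively construct candidate distractors, then keep only the strong ones that block elimination shortcuts. The following minimal sketch illustrates that generate-then-filter loop. All function names, the deterministic numeric "generator", and the closeness-based strength filter are hypothetical stand-ins for illustration, not the paper's actual implementation (which would use an LLM proposer and a learned or rule-based filter).

```python
def generate_candidates(answer, round_idx, k=6):
    # Hypothetical stand-in for an LLM proposer: deterministic numeric
    # perturbations of the correct answer, varied per curation round.
    base = round_idx * k
    return [answer + ((-1) ** i) * (base + i + 1) for i in range(k)]

def is_strong(distractor, answer):
    # Hypothetical filter: a "strong" distractor is close enough to the
    # answer that surface-level elimination is unreliable, but not equal
    # to it (so the reward signal stays verifiable).
    return distractor != answer and abs(distractor - answer) <= 10

def curate_distractors(answer, n_options=4, max_rounds=5):
    """Iterate generation and filtering until enough strong distractors
    survive to fill an n_options-way question."""
    kept = []
    for round_idx in range(max_rounds):
        for cand in generate_candidates(answer, round_idx):
            if is_strong(cand, answer) and cand not in kept:
                kept.append(cand)
            if len(kept) == n_options - 1:
                return kept
    return kept

# For answer 4 and a 4-way question, the loop returns three strong
# distractors from the first round of candidates.
options = curate_distractors(answer=4, n_options=4)
```

The design choice worth noting is that the filter, not the generator, carries the quality guarantee: weak candidates are discarded and generation simply reruns, which is what makes the curation iterative.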