Rethinking Multiple-Choice Questions for RLVR: Unlocking Potential via Distractor Design

📅 2026-03-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the vulnerability of multiple-choice questions (MCQs) in reinforcement learning with verifiable rewards (RLVR) to reward hacking, such as random guessing, which undermines model reasoning. The authors systematically investigate how answer-option design influences RLVR performance and propose Iterative Distractor Curation (IDC), a framework that leverages expert-guided structured generation and filtering to proactively construct high-quality distractors. This suppresses shortcut reasoning while preserving the contrastive signal essential for deep reasoning. Experiments reveal that mismatched option counts between training and testing degrade performance, whereas strong distractors effectively mitigate random guessing. Across multiple benchmarks, IDC significantly improves both distractor quality and RLVR training efficacy, outperforming both the original MCQ format and existing methods that reformulate questions as open-ended prompts.
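
The generate-and-filter loop described above lends itself to a compact reading in code. Below is a minimal Python sketch of such an iterative curation loop; the function names (`curate_distractors`, `generate`, `is_plausible_wrong`), the candidate budget, and the stopping rule are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of an iterative distractor-curation loop in the spirit of
# IDC. All names and criteria here are illustrative assumptions, not the
# authors' implementation: `generate` stands in for an LLM proposer and
# `is_plausible_wrong` for a filter that keeps options that are incorrect
# yet hard to eliminate.
import random
from typing import Callable, List

def curate_distractors(
    question: str,
    answer: str,
    generate: Callable[[str, str, int], List[str]],
    is_plausible_wrong: Callable[[str, str, str], bool],
    k: int = 3,
    max_rounds: int = 4,
) -> List[str]:
    """Propose candidates, keep survivors of the filter, and iterate
    until k strong distractors are collected or the budget runs out."""
    kept: List[str] = []
    for _ in range(max_rounds):
        for cand in generate(question, answer, 2 * k):
            if (cand != answer and cand not in kept
                    and is_plausible_wrong(question, answer, cand)):
                kept.append(cand)
        if len(kept) >= k:
            break
    return kept[:k]

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end; a real pipeline would
    # call a generator model and a judge model here.
    pool = ["Paris", "Lyon", "Marseille", "Berlin", "Madrid"]
    gen = lambda q, a, n: random.sample(pool, min(n, len(pool)))
    judge = lambda q, a, c: c != a
    print(curate_distractors("What is the capital of France?", "Paris", gen, judge))
```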

📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) significantly enhances the reasoning capabilities of Large Language Models. When applied to RLVR, Multiple-Choice Questions (MCQs) offer a scalable source of verifiable data but risk inducing reward hacking, where models shortcut reasoning via random guessing or simple elimination. Current approaches often mitigate this by converting MCQs to open-ended formats, thereby discarding the contrastive signal provided by expert-designed distractors. In this work, we systematically investigate the impact of option design on RLVR. Our analysis highlights two primary insights: (1) Mismatches in option counts between training and testing degrade performance. (2) Strong distractors effectively mitigate random guessing, enabling effective RLVR training even with 2-way questions. Motivated by these findings, we propose Iterative Distractor Curation (IDC), a framework that actively constructs high-quality distractors to block elimination shortcuts and promote deep reasoning. Experiments on various benchmarks demonstrate that our method effectively enhances distractor quality and yields significant gains in RLVR training compared to the original data.
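
The abstract's two insights reduce to simple arithmetic about the guessing baseline under a binary exact-match reward. The toy computation below (illustrative numbers, not results from the paper) shows why weak distractors invite reward hacking: blind guessing among N options already earns 1/N in expectation, and every trivially eliminable distractor raises that floor, whereas a 2-way item with a single strong distractor offers no elimination shortcut.

```python
# Illustrative guessing baselines under a binary exact-match MCQ reward
# (toy numbers, not results from the paper): blind guessing among N
# options earns 1/N in expectation, and every distractor a model can
# trivially eliminate raises that floor toward 1.
def expected_guess_reward(n_options: int, n_eliminable: int = 0) -> float:
    """Expected reward of uniform guessing after discarding the
    obviously-wrong options."""
    return 1.0 / (n_options - n_eliminable)

print(expected_guess_reward(4))     # 0.25: 4-way, blind guess
print(expected_guess_reward(4, 2))  # 0.50: 4-way with two weak distractors
print(expected_guess_reward(2))     # 0.50: 2-way, so the distractor must be strong
```
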
Problem

Research questions and friction points this paper is trying to address.

Multiple-Choice Questions
Reinforcement Learning with Verifiable Rewards
Reward Hacking
Distractor Design
Reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Iterative Distractor Curation
Reinforcement Learning with Verifiable Rewards
Multiple-Choice Questions
Distractor Design
Reward Hacking
👥 Authors
Xu Guo (Shanghai AI Laboratory)
Qiming Ge (Shanghai AI Laboratory)
Jian Tong (Shanghai AI Laboratory)
Kedi Chen (Shanghai AI Laboratory)
Jin Zhang (Shanghai AI Laboratory)
Xiaogui Yang (Shanghai AI Laboratory)
Xuan Gao (Shanghai AI Laboratory)
Haijun Lv (Shanghai AI Laboratory)
Zhihui Lu (Fudan University)
Yicheng Zou (Shanghai AI Laboratory)
Qipeng Guo (Fudan University)