🤖 AI Summary
This work addresses a critical limitation in existing approaches to enhancing large language models' reasoning capabilities through negative samples: the common practice of treating all incorrect responses uniformly, without regard to their quality. To remedy this, the authors propose Plausible Negative Samples (PNS), a method that balances structural plausibility against factual incorrectness when constructing negative examples for preference learning. Leveraging inverse reinforcement learning, PNS trains a generator with a composite reward that integrates format compliance, answer inaccuracy, reward model scores, and chain-of-thought quality to produce high-quality negative samples. Extensive experiments across seven mathematical reasoning benchmarks and three backbone models demonstrate the effectiveness of PNS, showing consistent improvements over existing negative sample synthesis techniques and an average gain of 2.03% over standard reinforcement learning training.
📝 Abstract
Learning from negative samples holds great promise for improving Large Language Model (LLM) reasoning capability, yet existing methods treat all incorrect responses as equally informative, overlooking the crucial role of sample quality. To address this, we propose Plausible Negative Samples (PNS), a method that synthesizes high-quality negative samples exhibiting the expected format and structural coherence while ultimately yielding incorrect answers. PNS trains a dedicated model via reverse reinforcement learning (RL), guided by a composite reward combining format compliance, accuracy inversion, reward model assessment, and chain-of-thought evaluation, to generate responses nearly indistinguishable from correct solutions. We further validate PNS as a plug-and-play data source for preference optimization across three backbone models on seven mathematical reasoning benchmarks. Results demonstrate that PNS consistently outperforms other negative sample synthesis methods, achieving an average improvement of 2.03% over RL-trained models.
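The composite reward described in the abstract combines four signals: format compliance, accuracy inversion (rewarding *incorrect* final answers), a reward model score, and chain-of-thought quality. The paper does not specify how these are aggregated; the sketch below is a minimal illustrative assumption using an equal-weight linear combination, with all function names, weights, and score ranges hypothetical.

```python
def composite_reward(format_ok: bool, answer_correct: bool,
                     rm_score: float, cot_score: float,
                     weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Illustrative composite reward for synthesizing plausible negatives.

    format_ok      -- response follows the expected output format
    answer_correct -- final answer matches the reference; note the
                      inversion: incorrect answers are rewarded
    rm_score       -- reward-model plausibility score, assumed in [0, 1]
    cot_score      -- chain-of-thought quality score, assumed in [0, 1]
    weights        -- hypothetical equal weighting of the four terms
    """
    w_fmt, w_acc, w_rm, w_cot = weights
    r_format = 1.0 if format_ok else 0.0
    r_inverted = 0.0 if answer_correct else 1.0  # accuracy inversion
    return (w_fmt * r_format + w_acc * r_inverted
            + w_rm * rm_score + w_cot * cot_score)

# A well-formatted, coherent response with a *wrong* final answer
# scores higher than the same response with a correct answer:
print(composite_reward(True, False, 0.9, 0.8))  # 0.925 (plausible negative)
print(composite_reward(True, True, 0.9, 0.8))   # 0.675 (correct: down-weighted)
```

Under this weighting, the highest-reward responses are exactly those the abstract targets: structurally coherent, well-formatted, highly plausible to a reward model, yet ultimately incorrect.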