Not All Negative Samples Are Equal: LLMs Learn Better from Plausible Reasoning

πŸ“… 2026-02-03
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses a key limitation in existing approaches that use negative samples to improve large language models’ reasoning: incorrect responses are typically treated uniformly, with no regard for their quality. To remedy this, the authors propose Plausible Negative Samples (PNS), a method that synthesizes negative examples for preference learning that are structurally plausible yet factually incorrect. Using reverse reinforcement learning, PNS trains a dedicated generator with a composite reward that combines format compliance, accuracy inversion, reward model scores, and chain-of-thought quality, producing high-quality negative samples. Experiments across seven mathematical reasoning benchmarks and three backbone models show that PNS consistently outperforms existing negative-sample synthesis techniques, yielding an average gain of 2.03% over standard reinforcement-learning training.

πŸ“ Abstract
Learning from negative samples holds great promise for improving Large Language Model (LLM) reasoning capability, yet existing methods treat all incorrect responses as equally informative, overlooking the crucial role of sample quality. To address this, we propose Plausible Negative Samples (PNS), a method that synthesizes high-quality negative samples exhibiting expected format and structural coherence while ultimately yielding incorrect answers. PNS trains a dedicated model via reverse reinforcement learning (RL) guided by a composite reward combining format compliance, accuracy inversion, reward model assessment, and chain-of-thought evaluation, generating responses nearly indistinguishable from correct solutions. We further validate PNS as a plug-and-play data source for preference optimization across three backbone models on seven mathematical reasoning benchmarks. Results demonstrate that PNS consistently outperforms other negative sample synthesis methods, achieving an average improvement of 2.03% over RL-trained models.
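The abstract describes a composite reward with four terms: format compliance, accuracy inversion, reward model assessment, and chain-of-thought evaluation. A minimal sketch of how such a reward could be combined is below; the component names, weights, and equal-weighted linear form are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class RewardComponents:
    """Per-response scores, each assumed to lie in [0, 1] (assumption)."""
    format_compliance: float   # does the response follow the expected format?
    answer_correct: float      # 1.0 if the final answer matches the gold answer
    reward_model_score: float  # plausibility score from a learned reward model
    cot_quality: float         # chain-of-thought coherence score

def plausible_negative_reward(c: RewardComponents,
                              w=(1.0, 1.0, 1.0, 1.0)) -> float:
    """Hypothetical composite reward for generating plausible negatives.

    Rewards responses that look correct (format, reward model, CoT)
    but whose final answer is wrong: the correctness term is flipped
    via (1 - answer_correct), i.e. "accuracy inversion".
    """
    w_fmt, w_acc, w_rm, w_cot = w
    return (w_fmt * c.format_compliance
            + w_acc * (1.0 - c.answer_correct)  # accuracy inversion
            + w_rm * c.reward_model_score
            + w_cot * c.cot_quality)

# Under this sketch, a well-formatted, plausible-looking response with a
# wrong final answer scores higher than the same response answered correctly.
wrong = plausible_negative_reward(RewardComponents(1.0, 0.0, 0.9, 0.8))
right = plausible_negative_reward(RewardComponents(1.0, 1.0, 0.9, 0.8))
```

Such high-scoring responses would then serve as the rejected side of preference pairs for optimization, per the abstract.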
Problem

Research questions and friction points this paper is trying to address.

negative samples
Large Language Models
reasoning capability
sample quality
plausible reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Plausible Negative Samples
Reverse Reinforcement Learning
Preference Optimization
Chain-of-Thought
Negative Sample Synthesis
πŸ”Ž Similar Papers
No similar papers found.