🤖 AI Summary
During mathematical reasoning, large language models (LLMs) frequently commit procedural errors such as arithmetic miscalculations, fragile logic, and superficially plausible but invalid reasoning steps.
Method: We propose Generative Adversarial Reasoner (GAR), a novel framework featuring chain-of-thought fragmentation for fine-grained scrutiny and structured discriminative feedback. GAR enables joint policy-level adversarial optimization between a reasoning agent and a discriminator, generating dense, calibrated step-wise reward signals. It integrates policy-based adversarial reinforcement learning, logical slicing scheduling, and multi-objective reward shaping, including teacher distillation, preference alignment, and proof-oriented modeling.
Contribution/Results: GAR significantly improves credit assignment and sample efficiency. On AIME24, it boosts DeepSeek-R1-Distill-Qwen-7B and DeepSeek-R1-Distill-Llama-8B by 7.3 and 10.0 points, respectively, surpassing standard RL post-training baselines across all metrics. Moreover, it supports flexible downstream reasoning customization.
📝 Abstract
Large language models (LLMs) with explicit reasoning capabilities excel at mathematical reasoning yet still commit process errors, such as incorrect calculations, brittle logic, and superficially plausible but invalid steps. In this paper, we introduce Generative Adversarial Reasoner, an on-policy joint training framework designed to enhance reasoning by co-evolving an LLM reasoner and an LLM-based discriminator through adversarial reinforcement learning. A compute-efficient review schedule partitions each reasoning chain into logically complete slices of comparable length, and the discriminator evaluates each slice's soundness with concise, structured justifications. Learning couples complementary signals: the LLM reasoner is rewarded for logically consistent steps that yield correct answers, while the discriminator earns rewards for correctly detecting errors or distinguishing traces in the reasoning process. This produces dense, well-calibrated, on-policy step-level rewards that supplement sparse exact-match signals, improving credit assignment, increasing sample efficiency, and enhancing the overall reasoning quality of LLMs. Across various mathematical benchmarks, the method delivers consistent gains over strong baselines with standard RL post-training. Specifically, on AIME24, we improve DeepSeek-R1-Distill-Qwen-7B from 54.0 to 61.3 (+7.3) and DeepSeek-R1-Distill-Llama-8B from 43.7 to 53.7 (+10.0). The modular discriminator also enables flexible reward shaping for objectives such as teacher distillation, preference alignment, and mathematical proof-based reasoning.
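The slice-and-score reward loop sketched in the abstract can be illustrated in miniature. The sketch below is an assumption-laden toy, not the paper's implementation: `partition_chain` stands in for the logical slicing schedule (here it just splits into contiguous slices of comparable length), `judge` stands in for the LLM-based discriminator, and `dense_weight` is a hypothetical mixing coefficient between dense per-slice scores and the sparse exact-match outcome.

```python
from typing import Callable, List

def partition_chain(steps: List[str], n_slices: int) -> List[List[str]]:
    """Split a chain of reasoning steps into n_slices contiguous slices
    of comparable length (a stand-in for the paper's logical slicing)."""
    k, r = divmod(len(steps), n_slices)
    slices, start = [], 0
    for i in range(n_slices):
        end = start + k + (1 if i < r else 0)  # spread the remainder evenly
        slices.append(steps[start:end])
        start = end
    return [s for s in slices if s]  # drop empty slices when n_slices > len(steps)

def shaped_rewards(
    steps: List[str],
    final_correct: bool,
    judge: Callable[[List[str]], float],  # discriminator: slice -> soundness in [0, 1]
    n_slices: int = 3,
    dense_weight: float = 0.5,
) -> List[float]:
    """Combine dense per-slice discriminator scores with the sparse
    exact-match outcome into one shaped reward per slice."""
    outcome = 1.0 if final_correct else 0.0
    return [
        dense_weight * judge(s) + (1.0 - dense_weight) * outcome
        for s in partition_chain(steps, n_slices)
    ]

# Toy discriminator: flags any slice containing an obviously wrong equation.
toy_judge = lambda s: 0.0 if any("2+2=5" in step for step in s) else 1.0

chain = ["Let x=3.", "Then 2x=6.", "2+2=5.", "So the answer is 6."]
print(shaped_rewards(chain, final_correct=True, judge=toy_judge, n_slices=2))
# → [1.0, 0.5]: the second slice is penalized even though the final answer matched
```

Even in this toy, the faulty slice receives a lower reward than the sound one despite a correct final answer, which is the credit-assignment benefit the dense on-policy signal is meant to provide over a single exact-match reward.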