Generative Adversarial Reasoner: Enhancing LLM Reasoning with Adversarial Reinforcement Learning

📅 2025-12-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) frequently commit procedural errors during mathematical reasoning, such as arithmetic miscalculations, fragile logic, and pseudo-plausible reasoning steps. Method: We propose Generative Adversarial Reasoner (GAR), a framework featuring chain-of-thought fragmentation for fine-grained scrutiny and structured discriminative feedback. GAR jointly optimizes a reasoning agent and a discriminator at the policy level through adversarial training, generating dense, calibrated step-wise reward signals. It integrates policy-based adversarial reinforcement learning, logical slicing scheduling, and multi-objective reward shaping, including teacher distillation, preference alignment, and proof-oriented modeling. Contribution/Results: GAR significantly improves credit-assignment accuracy and sample efficiency. On AIME24, it boosts DeepSeek-R1-Distill-Qwen-7B and DeepSeek-R1-Distill-Llama-8B by 7.3 and 10.0 points, respectively, surpassing standard RL post-training baselines across all metrics. Moreover, it supports flexible downstream reasoning customization.

📝 Abstract
Large language models (LLMs) with explicit reasoning capabilities excel at mathematical reasoning yet still commit process errors, such as incorrect calculations, brittle logic, and superficially plausible but invalid steps. In this paper, we introduce Generative Adversarial Reasoner, an on-policy joint training framework designed to enhance reasoning by co-evolving an LLM reasoner and an LLM-based discriminator through adversarial reinforcement learning. A compute-efficient review schedule partitions each reasoning chain into logically complete slices of comparable length, and the discriminator evaluates each slice's soundness with concise, structured justifications. Learning couples complementary signals: the LLM reasoner is rewarded for logically consistent steps that yield correct answers, while the discriminator earns rewards for correctly detecting errors or distinguishing traces in the reasoning process. This produces dense, well-calibrated, on-policy step-level rewards that supplement sparse exact-match signals, improving credit assignment, increasing sample efficiency, and enhancing overall reasoning quality of LLMs. Across various mathematical benchmarks, the method delivers consistent gains over strong baselines with standard RL post-training. Specifically, on AIME24, we improve DeepSeek-R1-Distill-Qwen-7B from 54.0 to 61.3 (+7.3) and DeepSeek-R1-Distill-Llama-8B from 43.7 to 53.7 (+10.0). The modular discriminator also enables flexible reward shaping for objectives such as teacher distillation, preference alignment, and mathematical proof-based reasoning.
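The compute-efficient review schedule from the abstract, partitioning a reasoning chain into logically complete slices of comparable length, can be sketched as a greedy packing of reasoning steps under a per-slice length budget. This is a minimal illustration; the function name `slice_chain` and the character-budget heuristic are assumptions, not the paper's exact scheduling algorithm.

```python
def slice_chain(steps, target_len):
    """Greedily pack reasoning steps into slices of comparable length.

    steps: list of step strings (e.g. the chain-of-thought split on
           step boundaries); slices never split a step, so each slice
           stays logically complete.
    target_len: approximate character budget per slice (illustrative
                proxy for the paper's "comparable length" criterion).
    Returns a list of slice strings for the discriminator to review.
    """
    slices, current, size = [], [], 0
    for step in steps:
        current.append(step)
        size += len(step)
        if size >= target_len:  # budget reached: close this slice
            slices.append(" ".join(current))
            current, size = [], 0
    if current:  # flush any remaining steps into a final slice
        slices.append(" ".join(current))
    return slices
```

Packing whole steps (rather than cutting at a fixed character offset) keeps each slice self-contained, so the discriminator can judge its soundness without mid-derivation truncation.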
Problem

Research questions and friction points this paper is trying to address.

LLMs make process errors like incorrect calculations and brittle logic in reasoning
Sparse exact-match rewards hinder credit assignment and sample efficiency in training
Existing methods lack dense step-level feedback to improve reasoning quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial reinforcement learning co-evolves reasoner and discriminator
Compute-efficient review schedule partitions reasoning chains into slices
Dense step-level rewards supplement sparse exact-match signals
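The coupling described in these bullets, dense per-slice discriminator scores supplementing the sparse exact-match outcome, can be sketched as a simple blend. The linear combination, the `alpha` weight, and both helper names are illustrative assumptions, not the paper's exact reward shaping.

```python
def combined_rewards(step_scores, answer_correct, alpha=0.5):
    """Blend dense per-slice soundness scores with the sparse
    exact-match outcome to get one reward per slice.

    step_scores: discriminator soundness scores in [0, 1], one per slice.
    answer_correct: bool, the sparse exact-match result for the trace.
    alpha: illustrative weight on the dense signal.
    """
    outcome = 1.0 if answer_correct else 0.0
    return [alpha * s + (1.0 - alpha) * outcome for s in step_scores]


def discriminator_reward(predicted_faulty, actually_faulty):
    """Adversarial side: the discriminator is rewarded for correctly
    detecting (or clearing) a slice, so the two policies co-evolve."""
    return 1.0 if predicted_faulty == actually_faulty else 0.0
```

Because every slice receives its own reward, policy-gradient updates can assign credit to the specific step that went wrong, rather than spreading a single trace-level signal across the whole chain.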
Authors
Qihao Liu — Johns Hopkins University
Luoxin Ye — Johns Hopkins University
Wufei Ma — Johns Hopkins University (Computer Vision, Deep Learning)
Yu-Cheng Chou — Johns Hopkins University (MLLM, Reinforcement Learning, Computer Vision)
Alan L. Yuille — Johns Hopkins University