🤖 AI Summary
Current large language models exhibit insufficient generalization and robustness in mathematical reasoning, primarily due to reliance on spurious surface-level patterns rather than genuine logical inference.
Method: We propose AdaR, an adaptive reasoning framework that jointly optimizes reasoning behavior through logic-equivalent query synthesis (based on variable substitution) and Reinforcement Learning with Verifiable Rewards (RLVR). AdaR uses code execution for answer verification and a sanity check to improve synthetic data quality, and is also applicable to instruct LLMs.
Contribution/Results: Experiments demonstrate that AdaR achieves significant performance gains across multiple mathematical reasoning benchmarks, including GSM8K, MATH, and SVAMP, while maintaining high data efficiency, strong out-of-distribution generalization, and improved interpretability. Notably, AdaR introduces a logic-driven adaptive reasoning mechanism that explicitly separates superficial heuristics from genuine deductive reasoning, thereby mitigating spurious correlations in mathematical problem solving.
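The RLVR step described above rewards model outputs that match an independently verified answer. A minimal sketch of such a verifiable reward function (the function name and answer-extraction heuristic are illustrative assumptions, not the paper's implementation):

```python
import re

def verifiable_reward(model_output: str, gold_answer: int) -> float:
    """Binary verifiable reward: 1.0 iff the last integer in the
    model's output equals the code-executed gold answer."""
    nums = re.findall(r"-?\d+", model_output)
    if not nums:
        return 0.0
    return 1.0 if int(nums[-1]) == gold_answer else 0.0

print(verifiable_reward("... so the answer is 42.", 42))  # -> 1.0
print(verifiable_reward("... so the answer is 41.", 42))  # -> 0.0
```

Because the reward is computed against an executable ground truth rather than a learned reward model, spurious reasoning that happens to fit surface patterns of the original query is penalized on the synthesized variants.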
📝 Abstract
Mathematical reasoning is a primary indicator of the intelligence of large language models (LLMs). However, existing LLMs exhibit failures of robustness and generalization. This paper attributes these deficiencies to spurious reasoning, i.e., producing answers from superficial features. To address this challenge, we propose the AdaR framework to enable adaptive reasoning, wherein models rely on problem-solving logic to produce answers. AdaR synthesizes logically equivalent queries by varying variable values, and trains models with RLVR on these data to penalize spurious logic while encouraging adaptive logic. To improve data quality, we extract the problem-solving logic from the original query and generate the corresponding answer by code execution, then apply a sanity check. Experimental results demonstrate that AdaR improves robustness and generalization, achieving substantial improvements in mathematical reasoning while maintaining high data efficiency. Analysis indicates that data synthesis and RLVR function in a coordinated manner to enable adaptive reasoning in LLMs. Further analyses yield design insights into the effects of critical factors and the applicability of AdaR to instruct LLMs. Our project is available at https://github.com/LaiZhejian/AdaR
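The data-synthesis pipeline in the abstract can be sketched as follows: resample variable values in a query template, compute each variant's answer by executing the extracted problem-solving logic, and keep only variants that pass a sanity check. All names, the template, and the plausibility rule below are hypothetical illustrations, not the paper's actual pipeline:

```python
import random

def solve(vars_):
    # Hypothetical problem-solving logic extracted from the original
    # query, expressed as executable code rather than free-form text.
    return vars_["a"] + vars_["b"]

def synthesize(template, n=3, seed=0):
    """Generate logic-equivalent query variants by varying variable values."""
    rng = random.Random(seed)
    variants = []
    for _ in range(n):
        vars_ = {"a": rng.randint(2, 50), "b": rng.randint(2, 50)}
        answer = solve(vars_)      # answer obtained by code execution
        if answer > 0:             # sanity check: discard implausible variants
            variants.append((template.format(**vars_), answer))
    return variants

pairs = synthesize("Tom has {a} apples and buys {b} more. How many does he have?")
for query, ans in pairs:
    print(query, "->", ans)
```

Training on such variants forces the model to track the underlying logic: a policy that memorizes the original surface form produces wrong answers on resampled values and is penalized by the verifiable reward.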