Towards Effective Code-Integrated Reasoning

📅 2025-05-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses three core challenges in code-integrated reasoning: imprecise timing of tool invocation, inefficient utilization of execution feedback, and instability in tool-augmented reinforcement learning (RL) training. To this end, we propose an enhanced tool-augmented RL framework that jointly integrates interactive training with a code interpreter, progressive curriculum learning, and an exploration–stability co-optimization strategy—enabling models to dynamically acquire the capability to determine *when* to invoke tools, *how* to generate executable code, and *how* to iteratively refine reasoning. Evaluated on five mainstream mathematical reasoning benchmarks, our method significantly outperforms strong baselines, achieving an average +4.2% absolute accuracy gain while improving inference efficiency by reducing average step count by 18%. Notably, this is the first systematic study to demonstrate that code integration not only extends model capability boundaries but also yields dual gains—enhancing both reasoning quality and computational efficiency—through a closed-loop feedback mechanism.

Technology Category

Application Category

📝 Abstract
In this paper, we investigate code-integrated reasoning, where models generate code when necessary and integrate feedback by executing it through a code interpreter. To acquire this capability, models must learn when and how to use external code tools effectively, which is supported by tool-augmented reinforcement learning (RL) through interactive learning. Despite its benefits, tool-augmented RL can still suffer from potential instability in the learning dynamics. In light of this challenge, we present a systematic approach to improving the training effectiveness and stability of tool-augmented RL for code-integrated reasoning. Specifically, we develop enhanced training strategies that balance exploration and stability, progressively building tool-use capabilities while improving reasoning performance. Through extensive experiments on five mainstream mathematical reasoning benchmarks, our model demonstrates significant performance improvements over multiple competitive baselines. Furthermore, we conduct an in-depth analysis of the mechanism and effect of code-integrated reasoning, revealing several key insights, such as the extension of model's capability boundaries and the simultaneous improvement of reasoning efficiency through code integration. All data and code for reproducing this work are available at: https://github.com/RUCAIBox/CIR.
Problem

Research questions and friction points this paper is trying to address.

Enhancing code-integrated reasoning with tool-augmented RL
Improving training stability for effective code tool usage
Boosting reasoning performance via dynamic code integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Code-integrated reasoning with interpreter feedback
Tool-augmented reinforcement learning for stability
Enhanced training balancing exploration and stability
🔎 Similar Papers
No similar papers found.
F
Fei Bai
Gaoling School of Artificial Intelligence, Renmin University of China
Yingqian Min
Yingqian Min
Renmin University of China
Natural Language ProcessingLarge Language Models
B
Beichen Zhang
Gaoling School of Artificial Intelligence, Renmin University of China
Z
Zhipeng Chen
Gaoling School of Artificial Intelligence, Renmin University of China
Wayne Xin Zhao
Wayne Xin Zhao
Professor, Renmin University of China
Recommender SystemNatural Language ProcessingLarge Language Model
L
Lei Fang
Z
Zheng Liu
BAAI3DataCanvas Alaya NeW
Z
Zhongyuan Wang
BAAI3DataCanvas Alaya NeW
Ji-Rong Wen
Ji-Rong Wen
Gaoling School of Artificial Intelligence, Renmin University of China
Large Language ModelWeb SearchInformation RetrievalMachine Learning