Reasoning and Tool-use Compete in Agentic RL: From Quantifying Interference to Disentangled Tuning

📅 2026-02-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a critical limitation in reinforcement learning for intelligent agents: the performance degradation caused by gradient interference when reasoning and tool-use capabilities are trained jointly. The study presents the first empirical characterization of this interference and introduces Disentangled Action Reasoning Tuning (DART), a novel framework that mitigates the conflict. DART employs a Linear Effect Attribution System (LEAS) to quantify gradient interference between reasoning and tool-use behaviors and uses separate low-rank adaptation modules to decouple their parameter updates. Challenging the prevailing joint-training paradigm, the method achieves an average performance gain of 6.35% within a single-model architecture, matching the effectiveness of multi-agent systems while preserving architectural simplicity and training efficiency.
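The interference the paper quantifies with LEAS is commonly diagnosed via the cosine similarity between per-objective gradients on the shared parameters; a negative cosine means the two objectives push updates in conflicting directions. The paper's exact attribution procedure is not reproduced here; the sketch below shows only this standard gradient-cosine diagnostic, with toy gradient vectors as an assumption:

```python
import numpy as np

def gradient_cosine(g_reason, g_tool):
    """Cosine similarity between two gradient vectors.

    A negative value indicates the reasoning and tool-use objectives
    pull the shared parameters in conflicting directions.
    """
    g_reason = np.asarray(g_reason, dtype=float).ravel()
    g_tool = np.asarray(g_tool, dtype=float).ravel()
    denom = np.linalg.norm(g_reason) * np.linalg.norm(g_tool)
    if denom == 0.0:
        return 0.0
    return float(g_reason @ g_tool / denom)

# Toy gradients pointing in roughly opposite directions.
g_r = np.array([1.0, -0.5, 0.2])
g_t = np.array([-0.8, 0.6, -0.1])
print(gradient_cosine(g_r, g_t))  # negative => gradient interference
```

In practice `g_r` and `g_t` would be gradients of the reasoning and tool-use losses with respect to the same shared weights, accumulated over a batch.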

📝 Abstract
Agentic Reinforcement Learning (ARL) focuses on training large language models (LLMs) to interleave reasoning with external tool execution to solve complex tasks. Most existing ARL methods train a single set of shared model parameters to support both reasoning and tool-use behaviors, implicitly assuming that joint training improves overall agent performance. Despite its widespread adoption, this assumption has rarely been examined empirically. In this paper, we systematically investigate it by introducing a Linear Effect Attribution System (LEAS), which provides quantitative evidence of interference between reasoning and tool-use behaviors. Through an in-depth analysis, we show that these two capabilities often induce misaligned gradient directions, leading to training interference that undermines the effectiveness of joint optimization and challenges the prevailing ARL paradigm. To address this issue, we propose Disentangled Action Reasoning Tuning (DART), a simple and efficient framework that explicitly decouples parameter updates for reasoning and tool-use via separate low-rank adaptation modules. Experimental results show that DART consistently outperforms baseline methods by an average of 6.35% and, with a single model, achieves performance comparable to multi-agent systems that explicitly separate tool-use and reasoning.
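The decoupling described in the abstract can be illustrated with a generic per-behavior low-rank adapter sketch. All names (`A_reason`, `B_tool`, the `mode` routing rule) and dimensions below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # hidden size and adapter rank (illustrative values)

W = rng.normal(size=(d, d))  # frozen shared base weight

# One low-rank adapter per behavior. Standard LoRA init: A random, B zero,
# so each adapter initially contributes no update (delta = B @ A = 0).
A_reason = rng.normal(size=(r, d)) * 0.01
B_reason = np.zeros((d, r))
A_tool = rng.normal(size=(r, d)) * 0.01
B_tool = np.zeros((d, r))

def forward(x, mode):
    """Apply the shared weight plus the behavior-specific low-rank update."""
    if mode == "reasoning":
        delta = B_reason @ A_reason
    else:  # tool-use / action segments
        delta = B_tool @ A_tool
    return (W + delta) @ x
```

During RL training, gradients from reasoning segments would update only `A_reason`/`B_reason` and gradients from tool calls only `A_tool`/`B_tool`, so the two objectives no longer compete over the same parameters while the base model stays shared.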
Problem

Research questions and friction points this paper is trying to address.

Agentic Reinforcement Learning
reasoning
tool-use
training interference
parameter sharing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Disentangled Tuning
Reasoning-Tool Interference
Linear Effect Attribution System
Low-Rank Adaptation
Agentic Reinforcement Learning