🤖 AI Summary
Current reinforcement learning (RL) frameworks for large language model (LLM) tool use suffer from inefficiency and instability, particularly due to reliance on external environments and coarse-grained reward signals.
Method: This paper proposes a closed-loop RL paradigm specifically designed for tool invocation. It introduces an automated environment generation pipeline that supports scenario decomposition, function integration, complexity grading, and automatic documentation generation. A verifiable, fine-grained reward mechanism is designed to enable stable, localized training without external tool dependencies.
Contribution/Results: The framework jointly optimizes environment construction and reward design to balance task precision with generalization capability preservation. Experiments across multiple LLM scales demonstrate significant improvements in tool-call success rate and task completion rate. Ablation and parameter analysis confirm that the method effectively drives updates in underlying MLP layers, enhancing contextual understanding and reasoning capabilities.
📝 Abstract
Effective tool use is essential for large language models (LLMs) to interact meaningfully with their environment. However, progress is limited by the lack of efficient reinforcement learning (RL) frameworks specifically designed for tool use, due to challenges in constructing stable training environments and designing verifiable reward mechanisms. To address this, we propose an automated environment construction pipeline, incorporating scenario decomposition, document generation, function integration, complexity scaling, and localized deployment. This enables the creation of high-quality training environments that provide detailed and measurable feedback without relying on external tools. Additionally, we introduce a verifiable reward mechanism that evaluates both the precision of tool use and the completeness of task execution. When combined with trajectory data collected from the constructed environments, this mechanism integrates seamlessly with standard RL algorithms to facilitate feedback-driven model training. Experiments on LLMs of varying scales demonstrate that our approach significantly enhances the models' tool-use performance without degrading their general capabilities, regardless of inference modes or training algorithms. Our analysis suggests that these gains result from improved context understanding and reasoning, driven by updates to the lower-layer MLP parameters in models.