🤖 AI Summary
This work addresses the inefficiency and performance degradation of large language model agents in long-horizon tasks, which is often caused by redundant or low-quality tool invocations. The study introduces entropy reduction as a novel supervisory signal, demonstrating its strong positive correlation with high-quality tool usage. To leverage this insight, the authors propose a dual-reward mechanism: a sparse outcome-based reward to enhance efficiency and a dense process-oriented reward to improve overall performance. Experimental results show that the sparse reward reduces tool calls by 72.07% relative to the baseline average while maintaining task performance, and the dense reward yields a 22.27% performance gain, overcoming the limitations of conventional paradigms that rely solely on final-outcome feedback.
📝 Abstract
Tool-using agents based on Large Language Models (LLMs) excel at tasks such as mathematical reasoning and multi-hop question answering. However, over long trajectories, agents often trigger excessive, low-quality tool calls, which increases latency, degrades inference performance, and makes tool-use behavior difficult to manage. In this work, we conduct entropy-based pilot experiments and observe a strong positive correlation between entropy reduction and high-quality tool calls. Building on this finding, we propose using entropy reduction as a supervisory signal and design two reward strategies that address different aspects of optimizing tool-use behavior: sparse outcome rewards provide coarse, trajectory-level guidance to improve efficiency, while dense process rewards offer fine-grained supervision to enhance performance. Experiments across diverse domains show that both reward designs improve tool-use behavior: the former reduces tool calls by 72.07% relative to the average of the baselines, while the latter improves performance by 22.27%. These results position entropy reduction as a key mechanism for enhancing tool-use behavior, enabling agents to be more adaptive in real-world applications.
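The abstract does not spell out how the entropy-reduction signal is computed, but the core idea can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: it assumes the agent's uncertainty is summarized as a discrete distribution over candidate answers, and scores a tool call by the drop in Shannon entropy it produces, H(before) − H(after).

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_reduction_reward(probs_before, probs_after):
    """Hypothetical reward for a single tool call: how much the call
    reduced the model's predictive uncertainty. A high-quality call
    sharpens the distribution (positive reward); a redundant call
    leaves entropy roughly unchanged (reward near zero)."""
    return entropy(probs_before) - entropy(probs_after)

# Toy example: before the tool call, the model is uncertain over
# four candidate answers; after retrieving evidence, the distribution
# concentrates on one answer, so the reward is positive.
before = [0.25, 0.25, 0.25, 0.25]   # H = ln(4) ≈ 1.386 nats
after = [0.85, 0.05, 0.05, 0.05]    # sharper distribution, lower entropy
reward = entropy_reduction_reward(before, after)
```

Under this view, the two reward strategies differ only in granularity: a sparse outcome reward would aggregate such entropy reductions (or the final outcome) over the whole trajectory, while a dense process reward would assign a per-call signal like the one above.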