Rethinking the Role of Entropy in Optimizing Tool-Use Behaviors for Large Language Model Agents

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiency and performance degradation of large language model agents in long-horizon tasks, often caused by redundant or low-quality tool invocations. The study introduces entropy reduction as a novel supervisory signal, demonstrating its strong positive correlation with high-quality tool usage. To leverage this insight, the authors propose a dual-reward mechanism: a sparse outcome-based reward to enhance efficiency and a dense process-oriented reward to improve overall performance. Experimental results show that the approach reduces the average number of tool calls by 72.07% compared to baseline methods while maintaining task performance. Moreover, the dense reward strategy yields a significant 22.27% performance gain, overcoming the limitations of conventional paradigms that rely solely on final outcome feedback.

📝 Abstract
Tool-using agents based on Large Language Models (LLMs) excel in tasks such as mathematical reasoning and multi-hop question answering. However, over long trajectories, agents often trigger excessive, low-quality tool calls, which increase latency, degrade inference performance, and make tool-use behavior difficult to manage. In this work, we conduct entropy-based pilot experiments and observe a strong positive correlation between entropy reduction and high-quality tool calls. Building on this finding, we propose using entropy reduction as a supervisory signal and design two reward strategies that address the differing needs of tool-use optimization. Sparse outcome rewards provide coarse, trajectory-level guidance to improve efficiency, while dense process rewards offer fine-grained supervision to enhance performance. Experiments across diverse domains show that both reward designs improve tool-use behavior: the former reduces tool calls by 72.07% relative to the baseline average, while the latter improves performance by 22.27%. These results position entropy reduction as a key mechanism for enhancing tool-use behavior, enabling agents to be more adaptive in real-world applications.
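The abstract does not spell out the reward formulation, but the core idea, rewarding a tool call by how much it reduces the model's predictive entropy, can be sketched as follows. The function names and the toy distributions are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def token_entropy(logits: np.ndarray) -> float:
    """Shannon entropy (in nats) of the next-token distribution given raw logits."""
    logits = logits - logits.max()  # shift for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(-(probs * np.log(probs + 1e-12)).sum())

def process_reward(entropy_before: float, entropy_after: float) -> float:
    """Dense process reward: positive when the tool call reduced uncertainty,
    near zero (or negative) for a redundant or low-quality call."""
    return entropy_before - entropy_after

# Toy example: a flat distribution before a tool call, a peaked one after.
flat = np.log(np.array([0.25, 0.25, 0.25, 0.25]))    # maximally uncertain
peaked = np.log(np.array([0.90, 0.05, 0.03, 0.02]))  # confident after evidence
reward = process_reward(token_entropy(flat), token_entropy(peaked))
# reward > 0, signaling a high-quality tool call
```

Under this view, the sparse outcome variant would aggregate such a signal at the trajectory level, while the dense variant credits each individual call.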
Problem

Research questions and friction points this paper is trying to address.

tool-use behavior
large language models
entropy
tool calls
inference performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

entropy reduction
tool-use optimization
large language model agents
reward design
adaptive reasoning