Alignment for Efficient Tool Calling of Large Language Models

📅 2025-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the prevalent issues of over-reliance and over-confidence in large language models (LLMs) during tool invocation. To recalibrate models’ awareness of their knowledge boundaries, we propose a multi-objective alignment framework. Methodologically, it introduces: (1) a novel knowledge boundary estimation technique grounded in consistency checking and absolute confidence scoring; and (2) a dynamic decision integration mechanism that jointly leverages probabilistic modeling, supervised fine-tuning, and inference-time intervention. Extensive experiments across diverse scenarios demonstrate that our approach significantly reduces redundant tool calls by 37.2% on average, while preserving task performance. Moreover, it reduces response latency and computational cost, achieving, for the first time, simultaneous optimization of reliability, efficiency, and cost-effectiveness in tool-augmented LLMs.

📝 Abstract
Recent advancements in tool learning have enabled large language models (LLMs) to integrate external tools, enhancing their task performance by expanding their knowledge boundaries. However, relying on tools often introduces tradeoffs between performance, speed, and cost, with LLMs sometimes exhibiting overreliance and overconfidence in tool usage. This paper addresses the challenge of aligning LLMs with their knowledge boundaries to make more intelligent decisions about tool invocation. We propose a multi-objective alignment framework that combines probabilistic knowledge boundary estimation with dynamic decision-making, allowing LLMs to better assess when to invoke tools based on their confidence. Our framework includes two methods for knowledge boundary estimation, consistency-based and absolute estimation, and two training strategies for integrating these estimates into the model's decision-making process. Experimental results on various tool invocation scenarios demonstrate the effectiveness of our framework, showing significant improvements in tool efficiency by reducing unnecessary tool usage.
Problem

Research questions and friction points this paper is trying to address.

Aligning LLMs with knowledge boundaries for efficient tool usage.
Reducing overreliance and overconfidence in tool invocation by LLMs.
Improving tool efficiency through dynamic decision-making and boundary estimation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-objective alignment framework for LLMs
Probabilistic knowledge boundary estimation
Dynamic decision making for tool invocation
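As a rough illustration of the consistency-based knowledge boundary estimation described above, the idea is to sample several answers from the model and treat the agreement rate of the majority answer as a confidence signal: when self-consistency is low, the model is likely near its knowledge boundary and a tool call is warranted. The sketch below is an assumption-laden reconstruction, not the paper's implementation; the sampling step, function names, and the 0.7 threshold are all illustrative.

```python
# Minimal sketch of consistency-based tool-invocation gating.
# All names and the threshold are illustrative assumptions, not the
# paper's actual method; `answers` stands in for N sampled LLM outputs.
from collections import Counter

def consistency_confidence(answers):
    """Fraction of sampled answers that agree with the majority answer."""
    if not answers:
        return 0.0
    _, top_count = Counter(answers).most_common(1)[0]
    return top_count / len(answers)

def should_invoke_tool(answers, threshold=0.7):
    """Call the external tool only when self-consistency is low."""
    return consistency_confidence(answers) < threshold

# Example: four of five samples agree, so the model answers directly.
samples = ["Paris", "Paris", "Paris", "Lyon", "Paris"]
print(consistency_confidence(samples))  # 0.8
print(should_invoke_tool(samples))      # False -> answer without a tool
```

A lower threshold trades fewer tool calls for more risk of answering beyond the model's knowledge; the paper's training strategies integrate such estimates into the model's own decision process rather than applying a fixed cutoff at inference time.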
Hongshen Xu
Shanghai Jiao Tong University
Natural Language Processing, Large Language Model, LLM Alignment
Zihan Wang
X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China
Zichen Zhu
Shanghai Jiao Tong University
GUI Agents, Multimodal Large Language Models, Human-Computer Interaction
Lei Pan
Michigan Technological University
Wetting Film, Froth Flotation, Thin Liquid Film, Surface Force, Hydrophobic Force
Xingyu Chen
X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China
Lu Chen
X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China
Kai Yu
X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China