Think-Augmented Function Calling: Improving LLM Parameter Accuracy Through Embedded Reasoning

πŸ“… 2026-01-26
πŸ“ˆ Citations: 1
✨ Influential: 0
πŸ€– AI Summary
This work addresses the challenge that large language models often generate inaccurate function parameters when invoking functions with complex interdependent arguments, due to the absence of explicit reasoning mechanisms. To overcome this limitation, the authors propose a novel function-calling framework that embeds explicit reasoning at both the function and parameter levels. The approach introduces a universal β€œthink” parameter to enhance decision expressivity, dynamically refines parameter descriptions, and automatically triggers fine-grained reasoning based on a complexity-aware scoring mechanism. Notably, the method requires no modifications to the underlying model architecture and preserves full API compatibility. It is the first to enable parameter-level reasoning guidance and improves alignment with human expectations through reasoning coherence. Experiments on ToolBench demonstrate significant gains in generation accuracy and reasoning consistency for multi-parameter functions, while also enhancing the interpretability of agent behavior.
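To make the core idea concrete, here is a minimal sketch of how a universal "think" parameter could be prepended to an OpenAI-style tool schema, as the summary describes. The function name `augment_with_think` and the example `search_flights` schema are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: augmenting a tool schema with a universal "think"
# parameter so the model must articulate its reasoning before filling in
# the remaining arguments. Names here are illustrative, not from the paper.

def augment_with_think(tool_schema: dict) -> dict:
    """Return a copy of an OpenAI-style tool schema with a required
    'think' parameter placed first among the properties."""
    schema = {**tool_schema}
    params = {**schema.get("parameters", {"type": "object", "properties": {}})}
    props = {
        "think": {
            "type": "string",
            "description": (
                "Step-by-step reasoning about why this function is called "
                "and how each argument value was chosen."
            ),
        }
    }
    props.update(params.get("properties", {}))  # original params follow
    params["properties"] = props
    params["required"] = ["think"] + [
        r for r in params.get("required", []) if r != "think"
    ]
    schema["parameters"] = params
    return schema

# Example: a toy flight-search tool (illustrative schema)
flight_search = {
    "name": "search_flights",
    "parameters": {
        "type": "object",
        "properties": {
            "origin": {"type": "string"},
            "destination": {"type": "string"},
            "depart_date": {"type": "string", "description": "YYYY-MM-DD"},
        },
        "required": ["origin", "destination"],
    },
}

augmented = augment_with_think(flight_search)
```

Because the augmentation only edits the schema, it matches the summary's claim of requiring no model changes and preserving full API compatibility.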

πŸ“ Abstract
Large language models (LLMs) have demonstrated remarkable capabilities in function calling for autonomous agents, yet current mechanisms lack explicit reasoning transparency during parameter generation, particularly for complex functions with interdependent parameters. While existing approaches like chain-of-thought prompting operate at the agent level, they fail to provide fine-grained reasoning guidance for individual function parameters. To address these limitations, we propose Think-Augmented Function Calling (TAFC), a novel framework that enhances function calling accuracy through explicit reasoning at both function and parameter levels. Our method introduces a universal "think" parameter augmentation that enables models to articulate their decision-making process, with dynamic optimization for parameter descriptions to improve reasoning quality. For complex parameters, TAFC automatically triggers granular reasoning based on complexity scoring, ensuring appropriate justification for critical decisions. Additionally, we propose reasoning-guided optimization to align generated reasoning with human expectations. TAFC requires no architectural modifications to existing LLMs while maintaining full API compatibility. Evaluation on ToolBench across proprietary and open-source models demonstrates significant improvements in parameter generation accuracy and reasoning coherence for multi-parameter functions, while providing enhanced interpretability for debugging AI agent behaviors.
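The abstract's complexity-scoring trigger can be sketched as a simple threshold check over structural signals in a parameter's schema. The heuristic below (nesting, value constraints, interdependencies) is an illustrative assumption; the paper's actual scoring function is not reproduced here.

```python
# Hypothetical sketch of a complexity-aware trigger for parameter-level
# reasoning. The scoring heuristic is illustrative, not the paper's metric.

def parameter_complexity(param_schema: dict, depends_on: int = 0) -> float:
    """Score a parameter's complexity from simple structural signals:
    nesting, value constraints, and interdependencies with other params."""
    score = 0.0
    if param_schema.get("type") in ("object", "array"):
        score += 1.0  # nested structures need more careful construction
    if "enum" in param_schema or "pattern" in param_schema:
        score += 0.5  # constrained values are easier to get wrong
    score += 0.5 * depends_on  # each interdependent parameter adds risk
    return score

def needs_reasoning(param_schema: dict, depends_on: int = 0,
                    threshold: float = 1.0) -> bool:
    """Trigger fine-grained, per-parameter reasoning only when the
    complexity score crosses the threshold."""
    return parameter_complexity(param_schema, depends_on) >= threshold
```

In this sketch, a plain string parameter falls below the threshold and gets no extra reasoning, while a nested object that depends on another argument crosses it and triggers a per-parameter "think" step.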
Problem

Research questions and friction points this paper is trying to address.

function calling
parameter accuracy
reasoning transparency
interdependent parameters
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Think-Augmented Function Calling
explicit reasoning
parameter-level reasoning
function calling
reasoning-guided optimization
Lei Wei
Alibaba International Digital Commerce Group
Jinpeng Ou
School of Software and Microelectronics, Peking University
Xiao Peng
Alibaba International Digital Commerce Group
Bin Wang
School of Software, Tsinghua University
Computer Graphics · Geometry Processing · Image Understanding