AI Summary
This work addresses the challenge that large language models often generate inaccurate function parameters when invoking functions with complex interdependent arguments, due to the absence of explicit reasoning mechanisms. To overcome this limitation, the authors propose a novel function-calling framework that embeds explicit reasoning at both the function and parameter levels. The approach introduces a universal "think" parameter to enhance decision expressivity, dynamically refines parameter descriptions, and automatically triggers fine-grained reasoning based on a complexity-aware scoring mechanism. Notably, the method requires no modifications to the underlying model architecture and preserves full API compatibility. It is the first to enable parameter-level reasoning guidance and improves alignment with human expectations through reasoning coherence. Experiments on ToolBench demonstrate significant gains in generation accuracy and reasoning consistency for multi-parameter functions, while also enhancing the interpretability of agent behavior.
Abstract
Large language models (LLMs) have demonstrated remarkable capabilities in function calling for autonomous agents, yet current mechanisms lack explicit reasoning transparency during parameter generation, particularly for complex functions with interdependent parameters. While existing approaches like chain-of-thought prompting operate at the agent level, they fail to provide fine-grained reasoning guidance for individual function parameters. To address these limitations, we propose Think-Augmented Function Calling (TAFC), a novel framework that enhances function calling accuracy through explicit reasoning at both function and parameter levels. Our method introduces a universal "think" parameter augmentation that enables models to articulate their decision-making process, with dynamic optimization for parameter descriptions to improve reasoning quality. For complex parameters, TAFC automatically triggers granular reasoning based on complexity scoring, ensuring appropriate justification for critical decisions. Additionally, we propose reasoning-guided optimization to align generated reasoning with human expectations. TAFC requires no architectural modifications to existing LLMs while maintaining full API compatibility. Evaluation on ToolBench across proprietary and open-source models demonstrates significant improvements in parameter generation accuracy and reasoning coherence for multi-parameter functions, while providing enhanced interpretability for debugging AI agent behaviors.
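To make the abstract's two core ideas concrete, the Python sketch below shows one plausible way to add a universal "think" parameter to an OpenAI-style function schema and to trigger per-parameter reasoning fields via a complexity score. This is a minimal illustration under stated assumptions: the `complexity_score` heuristic, the `augment_with_think` helper, and the `<name>_think` field naming are hypothetical and are not taken from the paper, which does not specify its scoring mechanism or schema layout in this abstract.

```python
import copy


def complexity_score(param_schema: dict) -> int:
    """Hypothetical complexity heuristic (not the paper's actual scoring):
    nested/structured types and long descriptions count as complex."""
    score = 0
    if param_schema.get("type") in ("object", "array"):
        score += 2  # structured values often have interdependent fields
    if len(param_schema.get("enum", [])) > 3:
        score += 1  # many discrete choices warrant justification
    if len(param_schema.get("description", "")) > 80:
        score += 1  # long descriptions hint at subtle semantics
    return score


def augment_with_think(tool_schema: dict, threshold: int = 2) -> dict:
    """Return a copy of a function-calling schema augmented with:
    1) a universal 'think' parameter (function-level reasoning), and
    2) a '<name>_think' field for each parameter whose complexity
       score meets the threshold (parameter-level reasoning).
    No model or API changes are needed; only the schema is edited."""
    schema = copy.deepcopy(tool_schema)
    props = schema["parameters"]["properties"]

    # Universal think parameter: the model articulates its decision process.
    props["think"] = {
        "type": "string",
        "description": "Step-by-step reasoning for calling this function "
                       "and for each argument value chosen.",
    }

    # Fine-grained reasoning, triggered only for complex parameters.
    for name, p in list(props.items()):
        if name != "think" and complexity_score(p) >= threshold:
            props[f"{name}_think"] = {
                "type": "string",
                "description": f"Justification for the value of '{name}'.",
            }

    # Require the reasoning field so it is always generated first.
    schema["parameters"].setdefault("required", []).insert(0, "think")
    return schema
```

Because the augmentation is pure schema rewriting, it preserves API compatibility as the abstract claims: the caller can strip the `think` and `*_think` fields from the generated arguments before dispatching the real function.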