🤖 AI Summary
Large language models (LLMs) struggle to plan and execute multi-step, interdependent API calls for goal-oriented tasks, primarily due to scarce high-quality training data and the limited tool-use capabilities of open-source models.
Method: We propose GOAT, a self-supervised training framework that automatically synthesizes multi-step tool-calling data without human annotation. GOAT parses API documentation, integrates chain-of-thought reasoning with reinforcement learning, and explicitly models semantic dependencies among API invocations.
Contribution/Results: GOAT is the first framework to explicitly encode inter-API semantic dependencies and support end-to-end planning and execution. Experiments demonstrate state-of-the-art performance across multiple goal-oriented benchmarks and a newly introduced evaluation suite, GOATBench, significantly enhancing the complex tool-use proficiency of open-source LLMs.
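To make the core setting concrete, the sketch below shows a toy goal-oriented task in which one API call's output feeds the next. The API names (`search_city`, `get_forecast`) and the simple sequential executor are hypothetical illustrations of interdependent tool calls, not GOAT's actual implementation:

```python
def search_city(name):
    # Hypothetical API: resolve a city name to an internal ID.
    return {"city_id": f"id-{name.lower()}"}

def get_forecast(city_id):
    # Hypothetical API: fetch the forecast for a resolved city ID.
    return {"city_id": city_id, "forecast": "sunny"}

def execute_plan(plan, args):
    """Run interdependent API calls in order, threading outputs into inputs."""
    state = dict(args)
    for api, needed in plan:
        kwargs = {k: state[k] for k in needed}  # semantic dependency: each call
        state.update(api(**kwargs))             # consumes earlier calls' outputs
    return state

# "What's the weather in Paris?" decomposes into two dependent calls:
plan = [(search_city, ["name"]), (get_forecast, ["city_id"])]
result = execute_plan(plan, {"name": "Paris"})
# result["forecast"] == "sunny"
```

An agent must infer both the call order and the argument wiring (here, that `get_forecast` needs the `city_id` produced by `search_city`); this inter-call dependency structure is what the summary above refers to as semantic dependencies among API invocations.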
📝 Abstract
Large language models (LLMs) have recently been extended beyond traditional text generation to serve as interactive agents capable of using external tools based on user intent. However, current LLM agents still show limited ability to handle goal-oriented queries, which require decomposing a high-level objective into multiple interdependent API calls with correct planning and execution. Existing approaches mainly rely on zero-shot evaluation due to the absence of training data. While proprietary closed-source models such as GPT-4 demonstrate strong reasoning abilities, smaller open-source models struggle to perform complex tool use effectively. Thus, we propose GOAT, a novel training framework that enables fine-tuning of LLM agents without human annotation. GOAT automatically constructs synthetic datasets of goal-oriented API execution tasks directly from given API documents, equipping models with the ability to reason over interdependent calls and generate coherent responses. Through extensive experiments, we show that GOAT-trained agents achieve state-of-the-art performance across multiple existing goal-oriented benchmarks. In addition, we introduce GOATBench, a new goal-oriented API execution benchmark, and demonstrate that agents trained with GOAT also excel in this setting. These results highlight GOAT as a practical path toward building robust open-source LLM agents capable of complex reasoning and tool use.