GOAT: A Training Framework for Goal-Oriented Agent with Tools

📅 2025-10-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) struggle to plan and execute multi-step, interdependent API calls for goal-oriented tasks, primarily due to scarce high-quality training data and limited tool-use capability in open-source models. Method: We propose GOAT, a self-supervised training framework that automatically synthesizes multi-step tool-calling data without human annotation. GOAT parses API documentation, integrates chain-of-thought reasoning with reinforcement learning, and explicitly models semantic dependencies among API invocations. Contribution/Results: GOAT is the first framework to explicitly encode inter-API semantic dependencies and support end-to-end planning and execution. Experiments demonstrate state-of-the-art performance across multiple goal-oriented benchmarks and a newly introduced evaluation suite, GOATBench, significantly enhancing the complex tool-use proficiency of open-source LLMs.

📝 Abstract
Large language models (LLMs) have recently been extended beyond traditional text generation to serve as interactive agents capable of using external tools based on user intent. However, current LLM agents still show limited ability to handle goal-oriented queries, which require decomposing a high-level objective into multiple interdependent API calls with correct planning and execution. Current approaches mainly rely on zero-shot evaluation due to the absence of training data. While proprietary closed-source models such as GPT-4 demonstrate strong reasoning abilities, smaller open-source models struggle to perform complex tool use effectively. Thus, we propose a novel training framework GOAT, which enables fine-tuning of LLM agents in a human annotation-free setting. GOAT automatically constructs synthetic datasets of goal-oriented API execution tasks directly from given API documents, equipping models with the ability to reason over interdependent calls and generate coherent responses. Through extensive experiments, we show that GOAT-trained agents achieve state-of-the-art performance across multiple existing goal-oriented benchmarks. In addition, we introduce GOATBench, a new goal-oriented API execution benchmark, and demonstrate that agents trained with GOAT also excel in this setting. These results highlight GOAT as a practical path toward building robust open-source LLM agents capable of complex reasoning and tool use.
Problem

Research questions and friction points this paper is trying to address.

Enhancing goal-oriented query handling through API decomposition
Addressing limited training data for complex tool-use reasoning
Improving open-source LLM agents' interdependent API execution ability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatically constructs synthetic datasets from API documents
Enables fine-tuning LLM agents without human annotation
Trains models to reason over interdependent API calls
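
The core difficulty the paper targets is that goal-oriented tasks require chaining API calls whose inputs depend on earlier outputs. The toy sketch below illustrates that interdependence pattern only; it is not GOAT's implementation, and all API names (`search_flights`, `book_flight`) are invented for illustration.

```python
# Toy illustration of interdependent API calls: the output of one call
# (a flight ID) is required as input to the next. A goal-oriented agent
# must plan this ordering rather than call the APIs independently.

def search_flights(origin: str, destination: str) -> dict:
    # Stub API: returns an identifier that downstream calls depend on.
    return {"flight_id": f"{origin}-{destination}-001"}

def book_flight(flight_id: str, passenger: str) -> dict:
    # Stub API: semantically depends on search_flights' output.
    return {"confirmation": f"booked {flight_id} for {passenger}"}

def execute_plan(goal: dict) -> dict:
    """Decompose a high-level goal into two dependent API calls."""
    found = search_flights(goal["origin"], goal["destination"])
    return book_flight(found["flight_id"], goal["passenger"])

result = execute_plan(
    {"origin": "ICN", "destination": "SFO", "passenger": "Alice"}
)
print(result["confirmation"])
```

An agent that invoked `book_flight` before `search_flights`, or dropped the `flight_id` linking the two, would fail the task; GOAT's synthetic training data is built to teach exactly this kind of dependency-aware planning.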
Hyunji Min, Korea University
Sangwon Jung, Korea University & KAIST
Junyoung Sung, Korea University
Dosung Lee, Korea University
Leekyeung Han, Korea University
Paul Hongsuck Seo, Korea University
Multimodal Interactive Intelligence · Vision · Speech and Language Understanding