Verification-Guided Context Optimization for Tool Calling via Hierarchical LLMs-as-Editors

📅 2025-12-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In industrial settings, LLM-based tool invocation suffers from a semantic mismatch between human-written documentation and model comprehension, especially when hundreds of functionally overlapping tools coexist, leading to poor scalability, high ambiguity, and weak robustness. To address this, we propose a verification-guided context optimization framework built on a novel hierarchical LLM editor architecture that integrates state awareness, action-specific modeling, and iterative verification feedback, enabling low-cost, subtask-specialized editing. The method combines structure-aware context reconstruction, offline learning-driven hierarchical editing, and the use of LLMs as editors, via either prompting or fine-tuning. Evaluated on large-scale single-turn tool-invocation tasks, it significantly improves accuracy, robustness, and cross-model generalization, outperforming multi-turn reasoning approaches.

📝 Abstract
Tool calling enables large language models (LLMs) to interact with external environments through tool invocation, providing a practical way to overcome the limitations of pretraining. However, the effectiveness of tool use depends heavily on the quality of the associated documentation and knowledge base context. These materials are usually written for human users and are often misaligned with how LLMs interpret information. This problem is even more pronounced in industrial settings, where hundreds of tools with overlapping functionality create challenges in scalability, variability, and ambiguity. We propose Verification-Guided Context Optimization (VGCO), a framework that uses LLMs as editors to automatically refine tool-related documentation and knowledge base context. VGCO works in two stages. First, Evaluation collects real-world failure cases and identifies mismatches between tools and their context. Second, Optimization performs hierarchical editing through offline learning with structure-aware, in-context optimization. The novelty of our LLM editors has three main aspects. First, they use a hierarchical structure that naturally integrates into the tool-calling workflow. Second, they are state-aware, action-specific, and verification-guided, which constrains the search space and enables efficient, targeted improvements. Third, they enable cost-efficient sub-task specialization, either by prompt engineering large editor models or by post-training smaller editor models. Unlike prior work that emphasizes multi-turn reasoning, VGCO focuses on the single-turn, large-scale tool-calling problem and achieves significant improvements in accuracy, robustness, and generalization across LLMs.
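The two-stage Evaluation → Optimization loop described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all names here (`evaluate`, `optimize`, `call_tool`, `editor`, `verify`, the dictionary-based failure records) are assumptions standing in for the paper's actual interfaces.

```python
# Hypothetical sketch of VGCO's two stages. Stage 1 (Evaluation) collects
# real-world failure cases; Stage 2 (Optimization) lets an LLM editor
# propose targeted documentation edits that are kept only when a
# verification check confirms they fix the observed failure.

def evaluate(tool_docs, test_cases, call_tool):
    """Stage 1: collect failure cases exposing tool/context mismatches."""
    failures = []
    for case in test_cases:
        prediction = call_tool(case["query"], tool_docs)
        if prediction != case["expected_tool"]:
            failures.append({
                "query": case["query"],
                "predicted": prediction,
                "expected": case["expected_tool"],
            })
    return failures

def optimize(tool_docs, failures, editor, verify):
    """Stage 2: verification-guided editing of the offending docs."""
    for failure in failures:
        doc = tool_docs[failure["expected"]]
        candidate = editor(doc, failure)   # LLM editor proposes a targeted rewrite
        if verify(candidate, failure):     # keep the edit only if verification passes
            tool_docs[failure["expected"]] = candidate
    return tool_docs
```

In the full framework, `editor` would itself be hierarchical (state-aware, action-specific sub-editors), and `verify` would replay the failure case against the edited context; the sketch collapses both to single callables to show only the control flow.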
Problem

Research questions and friction points this paper is trying to address.

Optimizes tool documentation for LLM interpretation
Addresses scalability in industrial multi-tool environments
Improves single-turn tool-calling accuracy and robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical LLM editors refine tool documentation
Verification-guided optimization targets context mismatches
Cost-efficient specialization improves single-turn tool calling