🤖 AI Summary
Large language models (LLMs) struggle with zero-shot external tool invocation—especially when tool documentation is missing, incomplete, or noisy, and neither human annotations nor domain-specific prior knowledge are available.
Method: We propose a fully automated tool instruction optimization framework centered on “tool self-play”: a black-box exploration phase followed by input-output behavioral clustering, dynamic prompt reconstruction, and a feedback-driven self-verification loop to autonomously infer tool functionality and generate high-quality usage examples.
Contribution/Results: Our method requires no fine-tuning, no labeled data, and no access to tool documentation or source code, making it compatible with both open- and closed-weight LLMs. Evaluated across diverse real-world tasks, it significantly improves zero-shot tool-call accuracy, demonstrating strong generalization and plug-and-play capability.
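The self-play pipeline above (black-box probing, behavioral clustering, example generation, and self-verification) can be illustrated with a minimal sketch. This is not the paper's implementation: the tool, the probe inputs, and the type-based clustering signature are all simplified stand-ins for what would normally be LLM-generated probes and semantic clustering.

```python
from collections import defaultdict

def play_with_tool(tool, probe_inputs):
    """Black-box exploration: record the tool's input-output behavior."""
    traces = []
    for x in probe_inputs:
        try:
            traces.append((x, tool(x)))
        except Exception as e:  # failed calls are informative too
            traces.append((x, f"error: {e}"))
    return traces

def cluster_by_behavior(traces):
    """Group probes by a coarse behavioral signature (here: output type)."""
    clusters = defaultdict(list)
    for x, y in traces:
        clusters[type(y).__name__].append((x, y))
    return dict(clusters)

def build_examples(clusters, max_per_cluster=1):
    """Pick representative input-output pairs as candidate usage examples."""
    examples = []
    for pairs in clusters.values():
        examples.extend(pairs[:max_per_cluster])
    return examples

def self_verify(tool, examples):
    """Feedback loop: keep only examples the tool actually reproduces."""
    verified = []
    for x, y in examples:
        try:
            if tool(x) == y:
                verified.append((x, y))
        except Exception:
            pass  # drop examples that no longer execute cleanly
    return verified

# Toy "undocumented" tool standing in for a real black-box API.
def mystery_tool(x):
    return x * x

traces = play_with_tool(mystery_tool, [2, 3, "a"])
clusters = cluster_by_behavior(traces)
examples = self_verify(mystery_tool, build_examples(clusters))
# `examples` now holds verified input-output pairs usable as few-shot prompts.
```

In the real framework, an LLM would propose the probe inputs, cluster outputs by semantics rather than type, and rewrite the tool's documentation from the verified examples; the control flow, however, follows the same explore-cluster-verify loop.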
📝 Abstract
Large language models (LLMs) are increasingly integrated with specialized external tools, yet many tasks demand zero-shot tool usage with minimal or noisy documentation. Existing solutions rely on manual rewriting or labeled data for validation, making them inapplicable in true zero-shot settings. To address these challenges, we propose PLAY2PROMPT, an automated framework that systematically "plays" with each tool to explore its input-output behaviors. Through this iterative trial-and-error process, PLAY2PROMPT refines tool documentation and generates usage examples without any labeled data. These examples not only guide LLM inference but also serve as validation to further enhance tool utilization. Extensive experiments on real-world tasks demonstrate that PLAY2PROMPT significantly improves zero-shot tool performance across both open and closed models, offering a scalable and effective solution for domain-specific tool integration.