AI Summary
This work addresses limitations in existing financial dialogue data synthesis methods, which rely on a reverse synthesis paradigm that produces overly explicit queries, lacks real-world event-driven dynamics, and struggles to simulate tool retrieval over large-scale tool spaces. To overcome these challenges, we propose a forward synthesis framework that leverages role-guided instructions, atomic tool composition, and dynamic tool retrieval to generate dialogues more representative of authentic financial scenarios. We construct a comprehensive tool library comprising 43,066 functions and synthesize 148,000 high-quality dialogue instances. Furthermore, we establish the first benchmark specifically designed for evaluating financial tool usage. Experimental results demonstrate that models trained on our synthesized data achieve a 21.06% improvement in tool-calling accuracy.
Abstract
Tool-use capabilities are vital for Large Language Models (LLMs) in finance, a domain characterized by massive investment targets and data-intensive inquiries. However, existing data synthesis methods typically rely on a reverse synthesis paradigm, generating user queries from pre-sampled tools. This approach inevitably introduces artificial explicitness, yielding queries that fail to capture the implicit, event-driven nature of real-world needs. Moreover, its reliance on static tool sets overlooks the dynamic retrieval process required to navigate massive tool spaces. To address these challenges, we introduce FinToolSyn, a forward synthesis framework designed to generate high-quality financial dialogues. Progressing from persona instruction and atomic tool synthesis to dynamic-retrieval dialogue generation, our pipeline constructs a repository of 43,066 tools and synthesizes over 148k dialogue instances, incorporating dynamic retrieval to emulate the noisy candidate sets typical of massive tool spaces. We also establish a dedicated benchmark to evaluate tool-calling capabilities in realistic financial scenarios. Extensive experiments demonstrate that models trained on FinToolSyn achieve a 21.06% improvement in tool-calling accuracy, providing a robust foundation for tool learning in financial scenarios.
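To make the "dynamic retrieval over a massive tool space" idea concrete, here is a minimal sketch of retrieving a noisy top-k candidate set for a user query. This is an illustration only, not the paper's implementation: the miniature `TOOL_LIBRARY`, the bag-of-words `embed` function, and all tool names are hypothetical stand-ins (a real pipeline over 43,066 tools would use a dense retriever), but the key property is the same: the relevant tool arrives mixed with near-miss distractors that the model must disambiguate at call time.

```python
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words vector; a real system would use a dense encoder."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical miniature tool library (FinToolSyn's repository holds 43,066 tools).
TOOL_LIBRARY = {
    "get_stock_price": "query latest stock price for a ticker",
    "get_dividend_history": "retrieve dividend payout history for a stock",
    "get_bond_yield": "fetch current bond yield curve data",
    "get_fx_rate": "look up foreign exchange rate between currencies",
    "screen_stocks": "screen stocks by market cap and sector",
}

def retrieve_candidates(query, k=3):
    """Return the top-k tools by similarity: a noisy candidate set that
    typically mixes the relevant tool with near-miss distractors."""
    q = embed(query)
    ranked = sorted(TOOL_LIBRARY,
                    key=lambda name: cosine(q, embed(TOOL_LIBRARY[name])),
                    reverse=True)
    return ranked[:k]
```

During dialogue synthesis, each turn's query is routed through such a retriever, so the generated tool calls are conditioned on a realistic candidate set rather than a single pre-sampled tool, which is the core difference from reverse synthesis.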