🤖 AI Summary
Natural language queries in industrial settings often require joint retrieval across heterogeneous data sources (relational databases and RESTful APIs), posing significant challenges for semantic understanding, cross-source coordination, and result integration.
Method: This paper proposes a declarative multi-source query framework that leverages LLMs to semantically parse NL questions into unified, executable declarative query plans. These plans automatically orchestrate hybrid data source invocations (SQL execution and API calls) and fuse results end-to-end, without generating imperative code or relying on opaque agent-based reasoning.
Contribution/Results: The framework introduces composable, verifiable declarative abstractions that decouple schema alignment from execution logic. We release MultiSourceBench, the first benchmark for hybrid database/API querying, and show that our approach outperforms state-of-the-art LLM agents and code-generation methods with a 23.6% average accuracy gain and superior robustness. All code and data are open-sourced.
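To make the idea concrete, here is a minimal sketch of what a declarative query plan mixing a SQL step, an API step, and a fusion step could look like, interpreted by a tiny executor. Everything here is invented for illustration: the step vocabulary (`sql`, `api`, `fuse`), the `fake_weather_api` stub standing in for a REST call, and the toy schema are assumptions, not the paper's actual plan format.

```python
import sqlite3

def fake_weather_api(city):
    # Stand-in for a RESTful API call (hypothetical endpoint).
    return {"Paris": 18, "Tokyo": 22}.get(city)

# A declarative plan: data dependencies between steps are explicit,
# so the executor (not generated imperative code) orchestrates sources.
PLAN = [
    {"op": "sql", "out": "cities",
     "query": "SELECT name, population FROM cities WHERE population > 1000000"},
    {"op": "api", "out": "temps", "input": "cities",
     "call": lambda row: {"name": row[0], "temp_c": fake_weather_api(row[0])}},
    {"op": "fuse", "inputs": ["cities", "temps"], "on": "name"},
]

def execute_plan(plan, conn):
    env = {}
    for step in plan:
        if step["op"] == "sql":
            env[step["out"]] = conn.execute(step["query"]).fetchall()
        elif step["op"] == "api":
            env[step["out"]] = [step["call"](row) for row in env[step["input"]]]
        elif step["op"] == "fuse":
            sql_rows, api_rows = (env[k] for k in step["inputs"])
            by_key = {r["name"]: r for r in api_rows}
            env["result"] = [
                {"name": name, "population": pop,
                 "temp_c": by_key[name]["temp_c"]}
                for name, pop in sql_rows
            ]
    return env["result"]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cities (name TEXT, population INTEGER)")
conn.executemany("INSERT INTO cities VALUES (?, ?)",
                 [("Paris", 2148000), ("Tokyo", 13960000), ("Reims", 182000)])
result = execute_plan(PLAN, conn)
```

Because the plan is data rather than code, each step can be inspected and validated before execution, which is what makes this style of abstraction "verifiable" in a way opaque agent traces are not.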
📝 Abstract
In many industrial settings, users wish to ask questions in natural language whose answers require assembling information from diverse structured data sources. With the advent of Large Language Models (LLMs), applications can now translate natural language questions into a set of API or database calls, execute them, and combine the results into an appropriate natural language response. However, these applications remain impractical in realistic industrial settings because they do not cope with the data source heterogeneity that typifies such environments. In this work, we simulate the heterogeneity of real industry settings by introducing two extensions of the popular Spider benchmark dataset that require a combination of database and API calls. Then, we introduce a declarative approach to handling such data heterogeneity and demonstrate that it copes with data source heterogeneity significantly better than state-of-the-art LLM-based agentic or imperative code generation systems. Our augmented benchmarks are available to the research community.