STARQA: A Question Answering Dataset for Complex Analytical Reasoning over Structured Databases

📅 2025-09-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing text-to-SQL approaches struggle with database question answering that requires complex analytical reasoning, such as aggregation, time-series analysis, or contextual understanding, because mainstream benchmarks are low in complexity and lack evaluation protocols for higher-order reasoning. Method: The paper proposes STARQA, the first benchmark dataset explicitly designed for complex analytical reasoning over structured databases, and introduces Text2SQLCode, a two-stage framework that decouples query processing into SQL-based data extraction and Python-based logical reasoning, sidestepping the expressivity and reasoning limitations of end-to-end SQL generation. Contribution/Results: By combining large language models, semantic parsing, SQL generation, and Python program synthesis, Text2SQLCode decomposes analytical tasks across the two languages. Experiments show that SQL+Python collaboration substantially outperforms monolithic SQL generation, yet state-of-the-art LLMs still exhibit significant performance gaps on STARQA, indicating substantial room for improvement.

📝 Abstract
Semantic parsing methods for converting text to SQL queries enable question answering over structured data and can greatly benefit analysts who routinely perform complex analytics on vast data stored in specialized relational databases. Although several benchmarks measure text-to-SQL abilities, the complexity of their questions is inherently limited by the expressiveness of query languages, and none focus explicitly on questions involving complex analytical reasoning that require operations such as calculations over aggregate analytics, time-series analysis, or scenario understanding. In this paper, we introduce STARQA, the first public human-created dataset of complex analytical reasoning questions and answers over three specialized-domain databases. In addition to generating SQL directly with LLMs, we evaluate a novel approach (Text2SQLCode) that decomposes the task into a combination of SQL and Python: SQL is responsible for data fetching, and Python more naturally performs the reasoning. Our results demonstrate that identifying and combining the abilities of SQL and Python is beneficial compared to using SQL alone, yet the dataset still remains quite challenging for existing state-of-the-art LLMs.
Problem

Research questions and friction points this paper is trying to address.

Addresses complex analytical reasoning questions over structured databases
Focuses on operations like aggregate analytics and time series analysis
Evaluates combining SQL and Python for complex reasoning tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dataset for complex analytical reasoning questions
Decomposes task into SQL and Python combination
SQL fetches data while Python performs reasoning
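The split described above can be illustrated with a minimal sketch. This is not the authors' Text2SQLCode implementation; the toy schema and question are hypothetical, chosen only to show why a generated SQL query handles data fetching while Python handles the analytical step (here, a month-over-month comparison that is awkward to express in a single SQL query).

```python
import sqlite3

# Hypothetical toy database: monthly sales totals (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (month TEXT, total REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("2024-01", 100.0), ("2024-02", 120.0), ("2024-03", 90.0)],
)

# Stage 1 (SQL): fetch the ordered series the question needs.
rows = conn.execute("SELECT month, total FROM sales ORDER BY month").fetchall()

# Stage 2 (Python): reason over the fetched data, e.g. answering
# "which month saw the largest month-over-month drop in sales?"
drops = [(cur[0], prev[1] - cur[1]) for prev, cur in zip(rows, rows[1:])]
worst_month, worst_drop = max(drops, key=lambda d: d[1])
print(worst_month, worst_drop)  # 2024-03 30.0
```

In a Text2SQLCode-style pipeline, an LLM would generate both the SQL string and the Python reasoning code; the point of the decomposition is that each language does the part it expresses most naturally.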