Multilingual Text-to-SQL: Benchmarking the Limits of Language Models with Collaborative Language Agents

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multilingual Text-to-SQL research is hindered by English-centric benchmarks and a lack of evaluation frameworks capturing real-world linguistic diversity. To address this, we introduce MultiSpider 2.0—the first multilingual Text-to-SQL benchmark covering eight typologically diverse languages while preserving complex SQL constructs (e.g., nested queries, multi-table joins). Building upon it, we propose a collaborative language-agent framework that integrates state-of-the-art LLMs (e.g., DeepSeek-R1, OpenAI o1) to jointly perform cross-lingual semantic alignment and SQL refinement. Experimental results reveal a severe performance gap: mainstream LLMs achieve only 4% execution accuracy on MultiSpider 2.0—dramatically lower than their 60% on English Spider—highlighting a critical bottleneck in multilingual database understanding. Our framework lifts accuracy to 15%, marking the first systematic identification and mitigation of this limitation.

📝 Abstract
Text-to-SQL enables natural access to databases, yet most benchmarks are English-only, limiting multilingual progress. We introduce MultiSpider 2.0, extending Spider 2.0 to eight languages (English, German, French, Spanish, Portuguese, Japanese, Chinese, Vietnamese). It preserves Spider 2.0's structural difficulty while adding linguistic and dialectal variability, demanding deeper reasoning for complex SQL. On this benchmark, state-of-the-art LLMs (such as DeepSeek-R1 and OpenAI o1) reach only 4% execution accuracy when relying on intrinsic reasoning, versus 60% on MultiSpider 1.0. We therefore provide a collaboration-driven language-agent baseline that iteratively refines queries, improving accuracy to 15%. These results reveal a substantial multilingual gap and motivate methods that are robust across languages and ready for real-world enterprise deployment. Our benchmark is available at https://github.com/phkhanhtrinh23/Multilingual_Text_to_SQL.
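The execution-accuracy metric reported above compares the result set of a predicted query against that of the gold query on the target database. A minimal sketch of that check, assuming a SQLite backend and an open connection (the paper's exact evaluation harness is not shown here):

```python
import sqlite3

def execution_match(conn, gold_sql, pred_sql):
    """True if pred_sql executes and returns the same rows as gold_sql.

    Rows are compared as order-insensitive multisets, since most
    questions do not constrain result ordering.
    """
    gold = conn.execute(gold_sql).fetchall()
    try:
        pred = conn.execute(pred_sql).fetchall()
    except sqlite3.Error:
        return False  # a non-executable prediction counts as wrong
    return sorted(gold) == sorted(pred)
```

Execution accuracy over a benchmark is then simply the fraction of examples for which `execution_match` returns True.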
Problem

Research questions and friction points this paper is trying to address.

Addressing multilingual limitations in Text-to-SQL benchmarks beyond English-only datasets
Evaluating large language models' poor performance on complex multilingual SQL queries
Developing collaborative language agents to improve multilingual Text-to-SQL accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extended Spider 2.0 to eight diverse languages
Introduced collaboration-driven language agents for iterative refinement
Improved multilingual SQL accuracy from 4% to 15%
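The iterative-refinement idea can be sketched as a small execute-and-repair loop. The `fix_fn` callback below is a hypothetical stand-in for the collaborating LLM agents (cross-lingual aligner plus SQL refiner); the paper's actual agent prompts and protocol are not reproduced here:

```python
import sqlite3

def refine_sql(conn, question, draft_sql, fix_fn, max_rounds=3):
    """Execute a candidate query; on failure, hand the error back to a
    fixer agent and retry, up to max_rounds refinement steps."""
    sql = draft_sql
    for _ in range(max_rounds):
        try:
            conn.execute(sql)
            return sql  # executable candidate: accept it
        except sqlite3.Error as err:
            # critic/refiner step: the agent sees the question, the
            # failing SQL, and the database error message
            sql = fix_fn(question, sql, str(err))
    return sql  # best effort after the round budget is spent
```

Real systems would also refine queries that execute but return wrong results (e.g. via a verifier agent); this sketch only shows the error-driven loop.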
Authors
Khanh Trinh Pham — Griffith University, Australia
Thu Huong Nguyen — Griffith University, Australia
Jun Jo — Griffith University (satellite data analysis, medical data analysis, ubiquitous robotics, e-learning)
Quoc Viet Hung Nguyen — Griffith University, Australia
Thanh Tam Nguyen — Lecturer, Griffith University (Social Network Mining, Stream Processing, Big Data, Privacy-Preserving ML, Recommender Systems)