MTSQL-R1: Towards Long-Horizon Multi-Turn Text-to-SQL via Agentic Training

📅 2025-10-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing approaches simplify multi-turn Text-to-SQL into single-turn translation, neglecting execution validation and dialogue state consistency, which leads to unexecutable or semantically incoherent SQL queries. This work proposes an agent-based training framework that models multi-turn semantic parsing as a Markov decision process, integrating database execution feedback and persistent dialogue memory to establish a closed-loop reasoning cycle: "generate → execute → verify → revise." By combining reinforcement learning with environment-driven iterative optimization, the method significantly outperforms strong baselines on CoSQL and SParC, and is the first to achieve long-horizon, verifiable, and coherent conversational SQL generation. Empirical results validate the critical role of execution awareness and memory-guided reasoning in multi-turn semantic parsing.

📝 Abstract
Multi-turn Text-to-SQL aims to translate a user's conversational utterances into executable SQL while preserving dialogue coherence and grounding to the target schema. However, most existing systems regard this task as simple text translation and follow a short-horizon paradigm, generating one query per turn without execution feedback, explicit verification, or refinement, which leads to non-executable or incoherent outputs. We present MTSQL-R1, an agentic training framework for long-horizon multi-turn Text-to-SQL. We cast the task as a Markov Decision Process (MDP) in which an agent interacts with (i) a database for execution feedback and (ii) a persistent dialogue memory for coherence verification, performing an iterative propose → execute → verify → refine cycle until all checks pass. Experiments on CoSQL and SParC demonstrate that MTSQL-R1 consistently outperforms strong baselines, highlighting the importance of environment-driven verification and memory-guided refinement for conversational semantic parsing. Full recipes (including code, trained models, logs, and reasoning trajectories) will be released after internal review to support community research.
Problem

Research questions and friction points this paper is trying to address.

Addresses multi-turn text-to-SQL translation with dialogue coherence
Solves non-executable SQL through execution feedback and iterative refinement
Enhances conversational semantic parsing via environment verification and memory guidance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agentic training framework for multi-turn Text-to-SQL
Iterative execute-verify-refine cycle with database feedback
Persistent dialogue memory for coherence verification
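The propose → execute → verify → refine cycle above can be sketched as a small closed loop against a live database. This is a minimal illustration, not the paper's implementation: the `MEMORY` table of candidate queries is a hypothetical stand-in for the RL-trained generator and dialogue memory, and the verification step is reduced to "the query executes", using SQLite for execution feedback.

```python
import sqlite3

# Hypothetical stand-in for the agent's generator + dialogue memory:
# ordered candidate SQL per utterance, refined on each retry.
MEMORY = {
    "how many singers are there": [
        "SELECT COUNT(*) FROM singerz",   # non-executable: misspelled table
        "SELECT COUNT(*) FROM singer",    # refined candidate
    ],
}

def propose_sql(utterance, attempt):
    """Propose the next candidate query for this turn."""
    candidates = MEMORY[utterance]
    return candidates[min(attempt, len(candidates) - 1)]

def execute_verify_refine(conn, utterance, max_steps=3):
    """One turn of the closed loop: propose -> execute -> verify -> refine."""
    for attempt in range(max_steps):
        sql = propose_sql(utterance, attempt)
        try:
            rows = conn.execute(sql).fetchall()  # database execution feedback
        except sqlite3.Error:
            continue  # execution failed: refine by trying the next candidate
        return sql, rows  # checks pass: accept this query
    return None, []

# Toy schema standing in for a CoSQL/SParC-style database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE singer (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO singer (name) VALUES (?)", [("A",), ("B",)])

sql, rows = execute_verify_refine(conn, "how many singers are there")
```

The first candidate fails with a SQLite error, so the loop "refines" by falling through to the second candidate, which executes and is accepted. The real framework replaces this fallback with an RL-trained model that conditions on the execution error and dialogue memory.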