🤖 AI Summary
Reinforcement learning (RL) for Text2SQL is prone to reward hacking—where policies exploit spurious correlations in sparse or noisy reward signals (e.g., execution accuracy), leading to syntactically invalid or semantically inconsistent SQL outputs.
Method: We propose a theoretically grounded constrained RL framework that dynamically balances reward signals (e.g., execution feedback) against multi-source hard constraints—including syntactic validity, semantic consistency with the natural language query, and program executability. Built upon advanced RL algorithms such as GRPO and DAPO, our method adaptively optimizes the policy while enforcing constraint satisfaction via Lagrangian relaxation and dual ascent, thereby intrinsically mitigating reward hacking.
Contribution/Results: Evaluated on multiple mainstream Text2SQL benchmarks (e.g., Spider, CoSQL), our approach achieves state-of-the-art performance in both logical form accuracy and execution success rate. It further enhances model generalization and robustness against distributional shifts and adversarial perturbations, without requiring task-specific heuristics or post-hoc filtering.
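The constraint-handling mechanism from the Method paragraph can be sketched as follows. This is a minimal illustration of Lagrangian relaxation with dual ascent, not the paper's actual training code: the reward is penalized by multiplier-weighted constraint violations, and each multiplier is raised when its constraint (e.g., syntactic validity, semantic consistency, executability) is violated beyond a tolerated rate. All thresholds, step sizes, and batch statistics below are illustrative assumptions.

```python
# Sketch: Lagrangian-relaxed reward shaping with dual ascent on the
# constraint multipliers. Hyperparameters and statistics are assumptions
# for illustration, not values from the paper.

def lagrangian_reward(task_reward, violations, lambdas):
    """Shaped reward: task reward minus multiplier-weighted violations."""
    return task_reward - sum(l * v for l, v in zip(lambdas, violations))

def dual_ascent_step(lambdas, avg_violations, thresholds, lr=0.05):
    """Raise each multiplier when its average violation exceeds the
    tolerated threshold, lower it otherwise; project onto lambda >= 0."""
    return [max(0.0, l + lr * (v - t))
            for l, v, t in zip(lambdas, avg_violations, thresholds)]

# Toy loop with three constraints: syntax validity, semantic
# consistency, executability (tolerated violation rates are assumed).
lambdas = [0.0, 0.0, 0.0]
thresholds = [0.02, 0.05, 0.02]
for step in range(100):
    # Stand-ins for per-batch violation rates a real trainer would measure.
    avg_violations = [0.10, 0.04, 0.08]
    lambdas = dual_ascent_step(lambdas, avg_violations, thresholds)
```

In this toy run the syntax and executability multipliers grow (their violation rates exceed the thresholds) while the semantic-consistency multiplier stays at zero, so the shaped reward automatically pressures the policy toward the constraints it currently violates — the self-balancing behavior the framework relies on.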
📝 Abstract
Reinforcement learning (RL) has demonstrated significant promise in enhancing the reasoning capabilities of Text2SQL LLMs, especially with advanced algorithms such as GRPO and DAPO. However, the performance of these methods is highly sensitive to the design of reward functions. Inappropriate rewards can lead to reward hacking, where models exploit loopholes in the reward structure to achieve high scores without genuinely solving the task. This work proposes a constrained RL framework for Text2SQL that incorporates natural and interpretable reward and constraint signals, while dynamically balancing the trade-offs among them during training. We establish theoretical guarantees for our constrained RL framework, and our numerical experiments on well-known Text2SQL datasets substantiate the improvement of our approach over state-of-the-art RL-trained LLMs.