HEXGEN-TEXT2SQL: Optimizing LLM Inference Request Scheduling for Agentic Text-to-SQL Workflow

๐Ÿ“… 2025-05-08
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Existing LLM serving frameworks struggle to jointly handle task dependencies, dynamic latency fluctuations, and GPU hardware heterogeneity in production-grade, multi-stage, low-latency Text-to-SQL systemsโ€”leading to frequent SLO violations and inefficient scheduling. Method: We propose a hierarchical scheduling architecture comprising global load-balanced request dispatching and local adaptive urgency-aware scheduling. We introduce the first simulation-driven lightweight online hyperparameter optimization method, integrated with workflow feature modeling, multi-stage dependency graph management, and heterogeneous-GPU-aware dynamic priority assignment. Results: Evaluated on a realistic Text-to-SQL benchmark, our system achieves 1.41ร— higher average SLO compliance rate (up to 1.67ร—) and 1.65ร— higher throughput (up to 1.75ร—) compared to vLLM, significantly reducing SLO violations.
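The two-level design in the summary (global load-balanced dispatching plus local urgency-aware prioritization) can be sketched minimally as follows. This is an illustrative assumption, not the paper's code: the `Request` fields, the slack-based urgency formula, and the `dispatch`/`LocalQueue` names are all hypothetical.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Request:
    priority: float                          # set from urgency at enqueue time
    stage: str = field(compare=False)        # e.g. "schema_link", "sql_gen"
    deadline: float = field(compare=False)   # absolute SLO deadline (seconds)
    est_cost: float = field(compare=False)   # estimated inference time (seconds)

def urgency(req: Request, now: float) -> float:
    """Deadline slack: time remaining minus estimated cost.
    Smaller slack means the request is more urgent."""
    return (req.deadline - now) - req.est_cost

def dispatch(req: Request, gpu_loads: dict[str, float]) -> str:
    """Global step: route the request to the GPU group with the least
    projected load after absorbing this request's estimated cost."""
    return min(gpu_loads, key=lambda g: gpu_loads[g] + req.est_cost)

class LocalQueue:
    """Local step: a per-GPU-group priority queue ordered by slack,
    so requests closest to missing their SLO are served first."""
    def __init__(self) -> None:
        self._heap: list[Request] = []

    def push(self, req: Request, now: float) -> None:
        req.priority = urgency(req, now)
        heapq.heappush(self._heap, req)

    def pop(self) -> Request:
        return heapq.heappop(self._heap)
```

For example, with two pools at loads `{"A100": 0.9, "A6000": 0.1}`, `dispatch` sends the next request to `"A6000"`, and a request whose deadline is 2 s away is popped before one whose deadline is 5 s away.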

๐Ÿ“ Abstract
Recent advances in agentic uses of large language models (LLMs) have significantly enhanced Text-to-SQL capabilities, enabling users without specialized database expertise to query data intuitively. However, deploying these agentic LLM-based Text-to-SQL systems in production poses substantial challenges due to their inherently multi-stage workflows, stringent latency constraints, and potentially heterogeneous GPU infrastructure in enterprise environments. Current LLM serving frameworks lack effective mechanisms for handling interdependent inference tasks, dynamic latency variability, and resource heterogeneity, leading to suboptimal performance and frequent service-level objective (SLO) violations. In this paper, we introduce HEXGEN-TEXT2SQL, a novel framework designed explicitly to schedule and execute agentic multi-stage LLM-based Text-to-SQL workflows on heterogeneous GPU clusters serving multi-tenant end-to-end queries. HEXGEN-TEXT2SQL introduces a hierarchical scheduling approach that combines global workload-balanced task dispatching with local adaptive urgency-guided prioritization, guided by a systematic analysis of agentic Text-to-SQL workflows. Additionally, we propose a lightweight simulation-based method for tuning critical scheduling hyperparameters, further enhancing robustness and adaptability. Our extensive evaluation on realistic Text-to-SQL benchmarks demonstrates that HEXGEN-TEXT2SQL significantly outperforms state-of-the-art LLM serving frameworks: it improves latency SLO attainment by up to 1.67× (average: 1.41×) and system throughput by up to 1.75× (average: 1.65×) compared to vLLM under diverse, realistic workload conditions. Our code is available at https://github.com/Relaxed-System-Lab/Hexgen-Flow.
Problem

Research questions and friction points this paper is trying to address.

Optimizing LLM inference scheduling for multi-stage Text-to-SQL workflows
Addressing latency constraints and GPU heterogeneity in agentic systems
Improving performance and SLO compliance in LLM-based SQL query systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical scheduling for multi-stage LLM workflows
Lightweight simulation-based hyperparameter tuning
Optimized task dispatching on heterogeneous GPU clusters
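The "lightweight simulation-based hyperparameter tuning" idea can be illustrated with a toy sketch: replay recent traffic through a fast simulator for each candidate hyperparameter value and keep the one with the best SLO attainment. The single-worker simulator, the urgency-weight parameter, and the candidate grid below are all illustrative assumptions, not the paper's implementation.

```python
def simulate(urgency_weight: float,
             workload: list[tuple[float, float]]) -> float:
    """Toy discrete-event simulator (hypothetical): serve requests on one
    worker in order of a weighted urgency score, and return the fraction
    that finish before their deadline. `workload` is a list of
    (service_time, deadline) pairs."""
    order = sorted(workload, key=lambda w: w[1] - urgency_weight * w[0])
    clock, met = 0.0, 0
    for service, deadline in order:
        clock += service
        if clock <= deadline:
            met += 1
    return met / len(workload)

def tune(workload: list[tuple[float, float]],
         candidates: tuple[float, ...] = (0.0, 0.5, 1.0, 2.0)) -> float:
    """Lightweight online tuning: evaluate each candidate hyperparameter
    against the simulated replay and return the best one."""
    return max(candidates, key=lambda w: simulate(w, workload))
```

Because the simulator is cheap relative to real inference, this search can be rerun online as the workload mix shifts, which is the adaptivity the summary describes.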
๐Ÿ”Ž Similar Papers
No similar papers found.
Authors
You Peng (Dow Inc)
Youhe Jiang (Hong Kong University of Science and Technology)
Chen Wang (Tsinghua University)
Binhang Yuan (Hong Kong University of Science and Technology)