Text2GQL-Bench: A Text to Graph Query Language Benchmark [Experiment, Analysis & Benchmark]

📅 2026-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the absence of a high-quality, multi-domain benchmark supporting diverse graph query languages (GQLs) in current text-to-GQL research. To bridge this gap, the authors construct the first multi-GQL dataset spanning 13 domains and comprising 178,184 question–query pairs, accompanied by a scalable data generation framework and a multi-dimensional evaluation protocol assessing syntactic correctness, semantic alignment, and execution accuracy. Experimental results reveal a significant dialect gap for large language models in generating ISO-compliant GQL: under zero-shot settings, the strongest model achieves only 4% execution accuracy, which improves to approximately 50% with 3-shot prompting. Notably, a fine-tuned 8B open-source model attains 45.1% execution accuracy and 90.8% syntactic validity, highlighting both the challenges and potential of domain adaptation in text-to-GQL tasks.

📝 Abstract
Graph models are fundamental to data analysis in domains rich with complex relationships. Text-to-Graph-Query-Language (Text-to-GQL) systems act as translators, converting natural language into executable graph queries. This capability allows Large Language Models (LLMs) to directly analyze and manipulate graph data, positioning them as powerful agent infrastructures for Graph Database Management Systems (GDBMSs). Despite recent progress, existing datasets are often limited in domain coverage, supported graph query languages, or evaluation scope. The advancement of Text-to-GQL systems is hindered by the lack of high-quality benchmark datasets and evaluation methods that systematically compare model capabilities across different graph query languages and domains. In this work, we present Text2GQL-Bench, a unified Text-to-GQL benchmark designed to address these limitations. Text2GQL-Bench couples a multi-GQL dataset of 178,184 (Question, Query) pairs spanning 13 domains with a scalable construction framework that generates datasets across domains, question abstraction levels, and GQLs from heterogeneous resources. To support comprehensive assessment, we introduce an evaluation method that goes beyond a single end-to-end metric by jointly reporting grammatical validity, similarity, semantic alignment, and execution accuracy. Our evaluation uncovers a stark dialect gap in ISO-GQL generation: even strong LLMs achieve at most 4% execution accuracy (EX) in zero-shot settings; although a fixed 3-shot prompt raises EX to around 50%, grammatical validity remains below 70%. Moreover, a fine-tuned 8B open-weight model reaches 45.1% EX and 90.8% grammatical validity, demonstrating that most of the performance jump is unlocked by exposure to sufficient ISO-GQL examples.
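The abstract's two headline metrics, grammatical validity and execution accuracy, can be illustrated with a minimal sketch. This is not the paper's evaluation harness: the function names (`parses`, `execute`) and the toy "grammar" and "database" below are hypothetical stand-ins for a real ISO-GQL parser and graph database client.

```python
# Hedged sketch of two of the benchmark's metrics; names are illustrative,
# not taken from the paper's code.
from dataclasses import dataclass


@dataclass
class EvalResult:
    grammatical_validity: float  # fraction of predictions that parse
    execution_accuracy: float    # fraction whose execution result matches gold


def evaluate(pairs, parses, execute):
    """pairs: list of (predicted_query, gold_query) strings.
    parses(q) -> bool          (stand-in for a GQL grammar check)
    execute(q) -> result or None on failure (stand-in for running the query)
    """
    valid = exact = 0
    for pred, gold in pairs:
        if parses(pred):
            valid += 1
        pred_result = execute(pred)
        # Execution accuracy: prediction must run and return the gold result.
        if pred_result is not None and pred_result == execute(gold):
            exact += 1
    n = len(pairs)
    return EvalResult(valid / n, exact / n)


if __name__ == "__main__":
    # Toy demo: one well-formed prediction, one garbled one.
    demo = [
        ("MATCH (n) RETURN n", "MATCH (n) RETURN n"),
        ("MTCH (n RETURN", "MATCH (n) RETURN n"),
    ]
    parses = lambda q: q.startswith("MATCH")
    execute = lambda q: {"n"} if q.startswith("MATCH") else None
    r = evaluate(demo, parses, execute)
    print(r.grammatical_validity, r.execution_accuracy)  # 0.5 0.5
```

Reporting the metrics separately, as the benchmark does, distinguishes models that produce syntactically broken queries from models that produce well-formed but semantically wrong ones.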
Problem

Research questions and friction points this paper is trying to address.

Text-to-GQL
benchmark
graph query language
evaluation
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Text-to-GQL
Graph Query Language
Benchmark
LLM Evaluation
ISO-GQL