SyntheT2C: Generating Synthetic Data for Fine-Tuning Large Language Models on the Text2Cypher Task

📅 2024-06-15
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the severe scarcity of high-quality natural language–Cypher query pairs for the Text2Cypher task, which limits large language model (LLM) performance, this paper proposes a dual-path synthetic data generation paradigm: LLM-based prompt engineering combined with syntax-constrained, controllable template filling, jointly ensuring semantic diversity and Cypher syntactic correctness. The approach incorporates domain-knowledge-guided data distillation, structured template modeling, and explicit Cypher grammar constraints, and constructs MedT2C, a high-quality, medical-domain-specific synthetic dataset for Text2Cypher. Experiments demonstrate that the approach significantly improves accuracy across multiple LLMs (up to +23.7%), and that supervised fine-tuning yields a 41% reduction in hallucination. Both the source code and the MedT2C dataset are publicly released.

📝 Abstract
Integrating Large Language Models (LLMs) with existing Knowledge Graph (KG) databases presents a promising avenue for enhancing LLMs' efficacy and mitigating their "hallucinations". Given that most KGs reside in graph databases accessible solely through specialized query languages (e.g., Cypher), it is critical to connect LLMs with KG databases by automating the translation of natural language into Cypher queries (termed the "Text2Cypher" task). Prior efforts tried to bolster LLMs' proficiency in Cypher generation through Supervised Fine-Tuning (SFT). However, these explorations are hindered by the lack of annotated datasets of Query-Cypher pairs, resulting from the labor-intensive and domain-specific nature of such annotation. In this study, we propose SyntheT2C, a methodology for constructing a synthetic Query-Cypher pair dataset, comprising two distinct pipelines: (1) LLM-based prompting and (2) template-filling. SyntheT2C is applied to two medical KG databases, culminating in the creation of a synthetic dataset, MedT2C. Comprehensive experiments demonstrate that the MedT2C dataset effectively enhances the performance of backbone LLMs on the Text2Cypher task via SFT. Both the SyntheT2C codebase and the MedT2C dataset are released at https://github.com/ZGChung/SyntheT2C.
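To make the second pipeline concrete, here is a minimal, hypothetical sketch of template-filling for synthetic Query-Cypher pair generation. The schema, template, and value names below are illustrative assumptions, not taken from the paper or the MedT2C dataset; the idea is simply that filling a syntactically valid Cypher skeleton and its paired question template from the same sampled slots guarantees the query is well-formed by construction.

```python
import random

# Toy schema standing in for a medical KG (illustrative names, not from the paper).
RELATIONS = [("Drug", "TREATS", "Disease")]
EXAMPLE_VALUES = {"Disease": ["asthma", "diabetes"], "Drug": ["aspirin"]}

# Each template pairs a natural-language question with a Cypher skeleton;
# both are filled from the same sampled slots, so they stay aligned.
TEMPLATES = [
    (
        "Which {src_l} nodes are connected to the {dst_l} named '{value}'?",
        "MATCH (s:{src})-[:{rel}]->(d:{dst}) WHERE d.name = '{value}' RETURN s",
    ),
]

def fill_templates(n_pairs=4, seed=0):
    """Generate synthetic (question, cypher) pairs by filling templates."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_pairs):
        q_tpl, c_tpl = rng.choice(TEMPLATES)
        src, rel, dst = rng.choice(RELATIONS)
        value = rng.choice(EXAMPLE_VALUES[dst])
        question = q_tpl.format(src_l=src.lower(), dst_l=dst.lower(), value=value)
        cypher = c_tpl.format(src=src, rel=rel, dst=dst, value=value)
        pairs.append({"question": question, "cypher": cypher})
    return pairs

if __name__ == "__main__":
    for pair in fill_templates():
        print(pair["question"], "->", pair["cypher"])
```

In the paper's framing, such template-derived pairs complement the LLM-prompting pipeline: templates ensure syntactic correctness, while LLM prompting contributes semantic diversity.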
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Text2Cypher Task
Lack of Annotated Data
Innovation

Methods, ideas, or system contributions that make the work stand out.

SyntheT2C
Text2Cypher
Medical Databases
👥 Authors
Zijie Zhong, Shanghai AI Laboratory
Linqing Zhong, Beihang University
Zhaoze Sun, Beihang University
Qingyun Jin, Beihang University
Zengchang Qin, Beihang University (Machine Learning, Multimedia Retrieval, Collective Intelligence, Uncertainty Modeling for Data)
Xiaofan Zhang, Shanghai AI Laboratory