Automatic End-to-End Data Integration using Large Language Models

📅 2026-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes the first fully large language model–driven, end-to-end data integration framework that eliminates the need for manual configuration, which traditionally incurs high costs and low efficiency. The system autonomously generates a complete integration pipeline encompassing schema mapping, value normalization, entity matching, and conflict resolution without human intervention. Evaluated on three real-world domains—gaming, music, and enterprise data—the GPT-5.2–based framework achieves integration performance comparable to or surpassing that of handcrafted systems. Notably, it accomplishes this at a remarkably low cost of approximately $10 per execution, substantially reducing human labor and operational overhead.

📝 Abstract
Designing data integration pipelines typically requires substantial manual effort from data engineers to configure pipeline components and label training data. While LLMs have shown promise in handling individual steps of the integration process, their potential to replace all human input across end-to-end data integration pipelines has not been investigated. As a step toward exploring this potential, we present an automatic data integration pipeline that uses GPT-5.2 to generate all artifacts required to adapt the pipeline to specific use cases. These artifacts are schema mappings, value mappings for data normalization, training data for entity matching, and validation data for selecting conflict resolution heuristics in data fusion. We compare the performance of this LLM-based pipeline to the performance of human-designed pipelines across three case studies requiring the integration of video game, music, and company-related data. Our experiments show that the LLM-based pipeline produces results similar to, and for some tasks better than, those of the human-designed pipelines. End-to-end, the human and the LLM pipelines produce integrated datasets of comparable size and density. Having the LLM configure the pipelines costs approximately \$10 per case study, which represents only a small fraction of the cost of having human data engineers perform the same tasks.
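To illustrate the artifact-generation idea described in the abstract, the sketch below shows what the schema-mapping step might look like: the LLM is prompted to emit a source-to-target column mapping as JSON, which the pipeline then applies mechanically. This is a minimal illustration, not the paper's implementation; `call_llm`, the prompt wording, and the canned response are all assumptions standing in for a real GPT-5.2 API call.

```python
import json

def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM API call (e.g., a chat-completion request).
    # Here we return a canned mapping so the sketch is self-contained.
    return json.dumps({"game_title": "title", "release_dt": "release_date"})

def generate_schema_mapping(source_cols: list, target_cols: list) -> dict:
    # Ask the LLM for a JSON object mapping each source column to the
    # best-matching target column; parse its answer into a dict.
    prompt = (
        "Map each source column to the best-matching target column.\n"
        f"Source columns: {source_cols}\n"
        f"Target columns: {target_cols}\n"
        "Answer with a single JSON object."
    )
    return json.loads(call_llm(prompt))

def apply_mapping(record: dict, mapping: dict) -> dict:
    # Rename the keys of one source record into the target schema,
    # leaving unmapped columns untouched.
    return {mapping.get(col, col): value for col, value in record.items()}

mapping = generate_schema_mapping(
    ["game_title", "release_dt"], ["title", "release_date"]
)
row = apply_mapping({"game_title": "Tetris", "release_dt": "1984"}, mapping)
```

The same generate-then-apply pattern extends to the other artifacts the abstract lists (value mappings, entity-matching training data, fusion validation data): the LLM call is confined to producing a small configuration object, and the pipeline code consumes it deterministically.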
Problem

Research questions and friction points this paper is trying to address.

data integration
end-to-end automation
large language models
pipeline configuration
human effort reduction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
End-to-End Data Integration
Automatic Pipeline Configuration
Schema Mapping
Entity Matching