Lost in the Pipeline: How Well Do Large Language Models Handle Data Preparation?

📅 2025-11-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing automated data preparation tools lack robust semantic understanding and struggle with complex, context-dependent data quality issues. Method: This study investigates the efficacy of large language models (LLMs) in data profiling and cleaning on low-quality datasets. We propose a customized data quality assessment framework informed by a practitioner-focused user study, and systematically evaluate both general-purpose and fine-tuned table-centric LLMs—via prompt engineering—on tasks including anomaly detection, cleaning logic generation, and error repair, benchmarking against traditional tools (e.g., Trifacta, OpenRefine). Contribution/Results: LLMs significantly outperform conventional tools in contextual reasoning and generating interpretable, human-verifiable cleaning rules; however, their output precision and deterministic verifiability remain limited. This work establishes the first evaluation paradigm specifically designed for LLMs in data preparation and empirically validates their viability—and practical boundaries—as collaborative “data engineering partners.”

📝 Abstract
Large language models have recently demonstrated exceptional capabilities in supporting and automating various tasks. Among the tasks worth exploring to test these capabilities, we considered data preparation, a critical yet often labor-intensive step in data-driven processes. This paper investigates whether large language models can effectively support users in selecting and automating data preparation tasks. To this end, we considered both general-purpose and fine-tuned tabular large language models. We prompted these models with poor-quality datasets and measured their ability to perform tasks such as data profiling and cleaning. We also compared the support provided by large language models with that offered by traditional data preparation tools. To evaluate the capabilities of large language models, we developed a custom-designed quality model, validated through a user study, to gain insights into practitioners' expectations.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' effectiveness in data preparation tasks
Comparing LLM support with traditional data preparation tools
Assessing LLMs' ability in data profiling and cleaning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluating LLMs for data profiling and cleaning tasks
Comparing LLM performance with traditional data preparation tools
Developing a custom quality model validated through user study
Matteo Spreafico
Politecnico di Milano, Milan, Italy
Ludovica Tassini
Politecnico di Milano, Milan, Italy
Camilla Sancricca
Politecnico di Milano, Milan, Italy
Cinzia Cappiello
Dipartimento di Elettronica, Informazione e Bioingegneria - Politecnico di Milano
Information Systems Engineering · Data Quality · Data Management · Data Preparation · Data-Centric AI