🤖 AI Summary
Existing automated data preparation tools lack robust semantic understanding and struggle with complex, context-dependent data quality issues. Method: This study investigates the efficacy of large language models (LLMs) in data profiling and cleaning on low-quality datasets. We propose a customized data quality assessment framework informed by a practitioner-focused user study, and systematically evaluate both general-purpose and fine-tuned table-centric LLMs—via prompt engineering—on tasks including anomaly detection, cleaning logic generation, and error repair, benchmarking against traditional tools (e.g., Trifacta, OpenRefine). Contribution/Results: LLMs significantly outperform conventional tools in contextual reasoning and generating interpretable, human-verifiable cleaning rules; however, their output precision and deterministic verifiability remain limited. This work establishes the first evaluation paradigm specifically designed for LLMs in data preparation and empirically validates their viability—and practical boundaries—as collaborative “data engineering partners.”
📝 Abstract
Large language models have recently demonstrated exceptional capabilities in supporting and automating a wide range of tasks. Among the tasks worth exploring to test these capabilities, we considered data preparation, a critical yet often labor-intensive step in data-driven processes. This paper investigates whether large language models can effectively support users in selecting and automating data preparation tasks. To this end, we considered both general-purpose and fine-tuned tabular large language models. We prompted these models with poor-quality datasets and measured their ability to perform tasks such as data profiling and cleaning. We also compared the support provided by large language models with that offered by traditional data preparation tools. To evaluate the capabilities of large language models, we developed a custom-designed quality model, validated through a user study, to gain insights into practitioners' expectations.
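As a rough illustration of the prompting setup described in the abstract, a profiling request could be assembled as below. This is a minimal sketch, not the paper's actual prompt: the helper name `build_profiling_prompt`, the task wording, and the sample rows are all hypothetical.

```python
import json

def build_profiling_prompt(rows, task="anomaly detection"):
    """Serialize a small table as JSON records and wrap it in a task
    instruction. Hypothetical helper -- the study's exact prompts and
    model interfaces are not reproduced here."""
    header = f"You are a data preparation assistant. Task: {task}.\n"
    body = "Table (JSON records):\n" + json.dumps(rows, indent=2)
    footer = "\nList each suspect cell as (row_index, column, reason)."
    return header + body + footer

# A deliberately poor-quality sample: a misspelled city and a negative age.
rows = [
    {"name": "Alice", "age": 34, "city": "Boston"},
    {"name": "Bob", "age": -5, "city": "Bostn"},
]
prompt = build_profiling_prompt(rows)
# The prompt would then be sent to a general-purpose or table-tuned LLM
# via its chat API, and the response parsed into candidate repairs.
```

A traditional tool would instead require the user to define explicit rules (e.g., a dictionary lookup for city names and a range check on age); the contrast between rule authoring and free-form prompting is what the study's evaluation measures.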