🤖 AI Summary
This paper addresses common data leakage issues in Jupyter-based machine learning pipelines, including Overlap (train-test contamination), Preprocessing, and Multi-test leakage, by proposing a detection-and-repair framework that combines static code analysis with large language model (LLM) assistance. Methodologically, it introduces detection algorithms for multiple leakage patterns and implements them in LeakageDetector, a lightweight VS Code extension that performs real-time static analysis; it further pairs manual quick fixes with LLM-driven, context-aware repair suggestions. Experiments demonstrate high accuracy in identifying common leakage patterns, and the interactive guidance significantly reduces faulty fixes while improving the correctness and evaluation reliability of ML code. The primary contribution is the first lightweight, extensible, Jupyter-native system for automated data leakage detection and intelligent repair, balancing practical usability with robust automation.
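To make the first of these patterns concrete, the sketch below is a minimal, hypothetical instance of Overlap (train-test contamination): the model is trained on the full dataset, so the held-out "test" rows were already seen during training and the reported score is inflated. The data, model choice, and variable names are illustrative and not taken from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = rng.integers(0, 2, size=100)

# Split is performed, but then ignored during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier().fit(X, y)  # trained on ALL rows, including X_test
print(model.score(X_test, y_test))          # overlap: evaluation score is inflated
```

Fitting on `X_train, y_train` instead of `X, y` removes the overlap, which is the kind of repair the paper's quick-fix mechanism is described as guiding developers toward.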
📝 Abstract
In software development environments, code quality is crucial. This study aims to help Machine Learning (ML) engineers enhance their code by identifying and correcting Data Leakage issues in their models. Data Leakage occurs when information from the test dataset is inadvertently included in the training data while preparing a data science model, resulting in misleading performance evaluations. ML developers must carefully separate their data into training, evaluation, and test sets to avoid introducing Data Leakage into their code. In this paper, we develop a new Visual Studio Code (VS Code) extension, called LeakageDetector, that detects Data Leakage, mainly Overlap, Preprocessing, and Multi-test leakage, in Jupyter Notebook files. Beyond detection, we include two correction mechanisms: a conventional approach, known as a quick fix, which corrects the leakage manually, and an LLM-driven approach that guides ML developers toward best practices for building ML pipelines.
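For readers unfamiliar with the Preprocessing pattern named above, the sketch below shows a leaky scikit-learn pipeline and its conventional repair: split first, then fit the preprocessor on the training data only. This is a minimal illustration under assumed data and variable names, not code from the paper or the extension.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, size=200)

# --- Leaky version: the scaler is fit on ALL rows before the split,
# so test-set statistics leak into the training-data transformation.
X_scaled = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, random_state=0)

# --- Corrected version: split first, fit preprocessing on training rows only.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)                # statistics from train only
model = LogisticRegression().fit(scaler.transform(X_train), y_train)
print(model.score(scaler.transform(X_test), y_test))  # unbiased evaluation
```

Wrapping the scaler and model in a `sklearn.pipeline.Pipeline` achieves the same separation automatically and is a common best-practice repair for this pattern.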