LeakageDetector 2.0: Analyzing Data Leakage in Jupyter-Driven Machine Learning Pipelines

📅 2025-09-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses prevalent data leakage issues in Jupyter-based machine learning pipelines—including train-test overlap, preprocessing leakage, and multi-test leakage—by proposing a hybrid framework that combines static code analysis with large language model (LLM)-assisted detection and repair. Methodologically, it introduces multi-pattern leakage detection algorithms and implements them in LeakageDetector, a lightweight VS Code extension that performs real-time static analysis; it further pairs manual quick fixes with LLM-driven, context-aware repair suggestions. Experiments demonstrate high accuracy in identifying common leakage patterns, while interactive guidance reduces fix errors and improves the correctness and evaluation reliability of ML code. The primary contribution is a lightweight, extensible, Jupyter-native system for automated data leakage detection and intelligent repair that balances practical usability with robust automation.

📝 Abstract
In software development environments, code quality is crucial. This study aims to assist Machine Learning (ML) engineers in improving their code by identifying and correcting Data Leakage issues in their models. Data Leakage occurs when information from the test dataset is inadvertently included in the training data while preparing a data science model, resulting in misleading performance evaluations. ML developers must carefully separate their data into training, evaluation, and test sets to avoid introducing Data Leakage into their code. In this paper, we develop a new Visual Studio Code (VS Code) extension, called LeakageDetector, that detects Data Leakage—mainly Overlap, Preprocessing, and Multi-test leakage—in Jupyter Notebook files. Beyond detection, we include two correction mechanisms: a conventional approach, known as a quick fix, which fixes the leakage manually, and an LLM-driven approach that guides ML developers toward best practices for building ML pipelines.
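To make the preprocessing-leakage pattern from the abstract concrete, here is a minimal illustrative sketch (not code from the paper or the extension) contrasting a leaky pipeline, where a scaler is fit on the full dataset before splitting, with the leakage-free version that fits on the training split only:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((100, 3))
y = rng.integers(0, 2, 100)

# Leaky: the scaler's mean/std are computed over ALL rows,
# so test-set statistics silently influence the training data.
leaky_scaler = StandardScaler()
X_scaled = leaky_scaler.fit_transform(X)
X_tr_leaky, X_te_leaky, _, _ = train_test_split(X_scaled, y, random_state=0)

# Leakage-free: split first, then fit the scaler on the training split only.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)   # statistics come from training data alone
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)
```

The two variants produce different scaled values precisely because the leaky scaler has "seen" the test rows; that difference is what inflates evaluation scores.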
Problem

Research questions and friction points this paper is trying to address.

Detecting data leakage in Jupyter ML pipelines
Identifying overlap, preprocessing, and multi-test leakage
Providing correction mechanisms for leakage issues
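Overlap leakage, the first pattern listed above, amounts to the same rows appearing in both splits. A hypothetical helper (an assumed illustration, not the extension's actual detection algorithm) can surface such rows with an inner join:

```python
import pandas as pd

def overlapping_rows(train: pd.DataFrame, test: pd.DataFrame) -> pd.DataFrame:
    """Return rows present in both splits — a symptom of overlap leakage."""
    # merge() with how="inner" joins on all shared columns by default
    return train.merge(test, how="inner")

train = pd.DataFrame({"x": [1, 2, 3], "y": [0, 1, 0]})
test = pd.DataFrame({"x": [3, 4], "y": [0, 1]})
overlap = overlapping_rows(train, test)  # the row (x=3, y=0) is in both splits
```

A static analyzer cannot run this check at edit time, which is why the paper instead flags the code patterns (e.g. splitting after concatenation) that tend to produce such overlaps.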
Innovation

Methods, ideas, or system contributions that make the work stand out.

VS Code extension for leakage detection
Identifies overlap, preprocessing, multi-test leaks
Offers manual and LLM-driven correction mechanisms
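The best practice the corrections steer toward is bundling preprocessing and model into a single pipeline, so fitting happens only on training folds. A minimal sklearn sketch of that target shape (illustrative, not a fix emitted by the tool):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The pipeline refits the scaler inside every CV fold, so no
# held-out-fold statistics ever reach the training step.
pipe = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(pipe, X_train, y_train, cv=5)

pipe.fit(X_train, y_train)
test_acc = pipe.score(X_test, y_test)  # the test set is touched exactly once
```

Evaluating the test set a single time, after model selection is finished, is also what avoids the multi-test leakage pattern.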
Owen Truong
Stevens Institute of Technology, Hoboken, New Jersey, USA
Terrence Zhang
Stevens Institute of Technology, Hoboken, New Jersey, USA
Arnav Marchareddy
Stevens Institute of Technology, Hoboken, New Jersey, USA
Ryan Lee
Stevens Institute of Technology, Hoboken, New Jersey, USA
Jeffery Busold
Stevens Institute of Technology, Hoboken, New Jersey, USA
Michael Socas
Stevens Institute of Technology, Hoboken, New Jersey, USA
Eman Abdullah AlOmar
Stevens Institute of Technology
Software Engineering · Software Quality · Refactoring · Artificial Intelligence · Large Language Models