LakeMLB: Data Lake Machine Learning Benchmark

📅 2026-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of a standardized benchmark for multi-source, multi-table machine learning in data lake environments. To bridge this gap, we propose LakeMLB, the first benchmark designed specifically for multi-table scenarios in data lakes, focusing on two canonical operations: Union and Join. LakeMLB comprises six real-world heterogeneous datasets spanning government, finance, Wikipedia, and e-commerce domains, and supports three integration strategies: pre-training, data augmentation, and feature augmentation. Through systematic evaluation of state-of-the-art tabular learning methods, the benchmark characterizes their performance in complex data lake settings. All datasets and code are open-sourced, improving reproducibility and comparability in this emerging research area.

📝 Abstract
Modern data lakes have emerged as foundational platforms for large-scale machine learning, enabling flexible storage of heterogeneous data and structured analytics through table-oriented abstractions. Despite their growing importance, standardized benchmarks for evaluating machine learning performance in data lake environments remain scarce. To address this gap, we present LakeMLB (Data Lake Machine Learning Benchmark), designed for the most common multi-source, multi-table scenarios in data lakes. LakeMLB focuses on two representative multi-table scenarios, Union and Join, and provides three real-world datasets for each scenario, covering government open data, finance, Wikipedia, and online marketplaces. The benchmark supports three representative integration strategies: pre-training-based, data augmentation-based, and feature augmentation-based approaches. We conduct extensive experiments with state-of-the-art tabular learning methods, offering insights into their performance under complex data lake scenarios. We release both datasets and code to facilitate rigorous research on machine learning in data lake ecosystems; the benchmark is available at https://github.com/zhengwang100/LakeMLB.
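The two multi-table scenarios the benchmark centers on, Union (stacking rows from sources that share a schema, i.e. data augmentation) and Join (attaching columns from a related table via a key, i.e. feature augmentation), can be sketched in a few lines. The following is a minimal illustration using pandas on toy tables; all column names and values here are hypothetical and not drawn from LakeMLB's datasets.

```python
import pandas as pd

# Union: stack rows from two sources with the same schema
# (data augmentation: more training rows for the same task).
src_a = pd.DataFrame({"id": [1, 2], "price": [10.0, 12.5], "label": [0, 1]})
src_b = pd.DataFrame({"id": [3, 4], "price": [9.0, 11.0], "label": [1, 0]})
unioned = pd.concat([src_a, src_b], ignore_index=True)  # 4 rows, same columns

# Join: attach extra columns from a related table via a shared key
# (feature augmentation: more features per training row).
side = pd.DataFrame({"id": [1, 2, 3, 4], "region": ["N", "S", "N", "S"]})
joined = unioned.merge(side, on="id", how="left")  # adds a `region` column
```

In a data lake setting, the hard part that LakeMLB evaluates is upstream of these calls: discovering which tables are unionable or joinable and whether integrating them actually improves the downstream model.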
Problem

Research questions and friction points this paper is trying to address.

data lake · machine learning benchmark · multi-table scenarios · standardized evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

data lake · machine learning benchmark · multi-table learning · table union · table join
Authors

- Feiyu Pan (Shanghai Jiao Tong Univ.)
- Tianbin Zhang (Shanghai Jiao Tong Univ.)
- Aoqian Zhang (Beijing Inst. of Technology)
- Yu Sun (Nankai University; Data Quality, Data Cleaning, Data Integration)
- Zheng Wang (Shanghai Jiao Tong Univ.)
- Lixing Chen (Associate Professor, Shanghai Jiao Tong University; AI for Networking, Cybersecurity)
- Li Pan (Shanghai Jiao Tong University)
- Jianhua Li (Shanghai Jiao Tong Univ.)