🤖 AI Summary
To address semantic discovery over massive, heterogeneous tables in data lakes, this paper proposes a sketch-enhanced neural representation learning framework that unifies the modeling of unionable, joinable, and subset relationships among tables. Methodologically, it introduces task-aware, multi-granularity table sketches—integrating row/column statistics, schema structures, and value distributions—into table representations, and designs an ablation framework to analyze the contribution of each sketch. The model is pretrained and then fine-tuned across multiple relationship-prediction tasks to ensure generalization. Evaluated on multiple benchmarks, it achieves significant F1 improvements over state-of-the-art methods, notably enhances table retrieval performance, and demonstrates strong cross-data-lake and cross-task transferability. The core contributions are: (i) a sketch-enhanced pretraining paradigm for table relationship identification, offering efficiency, generality, and interpretability; and (ii) a unified, extensible framework for semantic table discovery in heterogeneous data lake environments.
📝 Abstract
Enterprises have a growing need to identify relevant tables in data lakes; e.g., tables that are unionable, joinable, or subsets of each other. Tabular neural models can be helpful for such data discovery tasks. In this paper, we present TabSketchFM, a neural tabular model for data discovery over data lakes. First, we propose a novel pretraining approach based on sketches to enhance the effectiveness of data discovery in neural tabular models. Second, we finetune the pretrained model to identify unionable, joinable, and subset table pairs, and show significant improvement over previous tabular neural models. Third, we present a detailed ablation study to highlight which sketches are crucial for which tasks. Fourth, we use these finetuned models to perform table search; i.e., given a query table, find other tables in a corpus that are unionable, joinable, or subsets of the query. Our results demonstrate significant improvements in F1 scores for search compared to state-of-the-art techniques. Finally, we show significant transfer across datasets and tasks, establishing that our model can generalize across different tasks and over different data lakes.
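To make the idea of a table sketch concrete, here is a minimal, illustrative Python sketch of one common ingredient: a MinHash signature over a column's distinct values (useful for estimating value overlap, and hence joinability or subset relationships) combined with simple column statistics. This is not the paper's exact sketch construction; the function names and the choice of 16 hash functions are assumptions made for illustration only.

```python
import hashlib
import statistics

def column_sketch(values, num_hashes=16):
    """Build a small, illustrative column sketch: a MinHash signature
    over distinct values (for overlap estimates) plus basic statistics
    (illustrative stand-ins for the value-distribution features a
    sketch-based model might consume)."""
    distinct = {str(v) for v in values}
    # MinHash signature: for each of num_hashes salted hash functions,
    # keep the minimum hash observed over the column's distinct values.
    signature = [
        min(
            int(hashlib.md5(f"{seed}:{v}".encode()).hexdigest(), 16)
            for v in distinct
        )
        for seed in range(num_hashes)
    ]
    numeric = [float(v) for v in values if isinstance(v, (int, float))]
    stats = {
        "count": len(values),
        "distinct": len(distinct),
        "mean": statistics.fmean(numeric) if numeric else None,
    }
    return signature, stats

def minhash_similarity(sig_a, sig_b):
    """Estimate the Jaccard similarity of two columns by the fraction of
    matching positions in their MinHash signatures."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Two columns with identical value sets produce identical signatures (estimated similarity 1.0), while disjoint columns agree only by chance; a neural model can consume such compact signatures instead of raw cell values, which is what makes sketch-based inputs scale to large data lakes.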