Learning Relational Tabular Data without Shared Features

πŸ“… 2025-02-14
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the challenge of modeling cross-table relational tabular data that lack shared features and pre-aligned labels. We propose Latent Entity Alignment Learning (Leal), a novel framework that introduces a loss-driven soft alignment mechanism and a differentiable cluster sampler, both firsts in this domain. Theoretically, we prove that Leal approximates the true alignment distribution and enables end-to-end joint optimization. Guiding alignment learning with a contrastive loss, Leal achieves up to a 26.8% improvement over state-of-the-art methods across five real-world and five synthetic datasets, significantly enhancing predictive performance while scaling to large multi-table settings.

πŸ“ Abstract
Learning relational tabular data has gained significant attention recently, but most studies focus on single tables, overlooking the potential of cross-table learning. Cross-table learning, especially in scenarios where tables lack shared features and pre-aligned data, offers vast opportunities but also introduces substantial challenges. The alignment space is immense, and determining accurate alignments between tables is highly complex. We propose Latent Entity Alignment Learning (Leal), a novel framework enabling effective cross-table training without requiring shared features or pre-aligned data. Leal operates on the principle that properly aligned data yield lower loss than misaligned data, a concept embodied in its soft alignment mechanism. This mechanism is coupled with a differentiable cluster sampler module, ensuring efficient scaling to large relational tables. Furthermore, we provide a theoretical proof of the cluster sampler's approximation capacity. Extensive experiments on five real-world and five synthetic datasets show that Leal achieves up to a 26.8% improvement in predictive performance compared to state-of-the-art methods, demonstrating its effectiveness and scalability.
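The abstract's core principle, that properly aligned rows yield lower loss than misaligned ones, can be embodied in a soft alignment: each row of one table holds a differentiable distribution over candidate rows of the other, so the training loss can push probability mass toward correct matches. A minimal sketch of this idea follows; the similarity function (cosine here) and temperature are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def soft_align(z_a, z_b, temperature=0.1):
    """Soft alignment sketch: each row of table A attends over rows of table B.

    z_a: (n_a, d) latent embeddings of table A rows
    z_b: (n_b, d) latent embeddings of table B rows
    Returns (n_a, n_b) alignment weights and (n_a, d) soft-aligned features.
    """
    # Normalize so similarity is cosine (an illustrative choice, not
    # necessarily the score function used by Leal).
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature
    # Row-wise softmax -> a distribution over candidate matches per row.
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=1, keepdims=True)
    # Soft-aligned counterpart features: expectation under the alignment.
    aligned = weights @ z_b
    return weights, aligned

rng = np.random.default_rng(0)
z_a = rng.normal(size=(4, 8))
# Aligning a table with itself: each row should match itself most strongly.
weights, aligned = soft_align(z_a, z_a.copy())
```

Because the weights are a softmax, gradients from the downstream prediction loss flow back into the embeddings, which is what lets lower-loss (better-aligned) pairings be reinforced end to end.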
Problem

Research questions and friction points this paper is trying to address.

Handling cross-table learning without shared features
Addressing alignment challenges in relational tabular data
Improving predictive performance in large-scale tabular datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent Entity Alignment Learning
Soft alignment mechanism
Differentiable cluster sampler
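A differentiable sampler over discrete choices, such as which cluster of candidate rows to draw from, is commonly built with the Gumbel-softmax relaxation. The sketch below illustrates that general technique; the paper's actual cluster sampler and its theoretical guarantees may differ in the details.

```python
import numpy as np

def gumbel_softmax_sample(cluster_logits, temperature=0.5, rng=None):
    """Relaxed sample of a cluster index (Gumbel-softmax sketch).

    cluster_logits: (k,) unnormalized scores over k candidate clusters.
    Returns a relaxed one-hot vector of length k that sums to 1; as the
    temperature drops, it approaches a hard one-hot sample.
    """
    rng = rng or np.random.default_rng()
    # Gumbel(0, 1) noise: argmax(logits + noise) is a sample from
    # softmax(logits), and the softmax relaxation keeps it differentiable.
    gumbel = -np.log(-np.log(rng.uniform(size=cluster_logits.shape)))
    y = (cluster_logits + gumbel) / temperature
    y -= y.max()  # numerical stability
    y = np.exp(y)
    return y / y.sum()

sample = gumbel_softmax_sample(np.array([2.0, 0.5, -1.0]),
                               rng=np.random.default_rng(1))
```

Sampling only within a chosen cluster, rather than over every row of a large table, is what keeps the alignment search tractable at scale.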
Zhaomin Wu
Research Fellow at NUS
Trustworthy AI · Federated Learning · Machine Unlearning
Shida Wang
National University of Singapore
Sequence Modelling · Large Language Model
Ziyang Wang
Department of Computer Science, National University of Singapore, Singapore
Bingsheng He
Department of Computer Science, National University of Singapore, Singapore