AI Summary
This work addresses the challenge of modeling cross-table relational tabular data that lack shared features and pre-aligned labels. We propose Latent Entity Alignment Learning (Leal), a novel framework that introduces a loss-driven soft alignment mechanism and a differentiable cluster sampler, both firsts in this domain. Theoretically, we prove that Leal approximates the true alignment distribution and enables end-to-end joint optimization. By leveraging a contrastive loss to guide alignment learning, Leal achieves up to a 26.8% improvement over state-of-the-art methods across five real-world and five synthetic datasets. It significantly enhances predictive performance while demonstrating superior scalability to large-scale multi-table settings.
Abstract
Learning relational tabular data has gained significant attention recently, but most studies focus on single tables, overlooking the potential of cross-table learning. Cross-table learning, especially in scenarios where tables lack shared features and pre-aligned data, offers vast opportunities but also introduces substantial challenges. The alignment space is immense, and determining accurate alignments between tables is highly complex. We propose Latent Entity Alignment Learning (Leal), a novel framework enabling effective cross-table training without requiring shared features or pre-aligned data. Leal operates on the principle that properly aligned data yield lower loss than misaligned data, a concept embodied in its soft alignment mechanism. This mechanism is coupled with a differentiable cluster sampler module, ensuring efficient scaling to large relational tables. Furthermore, we provide a theoretical proof of the cluster sampler's approximation capacity. Extensive experiments on five real-world and five synthetic datasets show that Leal achieves up to a 26.8% improvement in predictive performance compared to state-of-the-art methods, demonstrating its effectiveness and scalability.
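The core principle above, that properly aligned rows yield lower task loss than misaligned ones, can be sketched as a softmax over negative candidate losses. This is a minimal illustration under our own assumptions, not Leal's actual implementation: the function name and the temperature parameter are hypothetical, and the real framework couples this weighting with a differentiable cluster sampler.

```python
import numpy as np

def soft_alignment_weights(candidate_losses, temperature=1.0):
    """Turn per-candidate task losses into soft alignment probabilities.

    Lower loss -> higher alignment weight, via a softmax over negative
    losses. A smaller temperature sharpens the distribution toward the
    lowest-loss candidate. (Hypothetical sketch, not the paper's code.)
    """
    losses = np.asarray(candidate_losses, dtype=float)
    logits = -losses / temperature
    logits -= logits.max()            # subtract max for numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()    # normalize to a probability distribution

# Example: three candidate rows in the other table; candidate index 1
# incurs the lowest loss, so it receives the largest alignment weight.
losses = [2.0, 0.5, 3.0]
w = soft_alignment_weights(losses, temperature=0.5)
```

Because the weights are a differentiable function of the losses, gradients can flow through the alignment into the predictive model, which is what permits the end-to-end joint optimization described above.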