🤖 AI Summary
This work addresses a core limitation of traditional tabular learning methods: they assume row independence and thus struggle to capture cross-row label dependencies prevalent in transactional, temporal, or relational tables. To overcome this, the authors propose Grables, a modular framework that formalizes how tabular structure can be leveraged by lifting tables into graph representations. The design decouples graph construction from node-prediction logic, enabling explicit modeling of inter-row dependencies. By combining graph neural networks, message-passing mechanisms, and strong tabular learners, Grables consistently outperforms baselines that rely solely on intra-row features across synthetic datasets, real-world transaction records, and the RelBench clinical-trials benchmark, demonstrating both greater representational capacity and architectural flexibility.
📝 Abstract
Tabular learning is still dominated by row-wise predictors that score each row independently, which fits i.i.d. benchmarks but fails on transactional, temporal, and relational tables where labels depend on other rows. We show that row-wise prediction rules out natural targets driven by global counts, overlaps, and relational patterns. To make "using structure" precise across architectures, we introduce grables: a modular interface that separates how a table is lifted to a graph (constructor) from how predictions are computed on that graph (node predictor), pinpointing where expressive power comes from. Experiments on synthetic tasks, transaction data, and a RelBench clinical-trials dataset confirm the predicted separations: message passing captures inter-row dependencies that row-local models miss, and hybrid approaches that explicitly extract inter-row structure and feed it to strong tabular learners yield consistent gains.
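The constructor/predictor split described in the abstract can be sketched in a few lines. This is a hypothetical illustration only: the function and type names below (`Graph`, `connect_on_key`, `degree_plus_feature`) are assumptions for exposition, not the authors' actual API.

```python
# Illustrative sketch (assumed names, not the grables API):
# a "constructor" lifts a table to a graph over rows, and a
# "node predictor" scores each row using that graph.
from dataclasses import dataclass
from typing import List, Tuple

Row = dict                 # one table row: feature name -> value
Edge = Tuple[int, int]     # directed edge between row indices

@dataclass
class Graph:
    rows: List[Row]
    edges: List[Edge]

# Constructor: connect rows that share a key (e.g., the same user),
# one simple way a table can be lifted to a graph.
def connect_on_key(rows: List[Row], key: str) -> Graph:
    edges = [(i, j)
             for i, r in enumerate(rows)
             for j, s in enumerate(rows)
             if i != j and r[key] == s[key]]
    return Graph(rows, edges)

# Node predictor: one round of trivial message passing (count
# neighbors) combined with a row-local feature. A row-wise model
# could never recover the degree term, since it depends on other rows.
def degree_plus_feature(g: Graph, feat: str) -> List[float]:
    deg = [0] * len(g.rows)
    for i, _ in g.edges:
        deg[i] += 1
    return [r[feat] + deg[i] for i, r in enumerate(g.rows)]

rows = [{"user": "a", "amount": 1.0},
        {"user": "a", "amount": 2.0},
        {"user": "b", "amount": 3.0}]
g = connect_on_key(rows, "user")
scores = degree_plus_feature(g, "amount")  # [2.0, 3.0, 3.0]
```

Swapping in a different constructor (temporal windows, foreign-key joins) or a stronger predictor (a GNN, or features fed to a gradient-boosted model) changes the pipeline without touching the other half, which is the point of the modular interface.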