🤖 AI Summary
Tabular data exhibit strong heterogeneity, and existing models generalize poorly and struggle to adapt zero-shot to new tasks. Method: The Tabular Discriminative Pre-trained Transformer (TabDPT) combines self-supervised pre-training on real tables with retrieval-augmented in-context learning (ICL) in a tabular-specific transformer architecture. Contribution/Results: TabDPT achieves zero-shot cross-task generalization without task-specific fine-tuning, sidestepping the difficulty large language models have in processing numeric tables. It attains state-of-the-art zero-shot performance on the CC18 classification and CTR23 regression benchmarks, and performance scales consistently with both model and data size while maintaining fast inference.
📝 Abstract
The challenges faced by neural networks on tabular data are well-documented and have hampered the progress of tabular foundation models. Techniques leveraging in-context learning (ICL) have shown promise here, allowing for dynamic adaptation to unseen data. ICL can provide predictions for entirely new datasets without further training or hyperparameter tuning, thereby providing very fast inference when encountering a novel task. However, scaling ICL for tabular data remains an issue: approaches based on large language models cannot efficiently process numeric tables, and tabular-specific techniques have not been able to effectively harness the power of real data to improve performance and generalization. We are able to overcome these challenges by training tabular-specific ICL-based architectures on real data with self-supervised learning and retrieval, combining the best of both worlds. Our resulting model -- the Tabular Discriminative Pre-trained Transformer (TabDPT) -- achieves state-of-the-art performance on the CC18 (classification) and CTR23 (regression) benchmarks with no task-specific fine-tuning, demonstrating the adaptability and speed of ICL once the model is pre-trained. TabDPT also demonstrates strong scaling as both model size and amount of available data increase, pointing towards future improvements simply through the curation of larger tabular pre-training datasets and training of larger models.
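The retrieval-augmented ICL workflow described above can be sketched as follows. This is a minimal illustration, not TabDPT's actual implementation: for each query row, the nearest training rows are retrieved to form the in-context examples, which the pre-trained transformer then conditions on to predict the query's label. Here the transformer forward pass is replaced by a hypothetical stand-in (a majority vote over retrieved labels), and plain Euclidean distance is assumed for retrieval.

```python
import numpy as np

def retrieve_context(X_train, y_train, x_query, k=5):
    """Retrieve the k nearest training rows to serve as ICL context.

    Stand-in for the retrieval step: the real model would feed the
    retrieved (features, label) pairs into a transformer as context.
    Euclidean distance is an illustrative assumption.
    """
    dists = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(dists)[:k]
    return X_train[idx], y_train[idx]

def icl_predict(X_train, y_train, x_query, k=5):
    """Hypothetical stand-in for the ICL forward pass.

    A pre-trained transformer would attend over the retrieved context
    to produce a prediction; here we substitute a majority vote over
    the retrieved labels purely for illustration.
    """
    _, ctx_y = retrieve_context(X_train, y_train, x_query, k)
    values, counts = np.unique(ctx_y, return_counts=True)
    return values[np.argmax(counts)]
```

Because all adaptation happens through the retrieved context at inference time, no gradient updates or hyperparameter tuning are needed for a new dataset, which is what makes this style of zero-shot prediction fast.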