TabICL: A Tabular Foundation Model for In-Context Learning on Large Data

📅 2025-02-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses zero-shot classification on large-scale tabular data, proposing a scalable in-context learning (ICL) method that avoids per-dataset fine-tuning. The core innovation is a two-stage architecture: a column-then-row attention mechanism builds fixed-dimensional embeddings of rows, which a transformer then processes for efficient ICL in a single forward pass, without parameter updates. The method handles contexts of up to 500K samples, and the paper reports the first empirical validation of ICL's effectiveness on tabular datasets exceeding 10K instances. Across the 200 classification datasets of the TALENT benchmark, it matches TabPFNv2's accuracy while running up to 10× faster; on the 56 datasets with over 10K samples, it outperforms both TabPFNv2 and CatBoost. This work establishes ICL as a viable, scalable alternative for zero-shot tabular classification without model adaptation.

📝 Abstract
The long-standing dominance of gradient-boosted decision trees on tabular data is currently challenged by tabular foundation models using In-Context Learning (ICL): setting the training data as context for the test data and predicting in a single forward pass without parameter updates. While the very recent TabPFNv2 foundation model (2025) excels on tables with up to 10K samples, its alternating column- and row-wise attentions make handling large training sets computationally prohibitive. So, can ICL be effectively scaled and deliver a benefit for larger tables? We introduce TabICL, a tabular foundation model for classification, pretrained on synthetic datasets with up to 60K samples and capable of handling 500K samples on affordable resources. This is enabled by a novel two-stage architecture: a column-then-row attention mechanism to build fixed-dimensional embeddings of rows, followed by a transformer for efficient ICL. Across 200 classification datasets from the TALENT benchmark, TabICL is on par with TabPFNv2 while being systematically faster (up to 10 times), and significantly outperforms all other approaches. On 56 datasets with over 10K samples, TabICL surpasses both TabPFNv2 and CatBoost, demonstrating the potential of ICL for large data.
Problem

Research questions and friction points this paper is trying to address.

Scaling In-Context Learning for large tabular data.
Overcoming computational limits in tabular foundation models.
Enhancing classification accuracy on large datasets efficiently.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Column-then-row attention mechanism
Transformer for efficient ICL
Pretrained on synthetic datasets
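The column-then-row design above can be sketched in a few lines. The following is a minimal NumPy illustration, not the actual TabICL implementation: it uses a single attention head with identity query/key/value projections and no learned parameters, just to show how column-wise attention, row-wise pooling into fixed-dimensional row embeddings, and a final ICL pass over rows compose.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Single-head self-attention with identity Q/K/V projections
    # (illustration only); x has shape (batch, seq, dim).
    scores = x @ x.transpose(0, 2, 1) / np.sqrt(x.shape[-1])
    return softmax(scores, axis=-1) @ x

rng = np.random.default_rng(0)
n_rows, n_cols, d = 8, 4, 16

# Hypothetical cell embeddings for a table of n_rows x n_cols cells.
cells = rng.normal(size=(n_rows, n_cols, d))

# 1) Column-wise attention: each column attends across its rows.
col_ctx = self_attention(cells.transpose(1, 0, 2)).transpose(1, 0, 2)

# 2) Row-wise attention, then mean-pool over columns into a
#    fixed-dimensional row embedding (independent of column count).
row_emb = self_attention(col_ctx).mean(axis=1)  # (n_rows, d)

# 3) A transformer over row embeddings performs ICL: training rows
#    serve as context that test rows attend to.
icl_out = self_attention(row_emb[None])[0]
print(icl_out.shape)  # (8, 16)
```

The key point is step 2: whatever the number of columns, every row is reduced to a vector of the same dimension, so the final ICL transformer sees a plain sequence of rows and its cost no longer depends on table width.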
Jingang Qu
SODA team, INRIA Saclay, France
David Holzmüller
Sierra team, INRIA Paris, France; École Normale Supérieure, PSL Research University, Paris, France
Gaël Varoquaux
SODA team, INRIA Saclay, France
Marine Le Morvan
Research scientist, INRIA Saclay