TabGemma: Text-Based Tabular ICL via LLM using Continued Pretraining and Retrieval

📅 2025-11-05
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses two key challenges in applying large language models (LLMs) to heterogeneous tabular data containing textual, numeric, and categorical fields: (1) unstable numeric tokenization and (2) severe context-length limitations. To this end, the authors propose TabGemma, a schema-agnostic in-context learning framework. Its core contributions are: (1) a unified numeric representation via signed scientific notation to improve tokenization consistency; (2) mask-filling continued pretraining of Gemma 3 (12B) with a targeted imputation objective to strengthen tabular reasoning; and (3) a compact n-gram retrieval mechanism that selects informative exemplars fitting within a 128K-token context window. Experiments show that TabGemma achieves state-of-the-art performance on classification benchmarks in both low- and high-data regimes, with accuracy improving monotonically as the number of context rows increases. It is also competitive on few-shot regression tasks.
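The numeric canonicalization described above can be sketched in a few lines. Note this is a minimal illustration of the idea, not the authors' exact scheme: the precision (four mantissa digits) and the use of Python's `e` format specifier are assumptions.

```python
# Sketch of canonicalizing numbers as signed scientific notation so that
# sign, mantissa digits, and exponent always appear in the same positions,
# giving the LLM tokenizer a stable pattern. The 4-digit mantissa is an
# assumed choice, not taken from the paper.

def canonicalize_number(x: float, digits: int = 4) -> str:
    """Render x in signed scientific notation, e.g. +1.2340e+02."""
    return f"{x:+.{digits}e}"

print(canonicalize_number(123.4))     # +1.2340e+02
print(canonicalize_number(-0.00056))  # -5.6000e-04
print(canonicalize_number(0.0))       # +0.0000e+00
```

With this format, values of very different magnitudes (123.4 vs. 0.00056) produce strings of identical shape, avoiding the erratic digit groupings that standard decimal rendering can trigger in subword tokenizers.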

๐Ÿ“ Abstract
We study LLMs for tabular prediction with mixed text, numeric, and categorical fields. We introduce TabGemma, a schema-agnostic in-context learner that treats rows as sequences and tackles two practical hurdles when adapting pretrained LLMs for tabular predictions: unstable numeric tokenization and limited context size. We propose to canonicalize numbers via signed scientific notation and continue pretraining of a 12B Gemma 3 model with a target imputation objective using a large-scale real-world dataset. For inference, we use a compact n-gram-based retrieval to select informative exemplars that fit within a 128k-token window. On semantically rich benchmarks, TabGemma establishes a new state of the art on classification across low- and high-data regimes and improves monotonically with more context rows. For regression, it is competitive at small sample sizes but trails conventional approaches as data grows. Our results show that LLMs can be effective tabular in-context learners on highly semantic tasks when paired with dedicated numeric handling and context retrieval, while motivating further advances in numeric modeling and long-context scaling.
Problem

Research questions and friction points this paper is trying to address.

Addresses unstable numeric tokenization in tabular prediction with LLMs
Overcomes limited context size for tabular in-context learning
Improves tabular classification performance across varied data regimes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continued pretraining with target imputation objective
Canonicalized numbers via signed scientific notation
Compact n-gram retrieval for informative exemplars
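The retrieval contribution above can be illustrated with a small sketch: score candidate context rows by n-gram overlap with the query row and keep the top-k. The specifics here (character trigrams, Jaccard similarity, the example rows) are assumptions for illustration, not the paper's exact mechanism.

```python
# Sketch of compact n-gram retrieval: rank serialized table rows by
# character-trigram Jaccard similarity to the query row, then keep the
# top-k as in-context exemplars. n=3 and Jaccard are assumed choices.

def char_ngrams(text: str, n: int = 3) -> set[str]:
    """Return the set of character n-grams of `text`."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def retrieve_exemplars(query: str, rows: list[str], k: int = 2) -> list[str]:
    """Return the k rows most similar to `query` by n-gram overlap."""
    q = char_ngrams(query)

    def jaccard(row: str) -> float:
        r = char_ngrams(row)
        union = q | r
        return len(q & r) / len(union) if union else 0.0

    return sorted(rows, key=jaccard, reverse=True)[:k]

rows = [
    "name: SAP SE, sector: software",
    "name: ACME Corp, sector: mining",
    "name: SAP Labs, sector: software",
]
print(retrieve_exemplars("name: SAP, sector: software", rows))
```

In practice the selected exemplars would then be serialized into the prompt until the 128K-token budget is filled; a cheap lexical score like this avoids running an embedding model over every candidate row.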
Gunther Schindler
SAP SE
Maximilian Schambach
Senior AI Scientist @SAP
deep learning · machine learning · tabular data
Michael Medek
SAP SE
Sam Thelin
SAP SE