🤖 AI Summary
This work addresses two key challenges in applying large language models (LLMs) to heterogeneous tabular data containing textual, numeric, and categorical fields: (1) unstable numeric tokenization and (2) severe context-length limitations. To this end, we propose TabGemma, a schema-agnostic in-context learning framework. Its core contributions are: (1) a unified numeric representation via signed scientific notation to enhance semantic consistency; (2) mask-filling continual pretraining of Gemma 3 (12B) with a target imputation objective to strengthen tabular reasoning; and (3) a compact n-gram retrieval mechanism that selects informative exemplars to fill a 128K-token context window. Experiments demonstrate that TabGemma achieves state-of-the-art classification performance in both low- and high-data regimes, with accuracy improving monotonically as the number of context rows increases, and that it remains competitive on few-shot regression tasks.
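The signed-scientific-notation canonicalization can be illustrated with a minimal sketch. The summary does not specify the exact mantissa width or formatting details, so the choices below (a fixed-precision mantissa with explicit signs on both mantissa and exponent) are assumptions; the point is that every number maps to the same token layout.

```python
def to_signed_scientific(x: float, precision: int = 3) -> str:
    """Render a number in signed scientific notation, e.g. 1250 -> '+1.250e+03'.

    A fixed-width mantissa and explicit signs on both the mantissa and the
    exponent give every value an identical character layout, which is what
    makes tokenization stable. (Exact format details are assumptions, not
    taken from the paper.)
    """
    return f"{x:+.{precision}e}"
```

For example, `to_signed_scientific(1250)` yields `"+1.250e+03"` and `to_signed_scientific(-0.00042)` yields `"-4.200e-04"`, so magnitudes that differ by orders of magnitude still occupy the same number of characters.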
📄 Abstract
We study LLMs for tabular prediction with mixed textual, numeric, and categorical fields. We introduce TabGemma, a schema-agnostic in-context learner that treats rows as sequences and tackles two practical hurdles in adapting pretrained LLMs to tabular prediction: unstable numeric tokenization and limited context size. We canonicalize numbers via signed scientific notation and continue pretraining a 12B Gemma 3 model with a target imputation objective on a large-scale real-world dataset. At inference time, we use compact n-gram-based retrieval to select informative exemplars that fit within a 128K-token window. On semantically rich benchmarks, TabGemma establishes a new state of the art for classification across low- and high-data regimes and improves monotonically with more context rows. For regression, it is competitive at small sample sizes but trails conventional approaches as data grows. Our results show that LLMs can be effective tabular in-context learners on highly semantic tasks when paired with dedicated numeric handling and context retrieval, while motivating further advances in numeric modeling and long-context scaling.
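The n-gram-based exemplar retrieval can be sketched as follows. The abstract does not state the n-gram granularity or the scoring function, so this sketch assumes character trigrams over serialized rows scored by Jaccard overlap; the actual mechanism may differ.

```python
def char_ngrams(text: str, n: int = 3) -> set[str]:
    """Character n-grams of a serialized table row."""
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 0))}


def retrieve_exemplars(query_row: str, candidate_rows: list[str],
                       k: int = 8, n: int = 3) -> list[str]:
    """Rank candidate rows by n-gram Jaccard overlap with the query row and
    keep the top k, which would then be packed into the context window.
    (Trigrams and Jaccard scoring are illustrative assumptions.)
    """
    q = char_ngrams(query_row, n)

    def score(row: str) -> float:
        r = char_ngrams(row, n)
        union = q | r
        return len(q & r) / len(union) if union else 0.0

    return sorted(candidate_rows, key=score, reverse=True)[:k]
```

A row sharing field names and similar values with the query (e.g. `"age=34 city=Paris"` against the query `"age=35 city=Paris"`) scores far higher than an unrelated row, so the retrieved exemplars are the ones most informative for in-context prediction.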