🤖 AI Summary
Existing large language models (LLMs) exhibit limited performance on tabular prediction tasks—including classification, regression, and missing-value imputation—primarily due to their lack of native modeling capacity for structured semantics and numerical relationships.
Method: We propose the first large-scale instruction-tuning paradigm tailored for tabular understanding, built upon Llama-2. It leverages a comprehensively curated, multi-scenario instruction-based tabular corpus and systematically applies supervised fine-tuning and in-context learning adaptation.
Contribution/Results: We demonstrate, for the first time, that a single LLM can robustly and uniformly address all three core tabular prediction tasks under zero-shot, few-shot, and in-context learning settings. Our approach achieves significant improvements over traditional models (e.g., XGBoost, TabPFN) and prior LLM-based methods across 12 benchmarks—averaging +8.2% accuracy, +6.5% R², and −12.4% imputation MAE—establishing a new state-of-the-art for LLM-driven structured data intelligence.
📝 Abstract
In the domain of data science, the predictive tasks of classification, regression, and imputation of missing values are commonly encountered challenges associated with tabular data. This research endeavors to apply Large Language Models (LLMs) towards addressing these predictive tasks. Despite their proficiency in comprehending natural language, LLMs fall short in dealing with structured tabular data. This limitation stems from their lacking exposure to the intricacies of tabular data during their foundational training. Our research aims to mitigate this gap by compiling a comprehensive corpus of tables annotated with instructions and executing large-scale training of Llama-2 on this enriched dataset. Furthermore, we investigate the practical application of applying the trained model to zero-shot prediction, few-shot prediction, and in-context learning scenarios. Through extensive experiments, our methodology has shown significant improvements over existing benchmarks. These advancements highlight the efficacy of tailoring LLM training to solve table-related problems in data science, thereby establishing a new benchmark in the utilization of LLMs for enhancing tabular intelligence.