🤖 AI Summary
This work addresses the limitations of traditional tree-based models and large language models (LLMs) in few-shot tabular learning: conventional trees suffer from overfitting due to unstable statistical purity measures, while LLMs applied directly to tabular data often neglect structural properties and underperform. To overcome these issues, the authors propose FORESTLLM, a novel framework that leverages an LLM as an offline semantic guide during training to inform a lightweight random forest. This forest incorporates a semantic splitting criterion and a context-aware leaf stabilization mechanism. Notably, the LLM is not invoked during inference, ensuring computational efficiency and model interpretability. By effectively integrating the structural inductive bias of decision forests with the semantic reasoning capabilities of LLMs, FORESTLLM achieves state-of-the-art performance across multiple few-shot classification and regression benchmarks.
📝 Abstract
Tabular data underpins high-stakes decision-making in domains such as finance, healthcare, and scientific discovery. Yet, learning effectively from tabular data in few-shot settings, where labeled examples are scarce, remains a fundamental challenge. Traditional tree-based methods often falter in these regimes due to their reliance on statistical purity metrics, which become unstable and prone to overfitting with limited supervision. At the same time, direct applications of large language models (LLMs) often overlook the data's inherent structure, leading to suboptimal performance. To overcome these limitations, we propose FORESTLLM, a novel framework that unifies the structural inductive biases of decision forests with the semantic reasoning capabilities of LLMs. Crucially, FORESTLLM leverages the LLM only during training, treating it as an offline model designer that encodes rich, contextual knowledge into a lightweight, interpretable forest model, eliminating the need for LLM inference at test time. Our method is twofold. First, we introduce a semantic splitting criterion in which the LLM evaluates candidate partitions based on their coherence over both labeled and unlabeled data, enabling the induction of more robust and generalizable tree structures under few-shot supervision. Second, we propose a one-time in-context inference mechanism for leaf node stabilization, where the LLM distills the decision path and its supporting examples into a concise, deterministic prediction, replacing noisy empirical estimates with semantically informed outputs. Across a diverse suite of few-shot classification and regression benchmarks, FORESTLLM achieves state-of-the-art performance.
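The semantic splitting criterion can be sketched as a split score that blends an empirical purity gain with an LLM-assigned coherence judgment. This is a minimal illustration, not the paper's implementation: `llm_coherence` is a hypothetical stand-in (here a fixed balance heuristic) for the offline, training-time LLM query, and the blending weight `alpha` is an assumed free parameter.

```python
def gini(labels):
    """Empirical Gini impurity of a label list (unstable when few labels exist)."""
    if not labels:
        return 0.0
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def llm_coherence(left_rows, right_rows):
    """Hypothetical stand-in for the offline LLM's judgment of how semantically
    coherent a candidate partition is. Here: a heuristic preferring balanced
    splits; the actual method would prompt an LLM once per candidate during
    training and never at test time."""
    total = len(left_rows) + len(right_rows)
    if total == 0:
        return 0.0
    return 1.0 - abs(len(left_rows) - len(right_rows)) / total

def score_split(X, y, feature, threshold, alpha=0.5):
    """Blend purity gain over labeled rows with the (mocked) semantic coherence
    score. Unlabeled rows (y[i] is None) still contribute to the coherence term,
    mirroring the paper's use of both labeled and unlabeled data."""
    left_idx = [i for i, row in enumerate(X) if row[feature] <= threshold]
    right_idx = [i for i, row in enumerate(X) if row[feature] > threshold]
    ly = [y[i] for i in left_idx if y[i] is not None]
    ry = [y[i] for i in right_idx if y[i] is not None]
    labeled = ly + ry
    n = len(labeled) or 1
    purity_gain = gini(labeled) - ((len(ly) / n) * gini(ly)
                                   + (len(ry) / n) * gini(ry))
    semantic = llm_coherence([X[i] for i in left_idx],
                             [X[i] for i in right_idx])
    return alpha * purity_gain + (1 - alpha) * semantic
```

Because the LLM term is evaluated only while the forest is being grown, the fitted trees are ordinary threshold trees at inference time, which is what makes test-time prediction cheap and interpretable.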