🤖 AI Summary
TabPFN struggles on high-dimensional and large-scale tabular classification tasks: generalization weakens, downstream adaptation is limited, bias and variance are poorly balanced, and inference is inefficient. To address these limitations, we propose Beta, a novel framework that pairs a lightweight encoder alignment mechanism with a multi-encoder Bagging ensemble, reducing both bias and variance without increasing inference overhead; a bootstrap sampling strategy further dampens the effect of data perturbations. Beta also fine-tunes the pre-trained TabPFN model end-to-end to strengthen feature representations and dataset-specific adaptability. Extensive experiments across 200+ benchmark datasets demonstrate that Beta significantly improves classification accuracy and stability in high-dimensional and large-scale settings, consistently matching or surpassing state-of-the-art methods.
📝 Abstract
TabPFN has emerged as a promising in-context learning model for tabular data, capable of directly predicting the labels of test samples given labeled training examples. It has demonstrated competitive performance, particularly on small-scale classification tasks. However, despite its effectiveness, TabPFN still requires further refinement in several areas, including handling high-dimensional features, aligning with downstream datasets, and scaling to larger datasets. In this paper, we revisit existing variants of TabPFN and observe that most approaches focus on reducing either bias or variance while neglecting the other, and often increase inference overhead in the process. To fill this gap, we propose Beta (Bagging and Encoder-based Fine-tuning for TabPFN Adaptation), a novel and effective method designed to minimize both bias and variance. To reduce bias, we introduce a lightweight encoder that better aligns downstream tasks with the pre-trained TabPFN. By increasing the number of encoders in a lightweight manner, Beta mitigates variance, further improving the model's performance. Additionally, bootstrap sampling is employed to reduce the impact of data perturbations on the model, all while maintaining computational efficiency during inference. Our approach enhances TabPFN's ability to handle high-dimensional data and scale to larger datasets. Experimental results on over 200 benchmark classification datasets demonstrate that Beta either outperforms or matches state-of-the-art methods.
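The core recipe of the abstract, pairing each lightweight encoder with a bootstrap resample of the training data and averaging the ensemble's predictions, can be illustrated with a minimal sketch. This is not the paper's implementation or the real TabPFN API: the frozen pre-trained predictor is stood in for by a nearest-centroid scorer, the "encoders" are random linear projections, and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_predictor(X_train, y_train, X_test):
    """Stand-in for a frozen pre-trained in-context model (hypothetical;
    the real TabPFN transformer is not reproduced here).
    Scores test points by distance to class centroids, normalized per row."""
    classes = np.unique(y_train)
    centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=-1)
    scores = np.exp(-d)
    return scores / scores.sum(axis=1, keepdims=True), classes

def bagged_encoder_predict(X_train, y_train, X_test, n_encoders=5, dim=4):
    """Bagging over lightweight encoders: each ensemble member pairs a
    random linear encoder with a bootstrap resample of the training set,
    and the members' probability estimates are averaged."""
    n = len(X_train)
    probs = None
    for _ in range(n_encoders):
        # hypothetical lightweight encoder: a random linear projection
        W = rng.normal(size=(X_train.shape[1], dim)) / np.sqrt(X_train.shape[1])
        idx = rng.integers(0, n, size=n)  # bootstrap sample (with replacement)
        p, classes = frozen_predictor(X_train[idx] @ W, y_train[idx], X_test @ W)
        probs = p if probs is None else probs + p
    return probs / n_encoders, classes

# toy two-cluster data to exercise the sketch
X = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
y = np.array([0] * 50 + [1] * 50)
proba, classes = bagged_encoder_predict(X, y, X)
acc = (classes[proba.argmax(axis=1)] == y).mean()
```

Note that inference cost stays flat in this scheme: the expensive frozen predictor runs once per lightweight encoder, and only the cheap projections and the averaging are added on top.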