🤖 AI Summary
Effective frameworks for transfer learning in probabilistic regression have long been lacking. This paper proposes NIAQUE (Neural Interpretable Any-Quantile Estimation), the first pretraining-finetuning paradigm designed specifically for probabilistic regression. Built on a permutation-invariant neural architecture, NIAQUE conditions its predictions on an arbitrary quantile level, jointly supporting quantile regression and cross-task knowledge transfer and enabling interpretable estimation of any quantile. The model is pretrained jointly on diverse, heterogeneous regression datasets and adapted to downstream tasks via lightweight fine-tuning. Extensive experiments show that NIAQUE significantly outperforms strong baselines, including XGBoost, TabPFN, and TabDPT, across multiple real-world regression benchmarks and Kaggle competitions, validating its robustness, generalization capability, and scalability.
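To make the any-quantile idea concrete, here is a minimal sketch of how a quantile-conditioned network can be trained with the standard pinball (quantile) loss. The `model(x, q)` conditioning interface and the training step are illustrative assumptions for this summary, not NIAQUE's actual API.

```python
import torch

def pinball_loss(y_true: torch.Tensor, y_pred: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """Pinball (quantile) loss for quantile levels q in (0, 1).

    Minimizing this loss at level q drives y_pred toward the q-th
    conditional quantile of y_true.
    """
    diff = y_true - y_pred
    return torch.mean(torch.maximum(q * diff, (q - 1.0) * diff))

def any_quantile_step(model, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """One hypothetical training step: sample a random quantile level per
    example and condition the model on it, so a single network learns to
    produce any requested quantile at inference time."""
    q = torch.rand_like(y)   # random quantile levels in (0, 1), same shape as y
    y_hat = model(x, q)      # model output conditioned on the quantile level
    return pinball_loss(y, y_hat, q)
```

At inference time, sweeping `q` over, say, {0.05, 0.5, 0.95} yields prediction intervals and a median estimate from the same trained network.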
📝 Abstract
Transfer learning for probabilistic regression remains underexplored. This work closes the gap by introducing NIAQUE (Neural Interpretable Any-Quantile Estimation), a new model designed for transfer learning in probabilistic regression through permutation invariance. We demonstrate that pre-training NIAQUE directly on diverse downstream regression datasets and fine-tuning it on a specific target dataset enhances performance on individual regression tasks, showcasing the positive impact of probabilistic transfer learning. Furthermore, we highlight the effectiveness of NIAQUE in Kaggle competitions against strong baselines, including tree-based models and the recent neural foundation models TabPFN and TabDPT. These findings establish NIAQUE as a robust and scalable framework for probabilistic regression that leverages transfer learning to enhance predictive performance.
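As a rough illustration of how permutation invariance can enable transfer across heterogeneous tabular datasets, below is a Deep Sets-style encoder sketch: each (feature id, value) pair is embedded as a token and mean-pooled, so the representation is independent of feature order and feature count. This is an assumption-laden sketch of the general technique, not NIAQUE's published architecture.

```python
import torch
import torch.nn as nn

class PermutationInvariantEncoder(nn.Module):
    """Deep Sets-style encoder (illustrative; not NIAQUE's exact design).

    Each (feature id, value) pair becomes a token; mean pooling over
    tokens makes the output invariant to feature order, letting datasets
    with different feature sets share one backbone.
    """
    def __init__(self, num_feature_ids: int, dim: int = 64):
        super().__init__()
        self.id_embed = nn.Embedding(num_feature_ids, dim)  # one embedding per known feature
        self.value_proj = nn.Linear(1, dim)                 # embed the scalar feature value
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, feature_ids: torch.Tensor, values: torch.Tensor) -> torch.Tensor:
        # feature_ids: (batch, n_features) int64; values: (batch, n_features) float
        tokens = self.id_embed(feature_ids) + self.value_proj(values.unsqueeze(-1))
        return self.mlp(tokens).mean(dim=1)  # mean-pool -> permutation invariant

# Usage: datasets with different numbers of features map into one shared space.
enc = PermutationInvariantEncoder(num_feature_ids=100)
z = enc(torch.randint(0, 100, (8, 5)), torch.randn(8, 5))  # z: (8, 64)
```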