🤖 AI Summary
Conventional tabular foundation models (e.g., TabPFN) fail on ultra-high-dimensional, sparse biomedical data (>50K features, very few samples): they are constrained to fewer than 500 features, and the feature-reduction workarounds this forces sacrifice interpretability.
Method: We propose the first continual pretraining framework for tabular foundation models tailored to extremely high-dimensional settings. It leverages customized prior distributions to generate synthetic data and incorporates noise-robust training, extending TabPFN’s input capacity beyond 50,000 dimensions while fully preserving its feature importance analysis capability.
Results: On real-world molecular–pathological association tasks, our model matches or surpasses the original TabPFN in predictive performance. Identified biomarkers strongly align with established biological knowledge, and novel candidate mechanisms are uncovered. This work establishes a new paradigm for high-throughput biomedical discovery that is scalable, interpretable, and foundation-model-driven.
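The continued pretraining described above relies on synthetic tasks sampled from a prior over wide, noisy tables: a handful of informative features drive the label, while tens of thousands of noise features surround them. A minimal sketch of such a sampler (the function name, parameters, and distributional choices are illustrative assumptions, not the paper's actual prior):

```python
import numpy as np


def sample_wide_synthetic(n_samples=64, n_features=50_000,
                          n_informative=20, noise_std=1.0, seed=0):
    """Illustrative 'wide' synthetic prior (not the authors' actual one):
    only a small, random subset of features carries signal; everything
    else is noise, mimicking sparse high-dimensional biomedical data."""
    rng = np.random.default_rng(seed)
    # All features start as pure Gaussian noise.
    X = rng.normal(0.0, noise_std, size=(n_samples, n_features))
    # Pick a small subset of columns to be informative.
    informative = rng.choice(n_features, size=n_informative, replace=False)
    # The label depends only on a linear combination of those columns.
    weights = rng.normal(size=n_informative)
    logits = X[:, informative] @ weights
    y = (logits > 0).astype(int)
    return X, y, informative


X, y, informative = sample_wide_synthetic()
```

Drawing many such tables with varying sample counts, feature counts, and noise levels yields training tasks whose shape resembles real molecular data, which is what lets the adapted model learn to ignore uninformative dimensions.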
📝 Abstract
Revealing novel insights from the relationship between molecular measurements and pathology remains a highly impactful application of machine learning in biomedicine. Data in this domain typically contain only a few observations but thousands of potentially noisy features, posing challenges for conventional machine learning approaches. While prior-data fitted networks have emerged as foundation models for tabular data, they are currently not suited to handle large feature counts (>500). Although feature reduction enables their application, it hinders feature importance analysis. We propose a strategy that extends existing models through continued pretraining on synthetic data sampled from a customized prior. The resulting model, TabPFN-Wide, matches or exceeds its base model's performance while exhibiting improved robustness to noise. It scales seamlessly beyond 50,000 features, regardless of noise levels, while maintaining inherent interpretability, which is critical for biomedical applications. Our results show that prior-informed adaptation is suitable for enhancing the capability of foundation models on high-dimensional data. On real-world biomedical datasets, many of the most relevant features identified by the model overlap with previous biological findings, while others suggest potential starting points for future studies.