🤖 AI Summary
Current single-cell foundation models underperform specialized models on downstream tasks, hindering the dissection of disease mechanisms and slowing drug discovery. To address this, we propose a family of general-purpose foundation models tailored for single-cell biological understanding. Our approach is the first to integrate ultra-large-scale single-cell data (116 million cells) with biologically informed, annotation-guided supervised pretraining, enabling systematic characterization of predictable performance gains from data volume and model parameter count. Leveraging a Transformer architecture, we develop six multi-scale models (70M–400M parameters) and introduce cell-level phenotypic annotations to enrich the pretraining objectives. On unseen-donor disease-state identification, a key clinical challenge, our models substantially outperform state-of-the-art methods. They also yield more modest but consistent improvements on classifying healthy versus diseased cells, improving cross-individual and cross-state generalization.
📝 Abstract
Understanding the biological mechanisms of disease is critical for medicine, and in particular for drug discovery. AI-powered analysis of genome-scale biological data holds great potential in this regard. The increasing availability of single-cell RNA sequencing data has enabled the development of large foundation models for disease biology. However, existing foundation models either do not improve, or only modestly improve, over task-specific models in downstream applications. Here, we explored two avenues for improving the state of the art. First, we scaled the pre-training dataset to 116 million cells, larger than those used by previous models. Second, we leveraged large-scale biological annotations as a form of supervision during pre-training. We trained the TEDDY family of models, comprising six transformer-based state-of-the-art single-cell foundation models at 70 million, 160 million, and 400 million parameters. We vetted our models on two downstream evaluation tasks: identifying the underlying disease state of held-out donors not seen during training, and distinguishing healthy cells from diseased ones for disease conditions and donors not seen during training. Scaling experiments showed that performance improved predictably with both data volume and parameter count. Our models showed substantial improvement over existing work on the first task and more muted improvements on the second.
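The claim that performance "improved predictably" with data volume and parameter count is typically established by fitting a power law to measurements at each scale. The sketch below illustrates that standard procedure; the loss values, and the assumption that validation loss is the metric, are hypothetical and not taken from the paper.

```python
# Illustrative sketch (not the paper's actual analysis): fit a power law
# L(N) = a * N^(-b) to hypothetical validation-loss measurements at the
# three TEDDY model sizes, then extrapolate to a larger model.
import numpy as np

# Hypothetical (parameter count, validation loss) pairs; the loss
# numbers are invented purely for illustration.
params = np.array([70e6, 160e6, 400e6])
loss = np.array([2.10, 1.95, 1.82])

# A power law is linear in log-log space: log L = log a - b * log N,
# so an ordinary least-squares line fit recovers the exponent.
slope, log_a = np.polyfit(np.log(params), np.log(loss), 1)
a, b = np.exp(log_a), -slope

def predict_loss(n_params: float) -> float:
    """Extrapolate the fitted power law to a new model size."""
    return a * n_params ** (-b)

print(f"fitted exponent b = {b:.3f}")
print(f"extrapolated loss at 1B params: {predict_loss(1e9):.3f}")
```

"Predictable" scaling in this sense means the points at successive scales fall on such a line in log-log space, so the curve fitted at small scales anticipates performance at larger ones.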