🤖 AI Summary
To address the lack of discriminability, efficiency, and standardized evaluation in representation learning for pan-cancer single-cell transcriptomics, this paper introduces PanFoMa—a lightweight foundation model—and PanFoMaBench, the first dedicated pan-cancer benchmark. PanFoMa integrates a Transformer architecture (for local contextual encoding) with the linear-time state-space model Mamba (for global sequence modeling), employing shared self-attention mechanisms and a modular design trained on rigorously quality-controlled data. Experimental results demonstrate that PanFoMa achieves a 4.0% improvement over state-of-the-art models on PanFoMaBench. It further yields gains of 7.4%, 4.0%, and 3.1% in cell-type annotation, batch integration, and multi-omics integration tasks, respectively. These advances improve both the accuracy and scalability of resolving cross-cancer heterogeneity.
📝 Abstract
Single-cell RNA sequencing (scRNA-seq) is essential for decoding tumor heterogeneity. However, pan-cancer research still faces two key challenges: learning discriminative and efficient single-cell representations, and establishing a comprehensive evaluation benchmark. In this paper, we introduce PanFoMa, a lightweight hybrid neural network that combines the strengths of Transformers and state-space models to balance performance and efficiency. PanFoMa consists of two parts: a front-end local-context encoder with shared self-attention layers that captures complex, order-independent gene interactions, and a back-end global sequential feature decoder that efficiently integrates global context using a linear-time state-space model. This modular design preserves the expressive power of Transformers while leveraging the scalability of Mamba to enable transcriptome modeling, effectively capturing both local and global regulatory signals. To enable robust evaluation, we also construct a large-scale pan-cancer single-cell benchmark, PanFoMaBench, containing over 3.5 million high-quality cells across 33 cancer subtypes, curated through a rigorous preprocessing pipeline. Experimental results show that PanFoMa outperforms state-of-the-art models on our pan-cancer benchmark (+4.0%) and across multiple public tasks, including cell type annotation (+7.4%), batch integration (+4.0%), and multi-omics integration (+3.1%). The code is available at https://github.com/Xiaoshui-Huang/PanFoMa.
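To make the hybrid design concrete, here is a minimal NumPy sketch of the two stages the abstract describes: a front-end self-attention encoder (quadratic in sequence length, order-independent mixing of gene tokens) followed by a back-end linear-time state-space recurrence for global context. All names, dimensions, and the weight-sharing reading of "shared self-attention layers" are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    """Single-head self-attention over gene tokens: O(L^2) in length L."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def ssm_scan(x, A, B, C):
    """Diagonal linear state-space recurrence, O(L) in length L:
       h_t = A * h_{t-1} + B x_t ;  y_t = C h_t   (A applied elementwise)."""
    L = x.shape[0]
    h = np.zeros(B.shape[0])
    ys = np.empty((L, C.shape[0]))
    for t in range(L):
        h = A * h + B @ x[t]
        ys[t] = C @ h
    return ys

rng = np.random.default_rng(0)
L, d, n = 8, 16, 32              # gene tokens, embedding dim, SSM state size
x = rng.normal(size=(L, d))

# Front-end: two encoder layers reusing one set of attention weights
# (one plausible reading of "shared self-attention layers"; an assumption).
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
h1 = self_attention(x, Wq, Wk, Wv)
h2 = self_attention(h1, Wq, Wk, Wv)  # same weights, second layer

# Back-end: linear-time global mixing over the encoded sequence.
A = rng.uniform(0.5, 0.99, size=n)   # stable per-channel decay
B = rng.normal(size=(n, d)) * 0.1
C = rng.normal(size=(d, n)) * 0.1
y = ssm_scan(h2, A, B, C)
print(y.shape)                       # (8, 16)
```

The key trade-off the abstract points to is visible here: the attention stage pays O(L²) to model arbitrary pairwise gene interactions, while the recurrent scan accumulates global context in O(L), which is what makes scaling to full transcriptomes tractable.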