PanFoMa: A Lightweight Foundation Model and Benchmark for Pan-Cancer

📅 2025-12-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the lack of discriminability, efficiency, and standardized evaluation in representation learning for pan-cancer single-cell transcriptomics, this paper introduces PanFoMa, a lightweight foundation model, and PanFoMaBench, the first dedicated pan-cancer benchmark. PanFoMa integrates a Transformer front end (for local contextual encoding) with the linear-time state-space model Mamba (for global sequence modeling), employing shared self-attention layers and a modular design, and is trained on rigorously quality-controlled data. Experimental results show that PanFoMa achieves a 4.0% improvement over state-of-the-art models on PanFoMaBench, with further gains of 7.4%, 4.0%, and 3.1% on cell-type annotation, batch integration, and multi-omics integration, respectively. These advances improve both the accuracy and scalability of resolving cross-cancer heterogeneity.

📝 Abstract
Single-cell RNA sequencing (scRNA-seq) is essential for decoding tumor heterogeneity. However, pan-cancer research still faces two key challenges: learning discriminative and efficient single-cell representations, and establishing a comprehensive evaluation benchmark. In this paper, we introduce PanFoMa, a lightweight hybrid neural network that combines the strengths of Transformers and state-space models to achieve a balance between performance and efficiency. PanFoMa consists of a front-end local-context encoder with shared self-attention layers to capture complex, order-independent gene interactions; and a back-end global sequential feature decoder that efficiently integrates global context using a linear-time state-space model. This modular design preserves the expressive power of Transformers while leveraging the scalability of Mamba to enable transcriptome modeling, effectively capturing both local and global regulatory signals. To enable robust evaluation, we also construct a large-scale pan-cancer single-cell benchmark, PanFoMaBench, containing over 3.5 million high-quality cells across 33 cancer subtypes, curated through a rigorous preprocessing pipeline. Experimental results show that PanFoMa outperforms state-of-the-art models on our pan-cancer benchmark (+4.0%) and across multiple public tasks, including cell type annotation (+7.4%), batch integration (+4.0%) and multi-omics integration (+3.1%). The code is available at https://github.com/Xiaoshui-Huang/PanFoMa.
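The two-stage design described in the abstract (an attention-based local-context encoder feeding a linear-time state-space decoder) can be sketched in miniature. Everything below is illustrative only: the single-head attention, the time-invariant toy recurrence, and all shapes and names are assumptions for exposition, not the authors' implementation (Mamba's actual recurrence is selective and more elaborate).

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head self-attention: order-independent mixing of gene tokens."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def ssm_scan(x, A, B, C):
    """Toy linear state-space recurrence: h_t = A h_{t-1} + B x_t, y_t = C h_t.
    Runs in time linear in the sequence length, unlike full attention."""
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:
        h = A @ h + B @ x_t
        ys.append(C @ h)
    return np.stack(ys)

rng = np.random.default_rng(0)
L, d, d_state = 16, 8, 4  # toy sizes: 16 gene tokens, 8-dim embeddings
x = rng.normal(size=(L, d))
w_q, w_k, w_v = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
A = np.eye(d_state) * 0.9               # stable, time-invariant recurrence
B = rng.normal(size=(d_state, d)) * 0.1
C = rng.normal(size=(d, d_state)) * 0.1

local = self_attention(x, w_q, w_k, w_v)  # front end: local gene context
out = ssm_scan(local, A, B, C)            # back end: global sequential features
print(out.shape)  # (16, 8)
```

The attention stage mixes all gene tokens pairwise (capturing order-independent interactions), while the recurrent scan integrates global context in a single linear pass, which is the efficiency trade-off the hybrid design exploits.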
Problem

Research questions and friction points this paper is trying to address.

Learning discriminative yet efficient single-cell representations for pan-cancer scRNA-seq.
The absence of a comprehensive, standardized benchmark for pan-cancer evaluation.
Capturing both local gene interactions and global sequence context without sacrificing scalability.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight hybrid network combining Transformer self-attention with a linear-time state-space model (Mamba)
Modular design pairing a front-end local-context encoder with a back-end global sequential feature decoder
PanFoMaBench: a large-scale pan-cancer benchmark with over 3.5 million cells across 33 cancer subtypes