SCALE: Scalable Conditional Atlas-Level Endpoint transport for virtual cell perturbation prediction

📅 2026-03-18
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses key limitations in current virtual cell perturbation prediction methods: inefficient training and inference pipelines, modeling instability under high-dimensional sparse representations, and evaluation metrics overly reliant on reconstruction accuracy at the expense of biological fidelity. To overcome these challenges, we propose a perturbation modeling framework grounded in conditional optimal transport, integrating a set-aware flow architecture with endpoint-guided supervision to jointly enhance training stability and perturbation effect recovery. Leveraging an efficient BioNeMo-based training-inference system and an LLaMA-driven cellular encoder, our approach achieves substantial gains in scalability. On the Tahoe-100M benchmark, it outperforms the STATE model by 12.02% in PDCorr and 10.66% in DE Overlap, while accelerating pretraining by 12.51× and inference by 1.29×.

๐Ÿ“ Abstract
Virtual cell models aim to enable in silico experimentation by predicting how cells respond to genetic, chemical, or cytokine perturbations from single-cell measurements. In practice, however, large-scale perturbation prediction remains constrained by three coupled bottlenecks: inefficient training and inference pipelines, unstable modeling in high-dimensional sparse expression space, and evaluation protocols that overemphasize reconstruction-like accuracy while underestimating biological fidelity. In this work we present SCALE, a specialized large-scale foundation model for virtual cell perturbation prediction that addresses the above limitations jointly. First, we build a BioNeMo-based training and inference framework that substantially improves data throughput, distributed scalability, and deployment efficiency, yielding a 12.51× speedup in pretraining and 1.29× in inference over the prior SOTA pipeline under matched system settings. Second, we formulate perturbation prediction as conditional transport and implement it with a set-aware flow architecture that couples LLaMA-based cellular encoding with endpoint-oriented supervision. This design yields more stable training and stronger recovery of perturbation effects. Third, we evaluate the model on Tahoe-100M using a rigorous cell-level protocol centered on biologically meaningful metrics rather than reconstruction alone. On this benchmark, our model improves PDCorr by 12.02% and DE Overlap by 10.66% over STATE. Together, these results suggest that advancing virtual cells requires not only better generative objectives, but also the co-design of scalable infrastructure, stable transport modeling, and biologically faithful evaluation.
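The abstract does not spell out the training objective, so as a rough illustration of the conditional-transport idea (straight-line flow matching between control and perturbed cells, with an endpoint check on the integrated trajectory), here is a dependency-light numpy sketch. All shapes, the linear stand-in model, and the toy data are invented for illustration; the paper's set-aware flow network and LLaMA-based encoder are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for paired expression profiles (hypothetical shapes):
# x0 = control cells, x1 = perturbed cells, c = perturbation embedding.
n, g, d = 256, 8, 4
x0 = rng.normal(size=(n, g))
shift = rng.normal(size=g)                  # ground-truth perturbation effect
x1 = x0 + shift + 0.1 * rng.normal(size=(n, g))
c = np.tile(rng.normal(size=d), (n, 1))     # one shared condition vector

# Conditional flow matching: regress the target velocity v* = x1 - x0
# along the straight interpolation x_t = (1 - t) x0 + t x1.
# A linear model v(x_t, t, c) = [x_t, t, c, 1] @ W stands in for the network.
t = rng.uniform(size=(n, 1))
xt = (1 - t) * x0 + t * x1
feats = np.hstack([xt, t, c, np.ones((n, 1))])
target = x1 - x0

# Closed-form least squares instead of SGD, to keep the demo self-contained.
W, *_ = np.linalg.lstsq(feats, target, rcond=None)

# "Endpoint-oriented" check: integrate the learned velocity from x0 with
# a few Euler steps and compare the endpoint against the observed x1.
xhat, steps = x0.copy(), 10
for k in range(steps):
    tk = np.full((n, 1), k / steps)
    f = np.hstack([xhat, tk, c, np.ones((n, 1))])
    xhat = xhat + (f @ W) / steps

endpoint_mse = float(np.mean((xhat - x1) ** 2))
print(f"endpoint MSE after integration: {endpoint_mse:.4f}")
```

In this toy setup the endpoint error stays near the injected observation noise, which is the property endpoint-guided supervision targets: the integrated trajectory, not just the instantaneous velocity, should land on the perturbed distribution.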
Problem

Research questions and friction points this paper is trying to address.

virtual cell perturbation prediction
scalable training and inference
high-dimensional sparse expression
biological fidelity evaluation
conditional transport
Innovation

Methods, ideas, or system contributions that make the work stand out.

conditional transport
scalable foundation model
virtual cell perturbation
set-aware flow architecture
biologically faithful evaluation
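The page names DE Overlap as one of the biologically oriented metrics but does not define it. A common reading, assumed here, is the fraction of top-k differentially expressed genes (ranked by absolute effect size) shared between the observed and predicted perturbation deltas; the paper's exact protocol may differ. A minimal numpy sketch:

```python
import numpy as np

def de_overlap(true_delta, pred_delta, k=50):
    """Fraction of top-k DE genes (by |effect size|) shared between the
    observed and predicted perturbation deltas. Assumed, simplified
    reading of 'DE Overlap'; not the paper's exact protocol."""
    top_true = set(np.argsort(-np.abs(true_delta))[:k])
    top_pred = set(np.argsort(-np.abs(pred_delta))[:k])
    return len(top_true & top_pred) / k

rng = np.random.default_rng(1)
g = 2000                                  # hypothetical gene count
true_delta = rng.normal(size=g)           # observed perturbation effect

# A prediction that tracks the true effect scores well above chance...
good_pred = true_delta + 0.2 * rng.normal(size=g)
# ...while an unrelated prediction scores near the chance level k/g.
bad_pred = rng.normal(size=g)

good = de_overlap(true_delta, good_pred)
bad = de_overlap(true_delta, bad_pred)
print(f"correlated prediction: {good:.2f}, random prediction: {bad:.2f}")
```

Unlike per-gene reconstruction error, this metric only rewards recovering which genes respond most strongly, which is why the abstract frames it as a biological-fidelity measure rather than a reconstruction one.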
👥 Authors
Shuizhou Chen
Shanghai Artificial Intelligence Laboratory; School of Computer Engineering & Science, Shanghai University

Lang Yu
East China Normal University
Machine Learning, Deep Learning

Kedu Jin
Shanghai Artificial Intelligence Laboratory; School of Life Science and Technology, China Pharmaceutical University

Songming Zhang
Beijing Jiaotong University
Natural Language Processing, Text Generation, Machine Translation

Hao Wu
Shanghai Artificial Intelligence Laboratory

Wenxuan Huang
CUHK & ECNU
Artificial General Intelligence, MLLM, LLM, AIGC, Model Acceleration

Sheng Xu
Shanghai Artificial Intelligence Laboratory

Quan Qian
Shanghai Artificial Intelligence Laboratory; School of Computer Engineering & Science, Shanghai University

Qin Chen
East China Normal University
Natural Language Processing, Question Answering, Large Language Model

Lei Bai
Shanghai AI Laboratory
Foundation Model, Science Intelligence, Multi-Agent System, Autonomous Discovery

Siqi Sun
Associate Professor; Fudan University, Shanghai AI Lab
Deep Learning, AI for Science

Zhangyang Gao
Shanghai Artificial Intelligence Laboratory