🤖 AI Summary
Research on large language models (LLMs) and agent-based models in single-cell biology is fragmented across data modalities, architectures, and evaluation criteria. Method: We systematically review 58 models and propose a unified classification framework encompassing six categories—foundation models, text-bridged models, spatial models, multimodal models, epigenomic models, and agent models—covering RNA, ATAC, multi-omics, and spatial modalities, and supporting eight core tasks including annotation and trajectory inference. We introduce the paradigm of “single-cell language-driven intelligence,” establishing cross-dataset–model–evaluation linkages and defining ten domain-specific evaluation dimensions (e.g., biological interpretability, multi-omics alignment, fairness, and privacy preservation). Results: Evaluated on 40+ public datasets and multimodal benchmarks, our analysis identifies critical challenges in interpretability, standardization, and trustworthy AI, providing the field with authoritative evaluation standards and a comprehensive technical roadmap.
📝 Abstract
Large language models (LLMs) and emerging agentic frameworks are beginning to transform single-cell biology by enabling natural-language reasoning, generative annotation, and multimodal data integration. However, progress remains fragmented across data modalities, architectures, and evaluation standards. LLM4Cell presents the first unified survey of 58 foundation and agentic models developed for single-cell research, spanning RNA, ATAC, multi-omic, and spatial modalities. We categorize these methods into six families (foundation, text-bridged, spatial, multimodal, epigenomic, and agentic) and map them to eight key analytical tasks, including annotation, trajectory and perturbation modeling, and drug-response prediction. Drawing on over 40 public datasets, we analyze benchmark suitability, data diversity, and ethical or scalability constraints, and evaluate models across ten domain dimensions covering biological grounding, multi-omics alignment, fairness, privacy, and explainability. By linking datasets, models, and evaluation domains, LLM4Cell provides the first integrated view of language-driven single-cell intelligence and outlines open challenges in interpretability, standardization, and trustworthy model development.
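The family-to-task mapping described above can be sketched as a simple lookup table. This is an illustrative sketch only, not code from the survey: the six family names and the four explicitly listed tasks come from the abstract, while the remaining task names and the example family-to-task assignments are hypothetical placeholders.

```python
# Illustrative sketch of the LLM4Cell taxonomy as a lookup table.
# Family names and the first four tasks follow the abstract; the last
# four task names and the example mapping below are assumptions.
FAMILIES = [
    "foundation", "text-bridged", "spatial",
    "multimodal", "epigenomic", "agentic",
]

TASKS = [
    "cell-type annotation",          # named in the survey
    "trajectory inference",          # named in the survey
    "perturbation modeling",         # named in the survey
    "drug-response prediction",      # named in the survey
    "batch integration",             # hypothetical placeholder
    "multi-omics alignment",         # hypothetical placeholder
    "gene regulatory inference",     # hypothetical placeholder
    "spatial domain identification", # hypothetical placeholder
]


def supported_tasks(family: str, task_map: dict) -> set:
    """Return the tasks a model family covers, per a user-supplied mapping.

    Unknown families raise ValueError; families with no entry in the
    mapping are treated as covering no tasks.
    """
    if family not in FAMILIES:
        raise ValueError(f"unknown family: {family}")
    return task_map.get(family, set())


# Hypothetical assignment, for demonstration only.
example_map = {
    "spatial": {"spatial domain identification", "cell-type annotation"},
}
print(sorted(supported_tasks("spatial", example_map)))
```

A real cross-dataset–model–evaluation linkage, as proposed in the survey, would extend this with per-model dataset and evaluation-dimension entries; the flat dict here just shows the shape of such an index.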