A Standardized Framework For Evaluating Gene Expression Generative Models

πŸ“… 2026-03-11
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Current generative models for single-cell gene expression lack a unified, biologically meaningful evaluation framework, leading to incomparable results and poor reproducibility. To address this, the authors propose GGE, an open-source Python framework that introduces the first biology-oriented standardized benchmarking system. GGE integrates distributional similarity metrics, differential gene expression analysis, and perturbation-effect correlation assessment within configurable computation spaces. The accompanying analysis demonstrates that existing evaluation metrics are highly sensitive to implementation details, which undermines fair model comparison. By providing a robust and interpretable evaluation protocol, GGE improves the reliability of benchmarking generative models and establishes a trustworthy foundation for research in perturbation prediction and counterfactual inference.
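
The summary's point about implementation sensitivity can be illustrated concretely. The snippet below is a minimal sketch on hypothetical toy data (not the GGE API, which is not shown here): it computes an average per-gene Wasserstein distance between "real" and "generated" expression matrices, once on raw counts and once after a common log-normalization step, and the two values generally differ.

```python
# Minimal sketch (not the GGE API): how a distributional metric can change
# with the computation space it is evaluated in.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Hypothetical toy data: cells x genes count matrices for "real" and "generated".
real = rng.negative_binomial(5, 0.30, size=(500, 100)).astype(float)
gen = rng.negative_binomial(5, 0.35, size=(500, 100)).astype(float)

def mean_gene_wasserstein(a, b):
    """Average 1-D Wasserstein distance across genes."""
    return np.mean([wasserstein_distance(a[:, g], b[:, g]) for g in range(a.shape[1])])

def lognorm(x, target_sum=1e4):
    """Library-size normalization followed by log1p, a common preprocessing choice."""
    sums = x.sum(axis=1, keepdims=True)
    return np.log1p(x / sums * target_sum)

print("raw counts:    ", mean_gene_wasserstein(real, gen))
print("log-normalized:", mean_gene_wasserstein(lognorm(real), lognorm(gen)))
```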

πŸ“ Abstract
The rapid development of generative models for single-cell gene expression data has created an urgent need for standardized evaluation frameworks. Current evaluation practices suffer from inconsistent metric implementations, incomparable hyperparameter choices, and a lack of biologically-grounded metrics. We present the Generated Genetic Expression Evaluator (GGE), an open-source Python framework that addresses these challenges by providing a comprehensive suite of distributional metrics with explicit computation-space options, as well as biologically-motivated evaluation through differentially expressed gene (DEG)-focused analysis and perturbation-effect correlation, enabling standardized reporting and reproducible benchmarking. Through extensive analysis of the single-cell generative modeling literature, we find that no standardized evaluation protocol exists: methods report incomparable metrics computed in different spaces with different hyperparameters. We demonstrate that metric values vary substantially depending on implementation choices, highlighting the critical need for standardization. GGE enables fair comparison across generative approaches and accelerates progress in perturbation response prediction, cellular identity modeling, and counterfactual inference.
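
The two biologically-motivated checks named in the abstract, DEG-focused analysis and perturbation-effect correlation, can be sketched roughly as follows. This is a hedged illustration on hypothetical toy data, not the GGE implementation: DEG agreement is scored as the Jaccard overlap of the top-k genes ranked by a per-gene t-test, and the perturbation effect is scored as the Pearson correlation between per-gene mean shifts (perturbed minus control) in real versus generated data.

```python
# Minimal sketch (not the GGE API) of two biologically-motivated checks:
# DEG-set overlap and perturbation-effect correlation.
import numpy as np
from scipy.stats import ttest_ind, pearsonr

rng = np.random.default_rng(1)
n_genes = 200

# Hypothetical toy data: control / perturbed cells for real and generated sets.
shift = rng.normal(0, 1, n_genes) * (rng.random(n_genes) < 0.1)  # ~10% of genes respond
real_ctrl = rng.normal(0, 1, size=(300, n_genes))
real_pert = rng.normal(shift, 1, size=(300, n_genes))
gen_ctrl = rng.normal(0, 1, size=(300, n_genes))
gen_pert = rng.normal(shift * 0.8, 1, size=(300, n_genes))  # imperfect generator

def top_degs(ctrl, pert, k=20):
    """Rank genes by a per-gene t-test between perturbed and control cells."""
    pvals = ttest_ind(pert, ctrl, axis=0).pvalue
    return set(np.argsort(pvals)[:k])

# DEG overlap: Jaccard similarity between real and generated top-k DEG sets.
real_degs, gen_degs = top_degs(real_ctrl, real_pert), top_degs(gen_ctrl, gen_pert)
jaccard = len(real_degs & gen_degs) / len(real_degs | gen_degs)

# Perturbation-effect correlation: Pearson r between per-gene mean shifts.
real_effect = real_pert.mean(axis=0) - real_ctrl.mean(axis=0)
gen_effect = gen_pert.mean(axis=0) - gen_ctrl.mean(axis=0)
r, _ = pearsonr(real_effect, gen_effect)

print(f"DEG Jaccard overlap: {jaccard:.2f}, effect correlation: {r:.2f}")
```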
Problem

Research questions and friction points this paper is trying to address.

generative models
single-cell gene expression
evaluation framework
standardization
biologically-grounded metrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

generative models
single-cell gene expression
standardized evaluation
differentially expressed genes
perturbation response
Andrea Rubbi
Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom
Andrea Giuseppe Di Francesco
Sapienza University of Rome, Rome, Italy
Mohammad Lotfollahi
Wellcome Sanger Institute, Cambridge, United Kingdom
Pietro LiΓ²
Professor, University of Cambridge