🤖 AI Summary
Current generative models for single-cell gene expression lack a unified, biologically meaningful evaluation framework, leading to incomparable results and poor reproducibility. To address this gap, the work proposes GGE, an open-source Python framework that introduces the first biology-oriented standardized benchmarking system for generated expression data. GGE integrates distributional similarity metrics, differential gene expression analysis, and perturbation-effect correlation assessment within configurable computation spaces. The framework shows that existing evaluation metrics are highly sensitive to implementation details, which undermines fair model comparison. By providing a robust and interpretable evaluation protocol, GGE improves the reliability of benchmarking generative models and establishes a trustworthy foundation for research in perturbation prediction and counterfactual inference.
📄 Abstract
The rapid development of generative models for single-cell gene expression data has created an urgent need for standardized evaluation frameworks. Current evaluation practices suffer from inconsistent metric implementations, incomparable hyperparameter choices, and a lack of biologically grounded metrics. We present the Generated Genetic Expression Evaluator (GGE), an open-source Python framework that addresses these challenges. GGE provides a comprehensive suite of distributional metrics with explicit computation-space options, together with biologically motivated evaluation through differentially expressed gene (DEG)-focused analysis and perturbation-effect correlation, enabling standardized reporting and reproducible benchmarking. Through an extensive analysis of the single-cell generative modeling literature, we find that no standardized evaluation protocol exists: methods report incomparable metrics computed in different spaces with different hyperparameters. We demonstrate that metric values vary substantially with these implementation choices, highlighting the critical need for standardization. GGE enables fair comparison across generative approaches and accelerates progress in perturbation response prediction, cellular identity modeling, and counterfactual inference.
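The sensitivity of metric values to the computation space can be illustrated with a minimal, self-contained sketch. This is not GGE's API: the metric (1-D Wasserstein distance), the synthetic negative-binomial "counts", and the log1p preprocessing are all illustrative assumptions, chosen only to show that the same metric on the same data changes with the space it is computed in.

```python
# Illustrative sketch (not GGE's API): the same distributional metric
# gives different values depending on the computation space.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Toy "real" and "generated" expression counts for one gene across cells;
# negative binomial is a common model for single-cell count data.
real = rng.negative_binomial(5, 0.30, size=500).astype(float)
gen = rng.negative_binomial(5, 0.25, size=500).astype(float)

# Same metric, two computation spaces: raw counts vs log1p-transformed.
d_raw = wasserstein_distance(real, gen)
d_log = wasserstein_distance(np.log1p(real), np.log1p(gen))

print(f"Wasserstein (raw counts): {d_raw:.3f}")
print(f"Wasserstein (log1p):      {d_log:.3f}")
```

Because the two numbers are not comparable, a benchmark must report the computation space alongside the metric value, which is exactly the kind of explicit option the framework standardizes.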