🤖 AI Summary
Genomic perturbation experiments (e.g., CRISPR PerturbSeq) are costly and time-consuming, while existing graph neural network (GNN)-based expression prediction models rely on active learning for data selection—making them susceptible to initialization bias and resulting in inefficient early sampling and resource waste. To address this, we propose a non-iterative, initialization-agnostic one-shot data selection method. Grounded in graph generalization theory, our approach formulates a submodular optimization criterion to predefine optimal perturbation combinations *prior* to experimental execution. It jointly incorporates graph-structured biological priors and expression modeling objectives, thereby significantly reducing early trial-and-error overhead. Experiments on PerturbSeq datasets demonstrate that our method achieves prediction accuracy comparable to state-of-the-art active learning approaches within limited experimental rounds, while reducing initial sampling error by over 30%.
📝 Abstract
Genomic studies, including CRISPR-based PerturbSeq analyses, face a vast hypothesis space, while gene perturbations remain costly and time-consuming. Gene expression models based on graph neural networks are trained to predict the outcomes of gene perturbations to facilitate such experiments. Active learning methods are often employed to train these models due to the cost of the genomic experiments required to build the training set. However, poor model initialization in active learning can result in suboptimal early selections, wasting time and valuable resources. While typical active learning mitigates this issue over many iterations, the limited number of experimental cycles in genomic studies exacerbates the risk. To this end, we propose graph-based one-shot data selection methods for training gene expression models. Unlike active learning, one-shot data selection predefines the gene perturbations before training, hence removing the initialization bias. The data selection is motivated by theoretical studies of graph neural network generalization. The criteria are defined over the input graph and are optimized with submodular maximization. We compare them empirically to baselines and active learning methods that are state-of-the-art on this problem. The results demonstrate that graph-based one-shot data selection achieves comparable accuracy while alleviating the aforementioned risks.