🤖 AI Summary
To address the weak coupling and poor interpretability between gene subset selection and prediction tasks in single-cell RNA-seq data, this paper proposes the first end-to-end differentiable framework that jointly optimizes sparse, discrete gene selection and multi-task prediction. Methodologically, it integrates Gumbel-Softmax-based implicit discrete sampling, a differentiable subset selection mechanism, and multi-task shared representation learning, augmented by a closed-loop feedback refinement under sparsity constraints. Key contributions include: (1) simultaneous gene screening and model training in a single pass; (2) discovery of biologically generalizable biomarkers across tasks; and (3) superior predictive accuracy coupled with strong biological consistency. Evaluated on two scRNA-seq benchmarks, our method significantly outperforms state-of-the-art approaches: the selected gene subsets are 37% more compact on average, exhibit enhanced interpretability, and demonstrate superior cross-task transferability.
📝 Abstract
Selecting compact and informative gene subsets from single-cell transcriptomic data is essential for biomarker discovery, improving interpretability, and cost-effective profiling. However, most existing feature selection approaches either operate as multi-stage pipelines or rely on post hoc feature attribution, making selection and prediction weakly coupled. In this work, we present YOTO (you only train once), an end-to-end framework that jointly identifies discrete gene subsets and performs prediction within a single differentiable architecture. In our model, the prediction task directly guides which genes are selected, while the learned subsets, in turn, shape the predictive representation. This closed feedback loop enables the model to iteratively refine both what it selects and how it predicts during training. Unlike existing approaches, YOTO enforces sparsity so that only the selected genes contribute to inference, eliminating the need to train additional downstream classifiers. Through a multi-task learning design, the model learns shared representations across related objectives, allowing partially labeled datasets to inform one another, and discovering gene subsets that generalize across tasks without additional training steps. We evaluate YOTO on two representative single-cell RNA-seq datasets, showing that it consistently outperforms state-of-the-art baselines. These results demonstrate that sparse, end-to-end, multi-task gene subset selection improves predictive performance and yields compact and meaningful gene subsets, advancing biomarker discovery and single-cell analysis.