🤖 AI Summary
To address poor interpretability and limited generalizability in few-shot, high-dimensional omics classification, this study proposes an interpretable classification framework integrating feature selection with synthetic data generation. Methodologically, it innovatively combines bootstrap-based feature screening, hybrid data augmentation using SMOTE and TABGAN, and an ensemble of binary classifiers, rigorously evaluated via stratified cross-validation. We systematically uncover, for the first time, the synergistic mechanism by which joint feature selection and synthetic data generation jointly enhance both interpretability and generalizability—even under extreme data scarcity (e.g., *n* < 30 per class). The framework demonstrates robust performance across six binary classification tasks in the EMTAB-8026 benchmark and maintains consistent accuracy when transferred to larger independent test sets, with no statistically significant degradation. This work establishes a novel paradigm for few-shot omics modeling that simultaneously ensures transparency, reliability, and practical utility.
📝 Abstract
Given the increasing complexity of omics datasets, a key challenge is not only improving classification performance but also enhancing the transparency and reliability of model decisions. Effective model performance and feature selection are fundamental for explainability and reliability. In many cases, high dimensional omics datasets suffer from limited number of samples due to clinical constraints, patient conditions, phenotypes rarity and others conditions. Current omics based classification models often suffer from narrow interpretability, making it difficult to discern meaningful insights where trust and reproducibility are critical. This study presents a machine learning based classification framework that integrates feature selection with data augmentation techniques to achieve high standard classification accuracy while ensuring better interpretability. Using the publicly available dataset (E MTAB 8026), we explore a bootstrap analysis in six binary classification scenarios to evaluate the proposed model's behaviour. We show that the proposed pipeline yields cross validated perfomance on small dataset that is conserved when the trained classifier is applied to a larger test set. Our findings emphasize the fundamental balance between accuracy and feature selection, highlighting the positive effect of introducing synthetic data for better generalization, even in scenarios with very limited samples availability.