SafeGenes: Evaluating the Adversarial Robustness of Genomic Foundation Models

📅 2025-06-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work systematically evaluates the adversarial robustness of genomic foundation models (GFMs) on high-stakes tasks such as variant effect prediction—a critical gap in current assessment frameworks. To address this, we introduce the first dedicated adversarial safety evaluation framework for GFMs, proposing a dual-path evaluation paradigm spanning both input and embedding spaces: (i) input-space perturbations via FGSM applied to DNA sequences, and (ii) embedding-space manipulation via learnable soft prompts. Empirical evaluation on state-of-the-art models—including ESM1b and ESM1v—demonstrates that both attack strategies significantly degrade prediction accuracy, exposing severe adversarial vulnerabilities in contemporary GFMs. Our study establishes a new benchmark for genomic AI safety assessment and provides foundational empirical evidence and methodological guidance for enhancing model robustness and enabling trustworthy deployment in clinical and biomedical applications.

📝 Abstract
Genomic Foundation Models (GFMs), such as Evolutionary Scale Modeling (ESM), have demonstrated significant success in variant effect prediction. However, their adversarial robustness remains largely unexplored. To address this gap, we propose SafeGenes: a framework for Secure analysis of genomic foundation models, leveraging adversarial attacks to evaluate robustness against both engineered near-identical adversarial Genes and embedding-space manipulations. In this study, we assess the adversarial vulnerabilities of GFMs using two approaches: the Fast Gradient Sign Method (FGSM) and a soft prompt attack. FGSM introduces minimal perturbations to input sequences, while the soft prompt attack optimizes continuous embeddings to manipulate model predictions without modifying the input tokens. By combining these techniques, SafeGenes provides a comprehensive assessment of GFM susceptibility to adversarial manipulation. Targeted soft prompt attacks led to substantial performance degradation, even in large models such as ESM1b and ESM1v. These findings expose critical vulnerabilities in current foundation models, opening new research directions toward improving their security and robustness in high-stakes genomic applications such as variant effect prediction.
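The FGSM step described above can be sketched in a few lines. This is a minimal toy illustration, not the paper's code: the linear scorer, the relaxed one-hot encoding, and all variable names are assumptions made for the example, and real GFM attacks would compute the gradient through the full model.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.05):
    """Fast Gradient Sign Method: x_adv = x + epsilon * sign(dL/dx)."""
    return x + epsilon * np.sign(grad)

# Toy stand-in for a model: a linear scorer over a relaxed one-hot
# encoding of a length-4 DNA sequence (rows = positions, cols = A/C/G/T).
rng = np.random.default_rng(0)
x = rng.random((4, 4))           # illustrative "input sequence" encoding
w = rng.standard_normal((4, 4))  # illustrative frozen model weights
y = 1.0                          # true label

# Binary cross-entropy gradient w.r.t. the input for score = sum(w * x).
score = np.sum(w * x)
p = 1.0 / (1.0 + np.exp(-score))
grad = (p - y) * w               # dL/dx

x_adv = fgsm_perturb(x, grad, epsilon=0.05)
```

Because FGSM moves each coordinate by exactly `epsilon` in the direction of the gradient's sign, the adversarial input stays within an L-infinity ball of radius `epsilon` around the original, which is what makes the perturbation "minimal".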
Problem

Research questions and friction points this paper is trying to address.

Evaluating adversarial robustness of genomic foundation models
Assessing vulnerabilities to engineered adversarial genes and embeddings
Exposing critical security flaws in variant effect prediction models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging adversarial attacks for robustness evaluation
Using FGSM for minimal input sequence perturbations
Applying soft prompt attacks to manipulate embeddings
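The soft prompt attack above can be sketched with a toy frozen model. Everything here is illustrative (the linear scorer, dimensions, learning rate, and names are assumptions, not the paper's setup): a continuous prompt vector is added to the frozen input embedding and optimized by gradient descent to pull the prediction toward an attacker-chosen target, while the input tokens and model weights never change.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8                              # illustrative embedding dimension
x_embed = rng.standard_normal(d)   # frozen (mean-pooled) input embedding
w = rng.standard_normal(d)         # frozen toy model weights
prompt = np.zeros(d)               # learnable soft prompt, initialized to zero
target = 0.0                       # attacker's target probability

lr = 0.5
for _ in range(200):
    z = w @ (x_embed + prompt)     # prompt shifts the embedding, not the tokens
    p = 1.0 / (1.0 + np.exp(-z))
    grad = (p - target) * w        # dL/dprompt; model weights stay frozen
    prompt -= lr * grad            # optimize only the soft prompt

z_final = w @ (x_embed + prompt)
p_final = 1.0 / (1.0 + np.exp(-z_final))
```

After optimization `p_final` is driven toward the attacker's target, even though the discrete input is untouched, which is why this attack evades input-level defenses such as sequence filtering.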