Quantifying Memorization and Privacy Risks in Genomic Language Models

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Genomic language models may memorize sensitive individual DNA sequences during training, posing significant privacy and compliance risks. This work proposes the first unified, multi-vector framework for assessing memorization risk in genomic models, integrating perplexity analysis, controlled canary sequence insertion, and membership inference attacks to systematically quantify memorization behavior. Experimental results demonstrate that genomic language models commonly exhibit substantial memorization, with risk levels strongly influenced by model architecture and training strategies. The findings further reveal that no single evaluation method suffices to fully characterize privacy risks, thereby underscoring the necessity and effectiveness of the proposed integrative framework.
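To make the first of these vectors concrete, here is a minimal sketch of perplexity-based memorization detection, assuming a HuggingFace-style causal genomic LM (`model`, `tokenizer`). The helper names and the z-score threshold are illustrative assumptions, not the paper's implementation.

```python
# Sketch: flag training sequences whose perplexity is anomalously low
# relative to held-out reference sequences, a common memorization signal.
import math
import torch

def sequence_perplexity(model, tokenizer, dna: str) -> float:
    """Perplexity of one DNA sequence under a causal genomic LM
    (assumes a HuggingFace-style model/tokenizer interface)."""
    ids = tokenizer(dna, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids makes the model return mean cross-entropy loss.
        loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

def flag_memorized(model, tokenizer, train_seqs, reference_seqs, z_thresh=-2.0):
    """Flag training sequences whose perplexity z-score, computed against
    the reference distribution, falls below z_thresh (an assumed cutoff)."""
    ref = [sequence_perplexity(model, tokenizer, s) for s in reference_seqs]
    mu = sum(ref) / len(ref)
    sd = (sum((p - mu) ** 2 for p in ref) / len(ref)) ** 0.5
    flagged = []
    for s in train_seqs:
        z = (sequence_perplexity(model, tokenizer, s) - mu) / sd
        if z < z_thresh:
            flagged.append((s, z))
    return flagged
```

Sequences scoring far below the reference distribution are candidates for closer auditing; on its own, though, this vector misses memorization that does not manifest as low perplexity, which is the paper's motivation for combining it with the other two.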

📝 Abstract
Genomic language models (GLMs) have emerged as powerful tools for learning representations of DNA sequences, enabling advances in variant prediction, regulatory element identification, and cross-task transfer learning. However, as these models are increasingly trained or fine-tuned on sensitive genomic cohorts, they risk memorizing specific sequences from their training data, raising serious concerns around privacy, data leakage, and regulatory compliance. Despite growing awareness of memorization risks in general-purpose language models, little systematic evaluation exists for these risks in the genomic domain, where data exhibit unique properties such as a fixed nucleotide alphabet, strong biological structure, and individual identifiability. We present a comprehensive, multi-vector privacy evaluation framework designed to quantify memorization risks in GLMs. Our approach integrates three complementary risk assessment methodologies: perplexity-based detection, canary sequence extraction, and membership inference. These are combined into a unified evaluation pipeline that produces a worst-case memorization risk score. To enable controlled evaluation, we plant canary sequences at varying repetition rates into both synthetic and real genomic datasets, allowing precise quantification of how repetition and training dynamics influence memorization. We evaluate our framework across multiple GLM architectures, examining the relationship between sequence repetition, model capacity, and memorization risk. Our results establish that GLMs exhibit measurable memorization and that the degree of memorization varies across architectures and training regimes. These findings reveal that no single attack vector captures the full scope of memorization risk, underscoring the need for multi-vector privacy auditing as a standard practice for genomic AI systems.
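
The abstract's other components also lend themselves to a short sketch: planting canaries at controlled repetition rates, a simple loss-threshold membership inference test, and worst-case aggregation into a single risk score. The function names, 64 bp canary length, repetition rates, and threshold scheme below are assumptions for illustration, not the paper's actual values.

```python
# Sketch of canary planting at controlled repetition rates, a
# loss-threshold membership inference test, and worst-case aggregation.
import random

ALPHABET = "ACGT"  # the fixed nucleotide alphabet noted in the abstract

def make_canary(length: int = 64, rng=None) -> str:
    """Uniform-random nucleotides: near-impossible for a model to
    reproduce unless it memorized the sequence during training."""
    rng = rng or random.Random(0)
    return "".join(rng.choice(ALPHABET) for _ in range(length))

def plant_canaries(corpus: list[str], repetition_rates=(1, 10, 100)):
    """Insert one fresh canary per repetition rate so post-training
    exposure can be measured as a function of how often it was seen."""
    canaries = {}
    for rate in repetition_rates:
        c = make_canary(rng=random.Random(rate))  # distinct canary per rate
        canaries[c] = rate
        corpus.extend([c] * rate)
    random.shuffle(corpus)
    return corpus, canaries

def membership_inference(loss_fn, candidates, threshold: float):
    """Loss-threshold MIA: predict 'member' when the model's loss on a
    candidate falls below a threshold calibrated on known non-members."""
    return {seq: loss_fn(seq) < threshold for seq in candidates}

def worst_case_risk(perplexity_risk, canary_risk, mia_risk):
    """Unified pipeline score: the max over per-vector risks (each
    normalized to [0, 1]) so no weak vector masks an exposed one."""
    return max(perplexity_risk, canary_risk, mia_risk)
```

Taking the maximum rather than an average is one plausible reading of "worst-case memorization risk score": a model that leaks through any single vector is treated as leaking overall.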
Problem

Research questions and friction points this paper aims to address.

memorization
privacy risks
genomic language models
data leakage
individual identifiability
Innovation

Methods, ideas, or system contributions that make the work stand out.

genomic language models
memorization risk
privacy auditing
canary sequences
membership inference