🤖 AI Summary
SNP data are vulnerable to privacy attacks—such as membership inference and kinship detection—due to strong inter-locus correlations. To address this, we propose the first sequence-level differentially private synthetic data generation framework based on a time-inhomogeneous hidden Markov model (TIHMM). Our method captures long-range dependencies among SNPs via position-dependent transition probabilities and applies gradient clipping and differential privacy directly during training on raw sequences, eliminating the need for public data or post-processing. Evaluated on the 1000 Genomes dataset under strict privacy budgets (ε ∈ [1, 10], δ = 10⁻⁴), our synthetic data closely preserve key population-genetic statistics—including allele frequencies, linkage disequilibrium patterns, and population structure—outperforming existing synthetic methods. This yields a strong utility–privacy trade-off, enabling secure, high-fidelity genomic data sharing.
📝 Abstract
Single nucleotide polymorphism (SNP) datasets are fundamental to genetic studies but pose significant privacy risks when shared. Because SNPs are strongly correlated with one another, powerful adversarial attacks—such as masked-value reconstruction, kinship inference, and membership inference—become possible. Existing privacy-preserving approaches either apply differential privacy only to statistical summaries of these datasets, or rely on complex methods that require post-processing and access to a publicly available dataset in order to suppress or selectively share SNPs.
In this study, we introduce an innovative framework for generating synthetic SNP sequence datasets using samples derived from time-inhomogeneous hidden Markov models (TIHMMs). To preserve the privacy of the training data, we ensure that each SNP sequence contributes only a bounded influence during training, enabling strong differential privacy guarantees. Crucially, by operating on full SNP sequences and bounding their gradient contributions, our method directly addresses the privacy risks introduced by their inherent correlations.
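The bounded per-sequence influence described above corresponds to the standard DP-SGD recipe: clip each training sequence's gradient to a fixed norm, sum the clipped gradients, and add Gaussian noise calibrated to that clipping bound. A minimal sketch of one such update step, with illustrative parameter names (`clip_norm`, `noise_mult`) that are assumptions rather than the paper's notation:

```python
import numpy as np

def dp_sgd_step(params, per_seq_grads, clip_norm=1.0, noise_mult=1.1,
                lr=0.1, rng=None):
    """One DP-SGD-style update on HMM parameters.

    per_seq_grads: list of gradient vectors, one per SNP sequence,
    so each sequence's influence on the update is bounded by clip_norm.
    """
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for g in per_seq_grads:
        norm = np.linalg.norm(g)
        # Scale each sequence's gradient down to at most clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    # Gaussian noise is calibrated to the clipping bound (the L2 sensitivity).
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_mult * clip_norm, size=params.shape)
    return params - lr * noisy_sum / len(per_seq_grads)
```

Because clipping caps each sequence's contribution at `clip_norm`, replacing one training sequence changes the summed gradient by at most `2 * clip_norm`, which is what makes the Gaussian-noise calibration yield a differential privacy guarantee.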
Through experiments conducted on the real-world 1000 Genomes dataset, we demonstrate the efficacy of our method using privacy budgets of $\varepsilon \in [1, 10]$ at $\delta = 10^{-4}$. Notably, by allowing the transition models of the HMM to depend on the location in the sequence, we significantly enhance performance, enabling the synthetic datasets to closely replicate the statistical properties of non-private datasets. This framework facilitates the private sharing of genomic data while offering researchers exceptional flexibility and utility.
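The time-inhomogeneity mentioned above means the HMM uses a different transition matrix at each sequence position rather than one shared matrix. A sketch of sampling a synthetic SNP sequence from such a model, with hypothetical argument names (`init_probs`, `trans_mats`, `emit_mats`) standing in for the learned, privatized parameters:

```python
import numpy as np

def sample_tihmm(init_probs, trans_mats, emit_mats, rng=None):
    """Sample one synthetic SNP sequence from a time-inhomogeneous HMM.

    trans_mats[t] is the hidden-state transition matrix used between
    positions t and t+1 (so len(trans_mats) == len(emit_mats) - 1);
    emit_mats[t] gives, per hidden state, the emission distribution over
    SNP values (e.g. genotypes 0/1/2) at position t.
    """
    rng = np.random.default_rng() if rng is None else rng
    length = len(emit_mats)
    state = rng.choice(len(init_probs), p=init_probs)
    seq = []
    for t in range(length):
        # Emit a SNP value using the position-specific emission row.
        seq.append(int(rng.choice(emit_mats[t].shape[1], p=emit_mats[t][state])))
        if t < length - 1:
            # Transition using the matrix specific to this position.
            state = rng.choice(trans_mats[t].shape[1], p=trans_mats[t][state])
    return seq
```

Letting `trans_mats[t]` vary with `t` is what lets the model track position-dependent structure such as local linkage disequilibrium, which a single shared transition matrix would average away.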