Defending Diffusion Models Against Membership Inference Attacks via Higher-Order Langevin Dynamics

📅 2025-09-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion models remain vulnerable to membership inference attacks (MIAs). To address this, we propose the first privacy-preserving framework incorporating critically damped higher-order Langevin dynamics. Our method introduces an auxiliary-variable–augmented joint diffusion process: controlled external randomness is injected during the early forward diffusion stages to obfuscate traces of training data, thereby degrading attackers’ ability to determine sample membership without significantly compromising generation quality. Experiments on toy and speech datasets demonstrate that our approach reduces MIA success rates—evidenced by over a 20% decrease in AUROC—while maintaining stable Fréchet Inception Distance (FID). The core contributions are twofold: (i) the first application of higher-order Langevin dynamics to privacy protection in generative modeling, and (ii) a theoretically grounded stochastic mixing mechanism that jointly optimizes robustness against MIAs and generation fidelity.

📝 Abstract
Recent advances in generative artificial intelligence applications have raised new data security concerns. This paper focuses on defending diffusion models against membership inference attacks, in which an adversary determines whether a specific data point was used to train the model. Although diffusion models are intrinsically more resistant to membership inference attacks than other generative models, they are still susceptible. The defense proposed here utilizes critically-damped higher-order Langevin dynamics, which introduces several auxiliary variables and a joint diffusion process along these variables. The idea is that the auxiliary variables mix in external randomness that corrupts sensitive input data earlier in the diffusion process. This concept is theoretically investigated and validated on a toy dataset and a speech dataset using the Area Under the Receiver Operating Characteristic (AUROC) curve and the Fréchet Inception Distance (FID) metric.
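To make the auxiliary-variable idea concrete, here is a minimal, illustrative sketch of a *second-order* critically damped Langevin forward diffusion (the simplest member of the higher-order family the paper builds on). The data coordinate `x` is coupled to an auxiliary velocity `v`; Brownian noise enters only through `v`, so `x` is corrupted indirectly by mixing with the auxiliary variable. This is not the authors' exact scheme; the parameter names (`beta`, `M`, `Gamma`) and their values are assumptions chosen for a stable Euler–Maruyama simulation.

```python
import numpy as np

def cld_forward(x0, n_steps=1000, dt=1e-3, beta=4.0, M=0.25, seed=0):
    """Euler-Maruyama simulation of a critically damped Langevin
    forward diffusion. Noise is injected only into the auxiliary
    velocity v and reaches the data x through the coupling drift,
    illustrating how auxiliary variables "mix" external randomness
    into the data early in the forward process."""
    rng = np.random.default_rng(seed)
    Gamma = 2.0 * np.sqrt(M)           # critical damping: Gamma^2 = 4M
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)               # auxiliary variable starts at rest
    for _ in range(n_steps):
        dW = rng.standard_normal(x.shape) * np.sqrt(dt)
        dx = (v / M) * beta * dt                                   # x driven only by v
        dv = (-x - (Gamma / M) * v) * beta * dt \
             + np.sqrt(2.0 * Gamma * beta) * dW                    # noise enters via v
        x, v = x + dx, v + dv
    return x, v
```

Higher-order variants stack additional auxiliary variables in the same fashion, placing the noise source even further from the data coordinate while preserving the joint Gaussian structure of the forward process.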
Problem

Research questions and friction points this paper is trying to address.

Defending diffusion models against membership inference attacks
Proposing higher-order Langevin dynamics for enhanced data security
Protecting training data privacy through auxiliary variable integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Higher-order Langevin dynamics defense
Auxiliary variables corrupt sensitive data
Joint diffusion process enhances security
Benjamin Sterling
Department of Applied Math & Statistics, Stony Brook University, Stony Brook, NY, USA
Yousef El-Laham
J.P. Morgan Chase - AI Research
uncertainty quantification, Monte Carlo methods, generative models, computational statistics
Mónica F. Bugallo
Department of Electrical and Computer Engineering, Stony Brook University, Stony Brook, NY, USA