🤖 AI Summary
Conventional differentially private (DP) deep learning relies on backpropagation, which necessitates gradient clipping and external noise injection, introducing computational overhead and optimization instability.
Method: We propose DP-ULR, the first forward-learning algorithm natively integrating DP into the forward pass. It employs a theoretically grounded rejection-sampling batch processing mechanism and dynamic noise calibration governed by the privacy budget ε, eliminating the need for gradient clipping or backward-pass noise injection.
Contribution/Results: DP-ULR establishes the novel paradigm of “privacy-aware forward learning,” unifying likelihood-ratio estimation and stochastic perturbation modeling. On standard image classification benchmarks, it achieves accuracy comparable to DP-SGD under ε ≤ 8, with a gap of less than 1.2%, thereby providing the first empirical validation that forward learning is inherently compatible with differential privacy.
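To make the forward-learning idea concrete, here is a minimal sketch of a score-function (likelihood-ratio) gradient estimator: parameters are perturbed with Gaussian noise during the forward pass, and the gradient is recovered from the correlation between losses and perturbations, with no backward pass. This is a generic illustration of the estimator family, not the paper's exact DP-ULR algorithm; the function name and toy loss are ours.

```python
import numpy as np

def lr_gradient(loss_fn, theta, sigma=1.0, n_samples=20000, seed=0):
    """Score-function (likelihood-ratio) gradient estimate.

    Perturbs the parameters with Gaussian noise z ~ N(0, sigma^2 I) in the
    forward pass and estimates grad E[loss] as E[loss(theta + z) * z] / sigma^2,
    so no backpropagation is required.
    """
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        z = rng.normal(0.0, sigma, size=theta.shape)
        grad += loss_fn(theta + z) * z
    return grad / (n_samples * sigma**2)

# Toy quadratic loss; the true gradient at theta is 2 * theta.
theta = np.array([1.0, -2.0])
g = lr_gradient(lambda t: float(np.sum(t**2)), theta)
```

The same forward-pass noise that makes this estimator work is what DP-ULR repurposes (with calibrated scale) as the source of its privacy guarantee.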
📝 Abstract
Differential privacy (DP) in deep learning is a critical concern, as it ensures the confidentiality of training data while maintaining model utility. Existing DP training algorithms provide privacy guarantees by clipping the sample gradients computed by backpropagation and then injecting external noise into them. Unlike backpropagation, perturbation-based forward-learning algorithms inherently add noise during the forward pass and use that randomness to estimate gradients. Although these algorithms are not privatized, the noise introduced during the forward pass indirectly provides internal randomized protection for the model parameters and their gradients, suggesting the potential to provide differential privacy naturally. In this paper, we propose a privatized forward-learning algorithm, Differentially Private Unified Likelihood Ratio (DP-ULR), and demonstrate its differential privacy guarantees. DP-ULR features a novel batch sampling operation with rejection, for which we provide a theoretical analysis in conjunction with classic differential privacy mechanisms. DP-ULR is also underpinned by a theoretically guided privacy controller that dynamically adjusts noise levels to manage the privacy cost of each training step. Our experiments indicate that DP-ULR achieves competitive performance compared to traditional backpropagation-based differentially private training algorithms while maintaining nearly the same privacy-loss limits.
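For intuition on what "dynamically adjusts noise levels to manage privacy costs" involves, below is a sketch of the standard Gaussian-mechanism calibration that such controllers build on: given a per-step budget (ε, δ) and an L2 sensitivity, it returns the noise scale σ. This is the classic textbook bound, not the paper's DP-ULR controller, and the function name is ours.

```python
import math

def gaussian_mechanism_sigma(epsilon, delta, sensitivity=1.0):
    """Noise scale for the classic Gaussian mechanism.

    Adding N(0, sigma^2) noise to a query with L2 sensitivity `sensitivity`
    satisfies (epsilon, delta)-DP, with this closed-form bound valid for
    0 < epsilon < 1.
    """
    if not (0.0 < epsilon < 1.0):
        raise ValueError("this closed-form bound requires 0 < epsilon < 1")
    return math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / epsilon

# A tighter budget (smaller epsilon) demands a larger noise scale.
sigma_tight = gaussian_mechanism_sigma(epsilon=0.5, delta=1e-5)
sigma_loose = gaussian_mechanism_sigma(epsilon=0.9, delta=1e-5)
```

A per-step controller would invert this relationship each iteration: track the privacy budget already spent, and set the forward-pass noise scale so the remaining steps stay within the overall ε.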