🤖 AI Summary
To address the severe performance degradation caused by the heavy noise injection required in large-scale differentially private (DP) training, this paper proposes DiSK, a framework that integrates a simplified Kalman filter into DP optimization. DiSK treats each privatized gradient as a noisy observation of the true gradient and filters it iteratively, producing progressively refined gradient estimates. Theoretically, DiSK achieves a better iteration-complexity upper bound than DPSGD and establishes rigorous privacy-utility trade-off guarantees under standard DP definitions. Empirically, DiSK attains state-of-the-art performance on benchmarks including CIFAR-100, ImageNet-1k, and GLUE, significantly outperforming existing DP optimizers under identical privacy budgets (ε ≤ 8). Its core contribution is bridging control theory and DP optimization: the Kalman filter provides principled, online correction of noisy gradients while preserving formal privacy guarantees.
📝 Abstract
Differential privacy (DP) offers a robust framework for safeguarding individual data privacy. To apply DP to the training of modern machine learning models, differentially private optimizers have been widely adopted in recent years. A popular approach to privatizing an optimizer is to clip the individual gradients and add sufficiently large noise to the clipped gradients. This approach has led to DP optimizers whose performance is comparable to that of their non-private counterparts in fine-tuning tasks or in tasks with a small number of training parameters. However, a significant performance drop is observed when these optimizers are applied to large-scale training. This degradation stems from the substantial noise injection required to maintain DP, which disrupts the optimizer's dynamics. This paper introduces DiSK, a novel framework designed to significantly enhance the performance of DP optimizers. DiSK employs Kalman filtering, a technique drawn from control and signal processing, to effectively denoise privatized gradients and generate progressively refined gradient estimates. To ensure practicality for large-scale training, we simplify the Kalman filtering process, minimizing its memory and computational demands. We establish theoretical privacy-utility trade-off guarantees for DiSK and demonstrate provable improvements over standard DP optimizers such as DPSGD in terms of the iteration-complexity upper bound. Extensive experiments across diverse tasks, including vision tasks such as CIFAR-100 and ImageNet-1k, and language fine-tuning tasks such as GLUE, E2E, and DART, validate the effectiveness of DiSK. The results showcase its ability to significantly improve the performance of DP optimizers, surpassing state-of-the-art results under the same privacy constraints on several benchmarks.
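To make the mechanism concrete, below is a minimal sketch of one optimizer step combining standard DPSGD privatization with a constant-gain, Kalman-style filter. This is an illustration of the general idea described in the abstract, not the paper's exact DiSK algorithm; the update form and all names (`kappa`, `clip_norm`, `noise_mult`, `disk_style_step`) are assumptions chosen for clarity.

```python
import numpy as np

def disk_style_step(params, per_sample_grads, filt, lr=0.1,
                    clip_norm=1.0, noise_mult=1.0, kappa=0.3):
    """One DPSGD-style step with a constant-gain Kalman-style filter.

    `filt` is the running filtered gradient estimate (same shape as
    `params`); `kappa` plays the role of a fixed Kalman gain. This is
    an illustrative sketch, not the paper's exact DiSK update.
    """
    n = len(per_sample_grads)
    # Clip each per-sample gradient to norm at most `clip_norm` (as in DPSGD).
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_sample_grads]
    # Average the clipped gradients and add calibrated Gaussian noise.
    noisy = np.mean(clipped, axis=0) + np.random.normal(
        0.0, noise_mult * clip_norm / n, size=params.shape)
    # Simplified Kalman update: treat `noisy` as a noisy observation of
    # the true gradient and blend it with the running estimate using a
    # fixed gain instead of tracking a full error covariance.
    filt = (1.0 - kappa) * filt + kappa * noisy
    return params - lr * filt, filt

# Toy usage: 8 per-sample gradients in R^4.
params = np.zeros(4)
filt = np.zeros(4)
grads = [np.random.randn(4) for _ in range(8)]
params, filt = disk_style_step(params, grads, filt)
```

One reason a constant-gain filter fits the abstract's emphasis on low memory and compute overhead: it needs only a single extra buffer the size of the parameters (comparable to momentum), whereas a full Kalman filter would additionally track an error covariance over the gradient dimension.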