Banded Square Root Matrix Factorization for Differentially Private Model Training

📅 2024-05-22
🏛️ Neural Information Processing Systems
📈 Citations: 6
Influential: 1
🤖 AI Summary
To address the high computational overhead of matrix-factorization-based methods for differentially private model training, which must numerically solve a demanding optimization problem before training begins, this paper proposes the Banded Square Root (BSR) matrix factorization. BSR is the first method to exploit structural properties of the standard matrix square root to construct a banded approximation, enabling zero-overhead initialization without any pre-optimization; it further yields closed-form analytical expressions for SGD with momentum and weight decay. Theoretically, BSR comes with guaranteed approximation accuracy in both the centralized and the federated learning setting under differential privacy. Empirically, models trained with BSR match the accuracy of the best existing factorizations while completely eliminating the numerical optimization step, significantly improving training efficiency, especially at scale.

📝 Abstract
Current state-of-the-art methods for differentially private model training are based on matrix factorization techniques. However, these methods suffer from high computational overhead because they require numerically solving a demanding optimization problem to determine an approximately optimal factorization prior to the actual model training. In this work, we present a new matrix factorization approach, BSR, which overcomes this computational bottleneck. By exploiting properties of the standard matrix square root, BSR makes it possible to handle even large-scale problems efficiently. For the key scenario of stochastic gradient descent with momentum and weight decay, we even derive analytical expressions for BSR that render the computational overhead negligible. We prove bounds on the approximation quality that hold both in the centralized and in the federated learning setting. Our numerical experiments demonstrate that models trained using BSR perform on par with the best existing methods, while completely avoiding their computational overhead.
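To make the abstract concrete, below is a minimal sketch, not the paper's implementation, of the idea behind a banded square-root factor in a matrix-factorization DP mechanism. For the prefix-sum workload A (the lower-triangular all-ones matrix), sqrt(A) is a lower-triangular Toeplitz matrix whose diagonal coefficients are the Taylor coefficients of (1-x)^(-1/2), so it can be written down directly with no pre-optimization; banding keeps only the first few diagonals. The band width `b`, the noise scale `sigma`, and all function names here are illustrative assumptions.

```python
import numpy as np

def banded_sqrt_factor(n, b):
    """Banded approximation of sqrt(A) for A = lower-triangular all-ones matrix."""
    # Toeplitz coefficients of (1-x)^(-1/2): c_0 = 1, c_k = c_{k-1} * (2k-1)/(2k).
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (2 * k - 1) / (2 * k)
    C = np.zeros((n, n))
    for k in range(min(b, n)):          # keep only the first b diagonals
        C += np.diag(np.full(n - k, c[k]), -k)
    return C

n, b, sigma = 32, 8, 0.5                # illustrative sizes, not from the paper
rng = np.random.default_rng(0)

A = np.tril(np.ones((n, n)))            # workload: all prefix sums of gradients
C = banded_sqrt_factor(n, b)            # closed-form, zero-overhead to construct
B = A @ np.linalg.inv(C)                # exact factorization A = B C

g = rng.normal(size=n)                  # stand-in for per-step gradient values
sens = np.linalg.norm(C, axis=0).max()  # L2 sensitivity: max column norm of C
z = rng.normal(scale=sigma * sens, size=n)
noisy_prefix_sums = B @ (C @ g + z)     # DP release; equals A @ g + B @ z
```

The point of the banded factor is the trade-off: building and applying a band of width b costs O(nb) per step instead of requiring a numerically optimized dense factorization, while the mechanism's error (commonly measured by the norm of B times the sensitivity of C) stays close to that of the optimal factorization, which is what the paper's bounds quantify.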
Problem

Research questions and friction points this paper is trying to address.

Reducing computational overhead in private model training
Efficient large-scale matrix factorization for privacy
Analytical solutions for SGD with momentum and weight decay
Innovation

Methods, ideas, or system contributions that make the work stand out.

BSR matrix factorization reduces computational overhead
Analytical expressions for efficient large-scale problems
Proven bounds for centralized and federated learning