Privacy Amplification in Differentially Private Zeroth-Order Optimization with Hidden States

📅 2025-05-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of zeroth-order fine-tuning of large language models under differential privacy (DP) constraints and memory limitations. Specifically, it tackles the open problem of privacy amplification in hidden-state zeroth-order optimization. Method: We extend the “privacy amplification-by-iteration” framework to zeroth-order optimization by integrating stochastic gradient estimation with hidden-state privacy tracking. Under smooth loss functions, we provide a rigorous convergence analysis and derive tight DP bounds. Contribution/Results: This yields the first convergent DP bound for hidden-state zeroth-order optimization, significantly improving privacy budget utilization. Building on this analysis, we propose a novel zeroth-order algorithm that achieves superior optimization performance under the same privacy budget. Our work establishes a formal theoretical foundation for differentially private zeroth-order optimization and provides a practical algorithmic tool for privacy-preserving LLM adaptation under memory constraints.

📝 Abstract
Zeroth-order optimization has emerged as a promising approach for fine-tuning large language models on domain-specific data, particularly under differential privacy (DP) and memory constraints. While first-order methods have been extensively studied from a privacy perspective, the privacy analysis and algorithmic design for zeroth-order methods remain significantly underexplored. A critical open question concerns hidden-state DP analysis: although convergent privacy bounds are known for first-order methods, it has remained unclear whether similar guarantees can be established for zeroth-order methods. In this work, we provide an affirmative answer by proving a convergent DP bound for zeroth-order optimization. Our analysis generalizes the celebrated privacy amplification-by-iteration framework to the setting of smooth loss functions in zeroth-order optimization. Furthermore, it yields better DP zeroth-order algorithmic designs that were previously unknown in the literature.
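To make the setting concrete, here is a minimal sketch of the kind of update the abstract describes: a two-point zeroth-order gradient estimate (using only loss values, no backpropagation) followed by the standard DP recipe of norm clipping plus Gaussian noise. This is an illustrative textbook construction, not the paper's algorithm or its privacy accounting; all function names and parameter values are assumptions for the sketch.

```python
import numpy as np

def zo_gradient_estimate(loss, theta, mu=1e-3, rng=None):
    """Two-point zeroth-order gradient estimate along one random direction.

    Only loss *values* are queried, which is why zeroth-order methods
    suit memory-constrained LLM fine-tuning: no backprop state is kept.
    """
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(theta.shape)  # random perturbation direction
    # Finite-difference estimate of the directional derivative, scaled by u
    return (loss(theta + mu * u) - loss(theta - mu * u)) / (2 * mu) * u

def dp_zo_sgd_step(loss, theta, lr=0.05, clip=1.0, sigma=1.0, rng=None):
    """One DP zeroth-order step: clip the estimate, add Gaussian noise.

    Clipping bounds the sensitivity of the update; the Gaussian noise
    (scaled by sigma * clip) is the usual Gaussian mechanism. The hidden-state
    analysis in the paper concerns releasing only the final iterate of
    such a loop, rather than every intermediate theta.
    """
    rng = rng or np.random.default_rng()
    g = zo_gradient_estimate(loss, theta, rng=rng)
    g = g / max(1.0, np.linalg.norm(g) / clip)            # norm clipping
    g = g + sigma * clip * rng.standard_normal(g.shape)   # Gaussian mechanism
    return theta - lr * g
```

With `sigma = 0` and a large clipping bound, this reduces to plain zeroth-order SGD, which is a useful sanity check that the estimator itself makes progress on a smooth loss.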
Problem

Research questions and friction points this paper is trying to address.

Analyzing hidden-state DP in zeroth-order optimization
Establishing privacy bounds for zeroth-order methods
Designing better DP zeroth-order algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proves convergent DP bound for zeroth-order optimization
Generalizes privacy amplification-by-iteration framework
Introduces better DP zeroth-order algorithmic designs