Sliced Rényi Pufferfish Privacy: Directional Additive Noise Mechanism and Private Learning with Gradient Clipping

📅 2025-11-30
🤖 AI Summary
Rényi Pufferfish Privacy (RPP) suffers from two key limitations: the computational intractability of optimal-transport calibration in high dimensions and the absence of general composition rules for iterative learning. To address these, we propose Sliced Rényi Pufferfish Privacy (SRPP), a framework that replaces high-dimensional optimal transport with one-dimensional comparisons along slicing directions. SRPP combines gradient clipping with a History-Uniform Cap (HUC), a pathwise bound on one-step directional changes, to enable geometry-aware noise calibration and tractable privacy accounting. We further introduce HUC and mean-square HUC (ms-HUC) accountants, providing worst-case (path-consistent) and mean-square additive composition guarantees across multiple heterogeneous mechanisms privatized under a common slicing geometry. Experiments demonstrate that SRPP significantly improves the privacy–utility trade-off in both static and iterative learning settings, yielding more accurate privacy accounting and more stable noise calibration, thereby enhancing both model utility and privacy assurance.

📝 Abstract
We study privatization mechanism design and privacy accounting in the Pufferfish family, addressing two practical gaps of Rényi Pufferfish Privacy (RPP): high-dimensional optimal transport (OT) calibration and the absence of a general, mechanism-agnostic composition rule for iterative learning. We introduce Sliced Rényi Pufferfish Privacy (SRPP), which replaces high-dimensional comparisons by directional ones over a set of unit vectors, enabling geometry-aware and tractable guarantees. To calibrate noise without high-dimensional OT, we propose sliced Wasserstein mechanisms that compute per-direction (1-D) sensitivities, yielding closed-form, statistically stable, and anisotropic calibrations. We further define SRPP Envelope (SRPE) as computable upper bounds that are tightly implementable by these sliced Wasserstein mechanisms. For iterative deep learning algorithms, we develop a decompose-then-compose SRPP-SGD scheme with gradient clipping based on a History-Uniform Cap (HUC), a pathwise bound on one-step directional changes that is uniform over optimization history, and a mean-square variant (ms-HUC) that leverages subsampling randomness to obtain on-average SRPP guarantees with improved utility. The resulting HUC and ms-HUC accountants aggregate per-iteration, per-direction Rényi costs and integrate naturally with moments-accountant style analyses. Finally, when multiple mechanisms are trained and privatized independently under a common slicing geometry, our analysis yields graceful additive composition in both worst-case and mean-square regimes. Our experiments indicate that the proposed SRPP-based methods achieve favorable privacy-utility trade-offs in both static and iterative settings.
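The abstract's core idea of replacing high-dimensional OT calibration with per-direction 1-D sensitivities can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the direction-sampling scheme, the secret-pair model, and the way per-direction noise is lifted back to the ambient space are all illustrative assumptions, and the function names (`sample_directions`, `directional_sensitivities`, `sliced_gaussian_mechanism`) are hypothetical.

```python
import numpy as np

def sample_directions(d, n_dirs, rng):
    """Draw unit vectors uniformly on the sphere (an assumed slicing geometry)."""
    u = rng.standard_normal((n_dirs, d))
    return u / np.linalg.norm(u, axis=1, keepdims=True)

def directional_sensitivities(secret_pairs, directions):
    """Per-direction 1-D sensitivity: for each unit vector u, the largest
    projected gap max |<u, x - x'>| over the indistinguishable secret pairs."""
    diffs = np.stack([x - xp for x, xp in secret_pairs])  # (n_pairs, d)
    return np.abs(diffs @ directions.T).max(axis=0)       # (n_dirs,)

def sliced_gaussian_mechanism(x, directions, sens, sigma, rng):
    """Add anisotropic noise built from independent 1-D Gaussians, each
    scaled by its own direction's sensitivity, lifted back to R^d.
    Directions with larger sensitivity receive proportionally more noise."""
    z = rng.standard_normal(len(directions)) * sigma * sens  # (n_dirs,)
    return x + (z @ directions) / len(directions)

# Toy usage: one secret pair in R^3, eight random slicing directions.
rng = np.random.default_rng(0)
dirs = sample_directions(d=3, n_dirs=8, rng=rng)
pairs = [(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))]
sens = directional_sensitivities(pairs, dirs)
y = sliced_gaussian_mechanism(np.zeros(3), dirs, sens, sigma=1.0, rng=rng)
```

The point of the sketch is the computational claim: each sensitivity is a 1-D projection maximum, so the calibration cost scales with the number of directions rather than with any high-dimensional transport problem.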
Problem

Research questions and friction points this paper is trying to address.

Develops a directional privacy framework to avoid high-dimensional optimal transport computations
Creates mechanism-agnostic composition rules for iterative deep learning algorithms
Proposes sliced Wasserstein mechanisms for tractable anisotropic noise calibration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sliced Renyi Pufferfish Privacy for directional guarantees
Sliced Wasserstein mechanisms for anisotropic noise calibration
Decompose-then-compose SRPP-SGD with gradient clipping
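The clipping-based SRPP-SGD idea can be sketched as follows. This is a hedged illustration, not the paper's algorithm: it assumes DP-SGD-style per-example clipping and a standard Gaussian-mechanism Rényi cost of α/(2σ²) per step with additive composition over the path; the paper's per-direction accounting and ms-HUC refinement are not reproduced here, and all names are hypothetical.

```python
import numpy as np

def clip(g, c):
    """L2 gradient clipping: ensures ||g||_2 <= c, hence |<u, g>| <= c
    for every unit direction u -- a history-uniform cap on the one-step
    directional change, regardless of where optimization has gone so far."""
    return g * min(1.0, c / max(np.linalg.norm(g), 1e-12))

def private_sgd_step(theta, per_example_grads, c, sigma, lr, rng):
    """Clip per-example gradients, average, add Gaussian noise scaled to
    the clip norm, and take a descent step on the noisy average."""
    g = np.mean([clip(gi, c) for gi in per_example_grads], axis=0)
    g_noisy = g + rng.standard_normal(theta.shape) * sigma * c
    return theta - lr * g_noisy

def additive_renyi_cost(n_steps, alpha, sigma):
    """Worst-case additive accounting sketch: a Gaussian step with
    sensitivity c and noise std sigma*c costs alpha / (2 sigma^2) at
    Rényi order alpha, and per-step costs simply add over the path."""
    return n_steps * alpha / (2.0 * sigma ** 2)

# Toy usage: one privatized step on random per-example gradients in R^4.
rng = np.random.default_rng(1)
theta0 = np.zeros(4)
grads = [rng.standard_normal(4) * 10 for _ in range(8)]
theta1 = private_sgd_step(theta0, grads, c=1.0, sigma=1.0, lr=0.1, rng=rng)
```

The design point is that the clip norm plays a double role: it bounds utility loss per example and simultaneously supplies the uniform-over-history sensitivity bound that makes per-step privacy costs composable by simple addition.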