🤖 AI Summary
Conventional $(varepsilon,delta)$-differential privacy (DP) suffers from inherent limitations in adaptivity and fine-grained privacy control. Method: This paper proposes Alpha Differential Privacy (ADP), the first tunable DP framework systematically built upon $alpha$-divergence. ADP incorporates $alpha$-divergence into the privacy definition, enabling continuous, analytically tractable adjustment of privacy strength and dynamic privacy budget allocation. Contribution/Results: Theoretical analysis shows that ADP yields tighter privacy bounds than standard DP under small-to-moderate iteration counts. Empirical evaluation demonstrates ADP’s superiority—particularly under high-privacy requirements and limited iteration budgets—achieving 20%–45% stronger privacy protection across multiple benchmark tasks compared to baseline methods. Crucially, ADP establishes the first rigorous, $alpha$-divergence-based DP paradigm, unifying theoretical soundness with practical flexibility.
📝 Abstract
As data-driven technologies advance swiftly, maintaining strong privacy measures becomes progressively difficult. Conventional $(epsilon, delta)$-differential privacy, while prevalent, exhibits limited adaptability for many applications. To mitigate these constraints, we present alpha differential privacy (ADP), an innovative privacy framework grounded in alpha divergence, which provides a more flexible assessment of privacy consumption. This study delineates the theoretical underpinnings of ADP and contrasts its performance with competing privacy frameworks across many scenarios. Empirical assessments demonstrate that ADP offers enhanced privacy guarantees in small to moderate iteration contexts, particularly where severe privacy requirements are necessary. The suggested method markedly improves privacy-preserving methods, providing a flexible solution for contemporary data analysis issues in a data-centric environment.