🤖 AI Summary
This work addresses three sequential decision-making problems—multi-armed bandits, linear contextual bandits, and reinforcement learning from human feedback (RLHF)—by introducing a differentially private (DP) paradigm that requires no explicit noise injection. The core method leverages the inherent smoothing effect of KL-divergence regularization in policy optimization, and provides the first rigorous theoretical proof that the action sampled from an appropriately KL-regularized stochastic policy satisfies ε-differential privacy, with the privacy budget ε directly controllable via the regularization coefficient. This "free privacy" mechanism avoids the performance degradation typically induced by conventional DP approaches that rely on additive noise, preserving both policy convergence guarantees and practical utility while ensuring per-sample privacy. Empirical evaluation on offline decision-making tasks validates its effectiveness.
📝 Abstract
Differential Privacy (DP) provides a rigorous framework for privacy, ensuring the outputs of data-driven algorithms remain statistically indistinguishable across datasets that differ in a single entry. While guaranteeing DP generally requires explicitly injecting noise either into the algorithm itself or into its outputs, the intrinsic randomness of existing algorithms presents an opportunity to achieve DP "for free". In this work, we explore the role of regularization in achieving DP across three different decision-making problems—multi-armed bandits, linear contextual bandits, and reinforcement learning from human feedback (RLHF)—in offline data settings. We show that adding KL-regularization to the learning objective (a common approach in optimization algorithms) makes the action sampled from the resulting stochastic policy itself differentially private. This offers a new route to privacy guarantees without additional noise injection, while also preserving the inherent advantage of regularization in enhancing performance.
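To see why KL-regularization can yield privacy without added noise, recall that the maximizer of the KL-regularized objective E_π[r] − β·KL(π ∥ π_ref) has the closed form π(a) ∝ π_ref(a)·exp(r(a)/β), a softmax over rewards. Sampling an action from such a policy is structurally the same as the exponential mechanism from the DP literature, whose privacy level scales with the sensitivity of r divided by β. The sketch below is an illustration of this connection, not the paper's exact construction; the reward values, uniform reference policy, and arm count are hypothetical.

```python
import numpy as np

def kl_regularized_policy(rewards, ref_policy, beta):
    """Closed-form maximizer of E_pi[r] - beta * KL(pi || pi_ref):
    pi(a) proportional to pi_ref(a) * exp(r(a) / beta)."""
    logits = np.log(ref_policy) + rewards / beta
    logits -= logits.max()  # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Hypothetical offline bandit data: empirical mean reward per arm.
rewards = np.array([0.7, 0.5, 0.2])
ref = np.ones(3) / 3  # uniform reference policy

pi = kl_regularized_policy(rewards, ref, beta=1.0)

# Sampling from pi is an instance of the exponential mechanism with
# utility r: heuristically, epsilon ~ 2 * Delta / beta, where Delta is
# the sensitivity of r to one data point. A larger regularization
# coefficient beta therefore means a smaller (stronger) privacy budget.
rng = np.random.default_rng(0)
action = rng.choice(3, p=pi)
```

Note how the single knob β trades off utility against privacy: as β grows, the policy flattens toward π_ref (more private, less reward-seeking); as β shrinks, it concentrates on the empirically best arm.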