Prediction with Expert Advice under Local Differential Privacy

📅 2025-12-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper studies expert advice prediction—a classic online learning problem—under local differential privacy (LDP). To address the dual challenges of performance degradation and restricted model selection induced by LDP, we propose RW-AdaBatch and RW-Meta: RW-AdaBatch introduces a finite-switching mechanism grounded in random walk theory to achieve data-dependent privacy amplification; RW-Meta is the first meta-framework enabling private selection of complex, data-adaptive expert models, overcoming the longstanding limitation of prior LDP approaches that only support simple, data-agnostic experts. We establish theoretically tight regret bounds for both algorithms. Empirical evaluation on real-world COVID-19 hospital data shows that RW-Meta achieves 1.5–3× improvement in prediction accuracy over classical LDP baselines and state-of-the-art centralized differential privacy methods, while providing strong LDP guarantees and efficient learning.

📝 Abstract
We study the classic problem of prediction with expert advice under the constraint of local differential privacy (LDP). In this context, we first show that a classical algorithm naturally satisfies LDP and then design two new algorithms that improve it: RW-AdaBatch and RW-Meta. For RW-AdaBatch, we exploit the limited-switching behavior induced by LDP to provide a novel form of privacy amplification that grows stronger on easier data, analogous to the shuffle model in offline learning. Drawing on the theory of random walks, we prove that this improvement carries essentially no utility cost. For RW-Meta, we develop a general method for privately selecting between experts that are themselves non-trivial learning algorithms, and we show that in the context of LDP this carries no extra privacy cost. In contrast, prior work has only considered data-independent experts. We also derive formal regret bounds that scale inversely with the degree of independence between experts. Our analysis is supplemented by evaluation on real-world data reported by hospitals during the COVID-19 pandemic; RW-Meta outperforms both the classical baseline and a state-of-the-art *central* DP algorithm by 1.5–3× on the task of predicting which hospital will report the highest density of COVID patients each week.
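To make the setting concrete, here is a minimal sketch of the classical exponential-weights algorithm for prediction with expert advice, where each round's per-expert losses are privatized locally via binary randomized response (a standard ε-LDP mechanism) before the learner sees them. This illustrates the LDP expert-advice setting only; it is not the paper's RW-AdaBatch or RW-Meta algorithm, and all function names are hypothetical.

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Report the true bit w.p. e^eps / (1 + e^eps): a basic eps-LDP mechanism."""
    p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p_truth else 1 - bit

def debias(reported: int, epsilon: float) -> float:
    """Unbiased estimate of the true bit from a randomized-response report."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return (reported - (1.0 - p)) / (2.0 * p - 1.0)

def exp_weights_ldp(loss_rounds, epsilon: float, eta: float):
    """Exponential weights over experts, fed debiased randomized-response losses.

    loss_rounds: list of rounds; each round is a list of {0,1} losses per expert.
    Returns the final probability distribution over experts.
    """
    n = len(loss_rounds[0])
    weights = [1.0] * n
    for losses in loss_rounds:
        # Each per-expert loss is privatized locally before the learner sees it.
        est = [debias(randomized_response(l, epsilon), epsilon) for l in losses]
        weights = [w * math.exp(-eta * e) for w, e in zip(weights, est)]
    total = sum(weights)
    return [w / total for w in weights]
```

With a reasonably large ε (little flipping), the weight of the consistently best expert dominates over time; as ε shrinks, the debiased estimates become noisier, which is the utility degradation the paper's algorithms aim to mitigate.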
Problem

Research questions and friction points this paper is trying to address.

Develops LDP-compliant algorithms for prediction with expert advice
Introduces privacy amplification via limited-switching behavior in easier data
Enables private selection between complex learning algorithms as experts
Innovation

Methods, ideas, or system contributions that make the work stand out.

LDP algorithm with privacy amplification via limited-switching behavior
Private expert selection method for non-trivial learning algorithms
Regret bounds scaling inversely with expert independence
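For context on the last point, the classical (non-private) regret baseline and its typical LDP degradation are well known; the sketch below states the standard folklore bounds, not the paper's specific regret guarantees.

```latex
% Exponential weights over N experts, horizon T, losses in [0,1]:
%   R_T \le \sqrt{\tfrac{T}{2} \ln N}.
% Under \varepsilon-LDP randomized response, debiased loss estimates have
% range inflated by roughly (e^\varepsilon + 1)/(e^\varepsilon - 1)
% (\approx 2/\varepsilon for small \varepsilon), which typically yields
%   R_T = O\!\left(\frac{\sqrt{T \ln N}}{\varepsilon}\right).
```

The paper's contribution on this front is bounds that additionally scale inversely with the degree of independence between experts.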
Ben Jacobsen
Department of Computer Sciences, University of Wisconsin-Madison
Kassem Fawaz
University of Wisconsin-Madison
Mobile Systems · Internet of Things · Usable Security and Privacy · Location Privacy