Locally Optimal Private Sampling: Beyond the Global Minimax

๐Ÿ“… 2025-10-10
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿค– AI Summary
This paper addresses the problem of generating a single sample from a private distribution $P$ under local differential privacy (LDP), such that the induced output distribution approximates $P$ in any $f$-divergence. To this end, we propose a sampling framework centered on the **local minimax risk**, proving its equivalence to the global minimax risk over a suitably constrained distribution class; this yields a closed-form optimal sampler that is independent of the specific $f$-divergence. We further extend the framework to functional LDP, which enables the incorporation of public data as a reference distribution. Experiments demonstrate that our method consistently outperforms existing globally optimal samplers across diverse privacy budgets and distribution families. Our work establishes a principled framework for high-fidelity synthetic data generation under stringent local privacy constraints.
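For concreteness, here is one plausible formalization of the objects named above, reconstructed from the abstract; the notation (a neighborhood $\mathcal{N}(P_0)$, the induced output law $Q_P$) is ours, not necessarily the authors':

```latex
% A hedged reconstruction from the abstract; notation is ours.
% An \varepsilon-LDP sampling mechanism is a kernel Q whose conditionals stay
% within a multiplicative factor e^{\varepsilon} of one another:
%   Q(A \mid x) \le e^{\varepsilon} Q(A \mid x')  for all inputs x, x' and events A.
% Running Q on one sample X ~ P induces the output distribution
\[
  Q_P(\cdot) \;=\; \int Q(\cdot \mid x)\,\mathrm{d}P(x).
\]
% The global risk takes a worst case over the full class \mathcal{P}; the local
% risk studied here replaces \mathcal{P} by a neighborhood of a fixed P_0:
\[
  R_{\mathrm{glob}}(\mathcal{P}, \varepsilon)
    \;=\; \inf_{Q\ \varepsilon\text{-LDP}} \;\sup_{P \in \mathcal{P}}
          D_f\!\left(P \,\Vert\, Q_P\right),
  \qquad
  R_{\mathrm{loc}}(P_0, \varepsilon)
    \;=\; \inf_{Q\ \varepsilon\text{-LDP}} \;\sup_{P \in \mathcal{N}(P_0)}
          D_f\!\left(P \,\Vert\, Q_P\right).
\]
```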

๐Ÿ“ Abstract
We study the problem of sampling from a distribution under local differential privacy (LDP). Given a private distribution $P \in \mathcal{P}$, the goal is to generate a single sample from a distribution that remains close to $P$ in $f$-divergence while satisfying the constraints of LDP. This task captures the fundamental challenge of producing realistic-looking data under strong privacy guarantees. While prior work by Park et al. (NeurIPS'24) focuses on global minimax-optimality across a class of distributions, we take a local perspective. Specifically, we examine the minimax risk in a neighborhood around a fixed distribution $P_0$, and characterize its exact value, which depends on both $P_0$ and the privacy level. Our main result shows that the local minimax risk is determined by the global minimax risk when the distribution class $\mathcal{P}$ is restricted to a neighborhood around $P_0$. To establish this, we (1) extend previous work from pure LDP to the more general functional LDP framework, and (2) prove that the globally optimal functional LDP sampler yields the optimal local sampler when constrained to distributions near $P_0$. Building on this, we also derive a simple closed-form expression for the locally minimax-optimal sampler, which does not depend on the choice of $f$-divergence. We further argue that this local framework naturally models private sampling with public data, where the public data distribution is represented by $P_0$. In this setting, we empirically compare our locally optimal sampler to existing global methods, and demonstrate that it consistently outperforms global minimax samplers.
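As a point of reference, the following minimal Python sketch shows what an $\varepsilon$-LDP sampling mechanism looks like in the finite-alphabet case. It implements standard $k$-ary randomized response, a baseline rather than the paper's (globally or locally) optimal sampler; the function name and interface are ours:

```python
import numpy as np

def k_rr_sample(x: int, k: int, eps: float, rng=None) -> int:
    """k-ary randomized response: a baseline eps-LDP sampler.

    The kernel is Q(y|x) = e^eps / (e^eps + k - 1) if y == x,
    and 1 / (e^eps + k - 1) otherwise, so Q(y|x) <= e^eps * Q(y|x')
    for every pair of inputs x, x' -- the defining LDP constraint.
    """
    rng = rng or np.random.default_rng()
    p_keep = np.exp(eps) / (np.exp(eps) + k - 1)
    if rng.random() < p_keep:
        return x                      # report the true symbol
    y = int(rng.integers(k - 1))      # otherwise a uniform other symbol
    return y if y < x else y + 1

# Feeding X ~ P through this kernel induces the output distribution
#   Q_P(y) = (P(y) * (e^eps - 1) + 1) / (e^eps + k - 1),
# a contraction of P toward uniform; the samplers studied in the paper
# aim to minimize exactly this kind of f-divergence distortion.
```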
Problem

Research questions and friction points this paper is trying to address.

Sampling from distributions under local differential privacy constraints
Characterizing minimax risk around fixed distributions for optimal sampling
Developing locally optimal samplers that outperform global minimax methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends pure LDP to functional LDP framework
Derives closed-form locally minimax-optimal samplers
Optimizes sampling near a fixed reference distribution using public data (see the sketch after this list)
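The last point can be made concrete with a hypothetical construction (ours, for illustration only; it is not the paper's closed-form locally optimal sampler). Given a public reference distribution $P_0$ with full support, mixing the private value with a fresh public draw yields a valid $\varepsilon$-LDP sampler whose output law interpolates between $P_0$ and $P$:

```python
import numpy as np

def public_mixture_sample(x: int, p0: np.ndarray, eps: float, rng=None) -> int:
    """Hypothetical eps-LDP sampler that leans on a public prior p0.

    With probability lam we release a fresh public sample, otherwise the
    private value x itself. The kernel
        Q(y|x) = lam * p0[y] + (1 - lam) * 1[y == x]
    satisfies eps-LDP whenever lam >= 1 / (1 + min(p0) * (e^eps - 1)),
    which requires p0 to have full support (min(p0) > 0).
    """
    rng = rng or np.random.default_rng()
    p_min = p0.min()
    assert p_min > 0, "public prior must have full support"
    lam = 1.0 / (1.0 + p_min * np.expm1(eps))   # smallest admissible mixing weight
    if rng.random() < lam:
        return int(rng.choice(len(p0), p=p0))   # public draw
    return x                                    # pass the private value through

# Induced output law: lam * P0 + (1 - lam) * P. As eps grows, lam -> 0 and the
# sample tracks P; at small eps the sampler falls back on the public prior P0.
```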
Hrad Ghoukasian
Department of Computing and Software, McMaster University
Bonwoo Lee
Department of Mathematical Sciences, Korea Advanced Institute of Science & Technology
Shahab Asoodeh
McMaster University
Information Theory and Statistics · Differential Privacy · Machine Learning