Amplified Patch-Level Differential Privacy for Free via Random Cropping

📅 2026-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of strengthening differential privacy (DP) guarantees for spatially localized sensitive content, such as faces or license plates, in vision tasks without altering model architectures or training procedures. It introduces random cropping as a privacy amplification mechanism, formalizing a patch-level adjacency relation for images and composing the crop's randomness with mini-batch sampling in DP-SGD to reduce the per-sample privacy loss. Because a random crop can probabilistically exclude the sensitive patch from the model's input, the method achieves tighter privacy bounds through a lower effective sampling rate, while incurring no additional computational overhead or architectural modifications. Empirical evaluations across multiple segmentation models and datasets demonstrate a significantly improved privacy-utility trade-off, yielding stronger DP guarantees for localized sensitive regions in visual data.

📝 Abstract
Random cropping is one of the most common data augmentation techniques in computer vision, yet the role of its inherent randomness in training differentially private machine learning models has thus far gone unexplored. We observe that when sensitive content in an image is spatially localized, such as a face or license plate, random cropping can probabilistically exclude that content from the model's input. This introduces a third source of stochasticity in differentially private training with stochastic gradient descent, in addition to gradient noise and minibatch sampling. This additional randomness amplifies differential privacy without requiring changes to model architecture or training procedure. We formalize this effect by introducing a patch-level neighboring relation for vision data and deriving tight privacy bounds for differentially private stochastic gradient descent (DP-SGD) when combined with random cropping. Our analysis quantifies the patch inclusion probability and shows how it composes with minibatch sampling to yield a lower effective sampling rate. Empirically, we validate that patch-level amplification improves the privacy-utility trade-off across multiple segmentation architectures and datasets. Our results demonstrate that aligning privacy accounting with domain structure and additional existing sources of randomness can yield stronger guarantees at no additional cost.
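The composition described in the abstract, where the patch inclusion probability multiplies the minibatch sampling rate to give a lower effective sampling rate, can be sketched numerically. The function and parameter names below are illustrative, not taken from the paper; the sketch assumes a uniformly random crop position and Poisson minibatch sampling:

```python
# Sketch of patch-level privacy amplification via random cropping.
# All names are illustrative; this is not the paper's implementation.

def patch_inclusion_prob(H, W, crop_h, crop_w, r0, r1, c0, c1):
    """Probability that a uniformly random crop_h x crop_w crop of an
    H x W image fully contains the sensitive patch [r0, r1) x [c0, c1)."""
    # A top-left crop row y contains the patch iff y <= r0 and y + crop_h >= r1.
    ny = max(0, min(r0, H - crop_h) - max(0, r1 - crop_h) + 1)
    nx = max(0, min(c0, W - crop_w) - max(0, c1 - crop_w) + 1)
    total_positions = (H - crop_h + 1) * (W - crop_w + 1)
    return (ny * nx) / total_positions

def effective_sampling_rate(batch_size, dataset_size, p_patch):
    """The sensitive patch affects a gradient step only if its image is
    sampled AND the crop contains it, so q_eff = q * p_patch <= q."""
    return (batch_size / dataset_size) * p_patch

# Example: 8x8 image, 4x4 crops, 2x2 sensitive patch at rows/cols [2, 4).
p = patch_inclusion_prob(8, 8, 4, 4, 2, 4, 2, 4)   # 9/25 = 0.36
q_eff = effective_sampling_rate(64, 50_000, p)
```

A standard DP-SGD accountant would then consume `q_eff` in place of the plain minibatch rate, which is how the lower effective sampling rate translates into a tighter privacy bound at no extra training cost.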
Problem

Research questions and friction points this paper is trying to address.

differential privacy
random cropping
patch-level privacy
privacy amplification
computer vision
Innovation

Methods, ideas, or system contributions that make the work stand out.

random cropping
differential privacy
patch-level privacy
DP-SGD
privacy amplification