PRIVATEEDIT: A Privacy-Preserving Pipeline for Face-Centric Generative Image Editing

πŸ“… 2026-03-03
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the privacy and authorization risks associated with uploading users’ facial biometric data to third-party generative models for image editing. We propose an on-device privacy-preserving solution that leverages local image segmentation and dynamic facial masking to decouple identity-sensitive regions from editable content. Without modifying or retraining the third-party model, our approach enables user-controlled, high-fidelity editing. The system employs a privacy-by-design architecture and an interactive anonymization interface, allowing users to adjust the trade-off between privacy and output quality based on their trust level, while remaining compatible with commercial APIs. We validate the effectiveness of our method in professional avatar generation scenarios, demonstrating strong privacy guarantees without compromising edit quality. The implementation is open-sourced to advance privacy-first practices in generative AI.

πŸ“ Abstract
Recent advances in generative image editing have enabled transformative applications, from professional headshot generation to avatar stylization. However, these systems often require uploading high-fidelity facial images to third-party models, raising concerns around biometric privacy, data misuse, and user consent. We propose a privacy-preserving pipeline that supports high-quality editing while keeping users in control of their biometric data in face-centric use cases. Our approach separates identity-sensitive regions from editable image context using on-device segmentation and masking, enabling secure, user-controlled editing without modifying third-party generative models. Unlike traditional cloud-based tools, PRIVATEEDIT enforces privacy by default: biometric data is never exposed or transmitted. This design requires no access to or retraining of third-party models, making it compatible with a wide range of commercial APIs. By treating privacy as a core design constraint, our system supports responsible generative AI centered on user autonomy and trust. The pipeline includes a tunable masking mechanism that lets users control how much facial information is concealed, allowing them to balance privacy and output fidelity based on trust level or use case. We demonstrate its applicability in professional and creative workflows and provide a user interface for selective anonymization. By advocating privacy-by-design in generative AI, our work offers both technical feasibility and normative guidance for protecting digital identity. The source code is available at https://github.com/Dipeshtamboli/PrivateEdit-Privacy-Preserving-GenAI.
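The masking idea described in the abstract can be illustrated with a minimal sketch: conceal the identity-sensitive region on-device before upload, with a tunable strength controlling the privacy/fidelity trade-off, then paste the original region back after the third-party edit returns. This is an illustrative NumPy-only approximation, not the paper's implementation; the function names, the bounding-box representation, and the noise-blending strategy are all assumptions, and the on-device face detector/segmenter is not shown.

```python
import numpy as np

def mask_face_region(image, face_box, strength=1.0, rng=None):
    """Conceal a face region before sending the image to a third-party API.

    image: HxWx3 uint8 array.
    face_box: (y0, y1, x0, x1) from an on-device detector (not shown here).
    strength: tunable privacy level in [0, 1]; 0 leaves the region intact,
        1 fully replaces it with random noise.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    masked = image.copy()
    y0, y1, x0, x1 = face_box
    region = masked[y0:y1, x0:x1].astype(np.float32)
    noise = rng.integers(0, 256, region.shape).astype(np.float32)
    # Blend the original region with noise according to the privacy level.
    blended = (1.0 - strength) * region + strength * noise
    masked[y0:y1, x0:x1] = np.clip(blended, 0, 255).astype(np.uint8)
    return masked

def restore_face_region(edited, original, face_box):
    """After the third-party edit returns, paste the untouched face back."""
    y0, y1, x0, x1 = face_box
    out = edited.copy()
    out[y0:y1, x0:x1] = original[y0:y1, x0:x1]
    return out
```

In this sketch only the masked image ever leaves the device; the original face pixels stay local and are recomposited at the end, which mirrors the abstract's claim that biometric data is never transmitted and no third-party model changes are needed.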
Problem

Research questions and friction points this paper is trying to address.

privacy-preserving
generative image editing
biometric privacy
face-centric
user consent
Innovation

Methods, ideas, or system contributions that make the work stand out.

privacy-preserving
generative image editing
on-device segmentation
biometric privacy
masking mechanism
πŸ”Ž Similar Papers
No similar papers found.
Dipesh Tamboli
Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, 47907, Indiana, USA
Vineet Punyamoorty
PhD student in Machine Learning at Purdue University
Machine Learning, Computer Vision, Multimodal Learning
Atharv Pawar
Electrical and Computer Engineering, University of Michigan, Ann Arbor, 48109, Michigan, USA
Vaneet Aggarwal
Professor and University Faculty Scholar, Purdue University
Machine Learning, Reinforcement Learning, Quantum Computing, Networking