AI Summary
To address privacy risks arising from the inadvertent leakage of sensitive attributes (e.g., age, gender, race) in deep face embeddings, this paper proposes a multi-layer privacy-preserving framework. Methodologically, it introduces a two-stage protection mechanism that combines fully homomorphic encryption (FHE) with irreversible feature-manifold hashing, providing strong privacy guarantees while preserving computational feasibility. Embedding compression is further incorporated to substantially reduce FHE overhead, and the entire pipeline is optimized end to end using ResNet-50 or IR-50 encoders. Experiments on two mainstream benchmarks show that prediction accuracy for sensitive attributes drops by over 60%, while identity verification accuracy remains above 98%, significantly outperforming existing approaches. This work is the first to jointly leverage FHE and manifold hashing for embedding-level privacy protection, achieving a strong balance among security, practicality, and recognition performance.
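The key property the framework relies on is that homomorphic encryption lets a server compute on ciphertexts without ever seeing the plaintext embeddings. Real FHE requires a dedicated library (e.g., Microsoft SEAL), so as a self-contained illustration of the principle, the toy sketch below uses the Paillier cryptosystem, which is only additively homomorphic and is not the scheme used in the paper; the parameters are deliberately tiny and insecure, for demonstration only.

```python
# Toy Paillier cryptosystem: additively homomorphic encryption.
# Small demo primes -- NOT secure parameters, illustration only,
# and NOT the FHE scheme used by the paper.
import math
import secrets

p, q = 1000003, 1000033
n = p * q
n2 = n * n
g = n + 1                                            # standard generator choice
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)    # lcm(p-1, q-1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse (Python 3.8+)

def encrypt(m):
    # Fresh randomness r coprime to n makes encryption probabilistic.
    r = secrets.randbelow(n - 1) + 1
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Multiplying ciphertexts adds the underlying plaintexts,
# so a server can aggregate values it cannot read:
c_sum = (encrypt(12) * encrypt(30)) % n2
print(decrypt(c_sum))  # -> 42
```

In the paper's setting the analogous operations (e.g., distance computations between encrypted face templates) would run under an FHE scheme, which additionally supports multiplication on ciphertexts.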
Abstract
In today's data-driven analytics landscape, deep learning has become a powerful tool, with latent representations, known as embeddings, playing a central role in many applications. In the face analytics domain, such embeddings are commonly used for biometric recognition (e.g., face identification). However, these embeddings, or templates, can inadvertently expose sensitive attributes such as age, gender, and ethnicity. Leaking such information can compromise personal privacy and affect civil liberties and human rights. To address these concerns, we introduce a multi-layer protection framework for embeddings. It consists of a sequence of operations: (a) encrypting embeddings using Fully Homomorphic Encryption (FHE), and (b) hashing them using irreversible feature manifold hashing. Unlike conventional encryption methods, FHE enables computations directly on encrypted data, allowing downstream analytics while maintaining strong privacy guarantees. To reduce the overhead of encrypted processing, we employ embedding compression. Our proposed method shields latent representations of sensitive data from leaking private attributes (such as age and gender) while retaining essential functional capabilities (such as face identification). Extensive experiments on two datasets using two face encoders demonstrate that our approach outperforms several state-of-the-art privacy protection methods.
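The compression and irreversible-hashing stages described above can be sketched in a few lines. The NumPy toy below uses random linear projection as a stand-in for the paper's embedding compression and sign-based random-projection hashing as a generic stand-in for its feature manifold hashing; the FHE stage is omitted (it needs a dedicated library), and all dimensions and variable names are illustrative assumptions, not the paper's configuration.

```python
# Sketch: compress an embedding, irreversibly hash it, and match templates
# in the hashed domain. Random projections are illustrative stand-ins for
# the paper's learned compression and manifold hashing.
import numpy as np

rng = np.random.default_rng(0)
dim, cdim, bits = 512, 64, 128  # raw / compressed / hash sizes (assumed)

proj = rng.standard_normal((cdim, dim)) / np.sqrt(dim)  # compression map
planes = rng.standard_normal((bits, cdim))              # hashing hyperplanes

def protect(embedding):
    compressed = proj @ embedding               # shrink before the costly encrypted step
    return (planes @ compressed > 0).astype(np.uint8)  # irreversible binary hash

def similarity(h1, h2):
    return 1.0 - np.mean(h1 != h2)              # 1 - normalized Hamming distance

anchor = rng.standard_normal(dim)
same_person = anchor + 0.1 * rng.standard_normal(dim)  # small intra-class noise
stranger = rng.standard_normal(dim)

# Identity matching still works in the protected domain: the same person's
# hash stays close to the anchor, a stranger's does not.
print(similarity(protect(anchor), protect(same_person)) >
      similarity(protect(anchor), protect(stranger)))
```

Because the hash discards the sign thresholds' pre-images and the compression discards dimensions, the raw embedding (and attributes such as age or gender encoded in it) cannot be recovered from the protected template, while Hamming-space comparison still supports verification.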