🤖 AI Summary
Traditional face recognition relies on opaque, fixed templates, posing fairness and privacy risks. This paper proposes MOTE, a method that replaces static templates with lightweight, personalized binary-classification neural networks, enabling one-shot enrollment and individual-level fairness control. Its core contributions are threefold: (1) it is the first to embed learnable, interpretable personalized networks directly into the authentication pipeline; (2) it ensures model-level privacy and interpretability via synthetic-data-based balanced sampling, one-shot meta-learning, and a lightweight architecture; and (3) it achieves significant improvements across multiple benchmarks, reducing statistical parity difference by 37% (improving fairness) and lowering membership inference attack success rate by 62% (strengthening privacy). MOTE is particularly suited to ethically sensitive, small-to-medium-scale deployments where transparency, fairness, and data minimization are critical.
📝 Abstract
Traditional face recognition systems rely on extracting fixed face representations, known as templates, to store and verify identities. These representations are typically generated by neural networks that often lack explainability and raise concerns regarding fairness and privacy. In this work, we propose a novel model-template (MOTE) approach that replaces vector-based face templates with small personalized neural networks. This design enables more responsible face recognition for small- and medium-scale systems. During enrollment, MOTE creates a dedicated binary classifier for each identity, trained to determine whether an input face matches the enrolled identity. Each classifier is trained using only a single reference sample, together with synthetically generated balanced samples, which allows fairness to be adjusted for each individual at enrollment time. Extensive experiments across multiple datasets and recognition systems demonstrate substantial improvements in fairness and, especially, in privacy. Although the method increases inference time and storage requirements, it presents a strong solution for small- and medium-scale applications where fairness and privacy are critical.
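The enrollment step described in the abstract (one reference sample plus a pool of synthetic balanced samples, feeding a tiny per-identity binary classifier) can be sketched as below. This is a minimal illustrative sketch, not the paper's actual architecture: the embedding dimension, the noise-based augmentation of the single reference, the random stand-ins for synthetic balanced samples, and the choice of logistic regression as the "small personalized network" are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM = 64  # hypothetical face-embedding size (assumption)

def enroll(reference_emb, synthetic_pool, epochs=200, lr=0.5):
    """Train a tiny per-identity binary classifier on face embeddings.

    One-shot enrollment: positives are noisy copies of the single
    reference embedding; negatives come from a synthetic balanced pool
    (here random embeddings as a stand-in)."""
    n = len(synthetic_pool)
    positives = reference_emb + 0.05 * rng.standard_normal((n, EMB_DIM))
    X = np.vstack([positives, synthetic_pool])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    w, b = np.zeros(EMB_DIM), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid scores
        grad = p - y                            # logistic-loss gradient
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def verify(emb, model, threshold=0.5):
    """Accept the probe embedding iff the personalized model scores it
    above the decision threshold."""
    w, b = model
    return 1.0 / (1.0 + np.exp(-(emb @ w + b))) >= threshold

# Demo: enroll one identity from a single (random stand-in) embedding.
ref = rng.standard_normal(EMB_DIM)
synthetic_pool = rng.standard_normal((100, EMB_DIM))
model = enroll(ref, synthetic_pool)
print(verify(ref, model))   # genuine probe
print(verify(-ref, model))  # impostor probe
```

Note the trade-off the abstract mentions: each enrolled identity stores its own model (here a weight vector and bias) rather than a single template vector, and verification runs that model per probe, which is why storage and inference cost grow relative to template matching.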