🤖 AI Summary
This study systematically assesses the privacy risks arising from facial motion data captured by mixed reality (MR) headsets, focusing on user re-identification and the inference of sensitive attributes, particularly emotional states. We propose a cross-device privacy analysis framework based on abstracted facial motion representations, integrating multimodal behavioral signals from the face, eyes, and head and modeling individual-specific patterns via machine learning. In a large-scale study with 116 participants, we demonstrate for the first time that even abstracted motion features retain strong discriminative power, achieving up to 98% balanced accuracy in identity re-identification (including across device types) and up to 86% accuracy in emotional-state inference. Our findings show that fine-grained motion data, often overlooked in MR privacy considerations, is highly identifying. This work provides empirical evidence and a methodological foundation for developing MR-specific privacy standards and effective anonymization mechanisms.
📝 Abstract
Facial motion capture in mixed reality headsets enables real-time avatar animation, allowing users to convey non-verbal cues during virtual interactions. However, because facial motion data constitutes a behavioral biometric, its use raises novel privacy concerns. As mixed reality systems become more immersive and widespread, understanding whether facial motion data can enable user identification or the inference of sensitive attributes is increasingly important.
To address this, we conducted a study with 116 participants using three types of headsets across three sessions, collecting facial, eye, and head motion data during verbal and non-verbal tasks. The data are not raw video but abstracted representations used to animate digital avatars. Our analysis shows that individuals can be re-identified from this data with up to 98% balanced accuracy and remain identifiable even across device types, and that emotional states can be inferred with up to 86% accuracy. These results underscore the privacy risks inherent in facial motion tracking in mixed reality environments.
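To make the re-identification threat model concrete, the following is a minimal sketch, not the authors' pipeline: it simulates per-user "abstracted" motion features (e.g., summary statistics over blendshape-like channels), re-identifies probe sessions with a nearest-template rule, and reports balanced accuracy (mean per-user recall). The number of users, channels, and noise levels are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each user has a characteristic mean over 52
# abstracted face-motion channels; sessions are noisy samples around it.
n_users, n_channels, n_probe_sessions = 20, 52, 5
user_means = rng.normal(0.0, 1.0, size=(n_users, n_channels))

def sample_session(u):
    """One session's feature vector for user u (mean + session noise)."""
    return user_means[u] + rng.normal(0.0, 0.3, size=n_channels)

# Enrollment: one session per user serves as that user's template.
templates = np.stack([sample_session(u) for u in range(n_users)])

# Probe phase: later sessions are matched to the nearest template.
correct = np.zeros(n_users)
total = np.zeros(n_users)
for u in range(n_users):
    for _ in range(n_probe_sessions):
        probe = sample_session(u)
        pred = int(np.argmin(np.linalg.norm(templates - probe, axis=1)))
        correct[u] += pred == u
        total[u] += 1

# Balanced accuracy = mean of per-user (per-class) recall,
# robust to unequal numbers of sessions per user.
balanced_acc = float(np.mean(correct / total))
print(f"balanced accuracy: {balanced_acc:.2f}")
```

Even this naive nearest-template matcher illustrates why abstracted features are risky: stable per-user offsets in the feature space act as a behavioral fingerprint, and balanced accuracy far above chance (1/n_users) indicates identifiability.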