FacialMotionID: Identifying Users of Mixed Reality Headsets using Abstract Facial Motion Representations

📅 2025-07-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically assesses privacy risks arising from facial motion data captured by mixed reality (MR) headsets, focusing on user re-identification and the inference of sensitive attributes, particularly emotional states. We propose a cross-device privacy analysis framework based on abstracted facial motion representations, integrating multimodal behavioral signals from the face, eyes, and head and modeling individual-specific patterns via machine learning. In a large-scale study with 116 participants, we demonstrate for the first time that even abstracted motion features retain strong discriminative power: identity re-identification reaches up to 98% balanced accuracy, including across device types, and emotional state inference reaches up to 86% accuracy. Our findings reveal that fine-grained motion data, often overlooked in MR privacy considerations, is highly identifying. This work provides critical empirical evidence and methodological foundations for developing MR-specific privacy standards and effective anonymization mechanisms.
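A minimal sketch of how such a re-identification evaluation could be set up, assuming per-clip summary features of the abstracted face, eye, and head motion and an off-the-shelf classifier. The feature layout, classifier choice, and data shapes below are illustrative placeholders, not the authors' actual pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder data: 116 simulated users with a few clips each. In a real setup,
# each row would be a statistical summary (e.g., mean/std over time) of the
# abstracted facial, eye, and head motion coefficients for one recording clip.
rng = np.random.default_rng(0)
n_users, clips_per_user, n_features = 116, 6, 120
y = np.repeat(np.arange(n_users), clips_per_user)   # user label per clip
X = rng.normal(size=(y.size, n_features))            # placeholder features

# Hold out clips per user, then train a classifier to predict who produced a clip.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)

# Balanced accuracy averages per-user recall, so every user counts equally
# regardless of how many clips they contributed.
print(balanced_accuracy_score(y_test, clf.predict(X_test)))
```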

📝 Abstract
Facial motion capture in mixed reality headsets enables real-time avatar animation, allowing users to convey non-verbal cues during virtual interactions. However, as facial motion data constitutes a behavioral biometric, its use raises novel privacy concerns. With mixed reality systems becoming more immersive and widespread, understanding whether facial motion data can lead to user identification or inference of sensitive attributes is increasingly important. To address this, we conducted a study with 116 participants using three types of headsets across three sessions, collecting facial, eye, and head motion data during verbal and non-verbal tasks. The data used is not raw video but abstract representations used to animate digital avatars. Our analysis shows that individuals can be re-identified from this data with up to 98% balanced accuracy, are even identifiable across device types, and that emotional states can be inferred with up to 86% accuracy. These results underscore the potential privacy risks inherent in facial motion tracking in mixed reality environments.
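To make "abstract representations" concrete, here is a hypothetical sketch of what one frame of such an avatar-animation stream might contain. The field names and value ranges are assumptions for illustration and are not taken from the paper:

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class AvatarMotionFrame:
    """One hypothetical frame of the abstracted motion stream (no raw video)."""
    # Facial expression as named weights in [0, 1], similar in spirit to the
    # blendshape coefficients headsets expose for avatar animation.
    face_weights: Dict[str, float] = field(default_factory=dict)
    # Eye gaze as a unit direction vector; head pose as Euler angles in degrees.
    gaze_direction: Tuple[float, float, float] = (0.0, 0.0, 1.0)
    head_euler_deg: Tuple[float, float, float] = (0.0, 0.0, 0.0)

# Example frame; names and values are illustrative only.
frame = AvatarMotionFrame(
    face_weights={"jawOpen": 0.12, "mouthSmileLeft": 0.40, "browInnerUp": 0.05},
    gaze_direction=(0.02, -0.10, 0.99),
    head_euler_deg=(1.5, -3.0, 0.2),
)
```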
Problem

Research questions and friction points this paper is trying to address.

Identifying users via facial motion in mixed reality headsets
Assessing privacy risks of facial motion biometric data
Evaluating cross-device re-identification from abstract motion representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Abstract facial motion representations for user identification
Re-identification with up to 98% balanced accuracy, including across devices
Emotional state inference from motion data (up to 86% accuracy)