🤖 AI Summary
This work addresses the privacy risks inherent in video understanding models, which, while achieving high action recognition performance, often inadvertently leak sensitive attributes such as identity, gender, and race. To mitigate this, the authors propose a spatiotemporal anonymization framework based on Vision Transformers that introduces dual classification tokens, one dedicated to action and one to privacy, within a unified architecture. By contrasting the attention distributions of these tokens, the method computes a utility-privacy score for each spatiotemporal tubelet and prunes low-scoring tubelets, disentangling utility-related from privacy-sensitive features. Extensive experiments demonstrate that the framework preserves action recognition accuracy comparable to that of the original videos while significantly reducing the leakage of sensitive attributes across multiple benchmarks.
📝 Abstract
Recent advances in large-scale video models have significantly improved video understanding across domains such as surveillance, healthcare, and entertainment. However, these models also amplify privacy risks by encoding sensitive attributes, including facial identity, race, and gender. While image anonymization has been extensively studied, video anonymization remains relatively underexplored, even though modern video models can leverage spatiotemporal motion patterns as biometric identifiers. To address this challenge, we propose a novel attention-driven spatiotemporal video anonymization framework based on systematic disentanglement of utility and privacy features. Our key insight is that attention mechanisms in Vision Transformers (ViTs) can be explicitly structured to separate action-relevant information from privacy-sensitive content. Building on this insight, we introduce two task-specific classification tokens, an action CLS token and a privacy CLS token, that learn complementary representations within a shared Transformer backbone. We contrast their attention distributions to compute a utility-privacy score for each spatiotemporal tubelet, and keep the top-k tubelets with the highest scores. This selectively prunes tubelets dominated by privacy cues while preserving those most critical for action recognition. Extensive experiments demonstrate that our approach maintains action recognition performance comparable to models trained on raw videos, while substantially reducing privacy leakage. These results indicate that attention-driven spatiotemporal pruning offers an effective and principled solution for privacy-preserving video analytics.
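The scoring-and-pruning step described above can be sketched in code. This is a minimal illustration, not the paper's implementation: the exact score form (action-CLS attention minus privacy-CLS attention), the function name, and the assumption that per-head attention has already been averaged into a single distribution per CLS token are all illustrative assumptions.

```python
import numpy as np

def utility_privacy_prune(action_attn, privacy_attn, tubelets, k):
    """Keep the k tubelets with the highest utility-privacy score.

    action_attn, privacy_attn: shape (N,), attention weights of the action
    and privacy CLS tokens over N spatiotemporal tubelets (assumed already
    averaged over heads and normalized).
    tubelets: shape (N, D), tubelet embeddings.
    Returns the kept embeddings and their indices.
    """
    # Hypothetical score: high where the action token attends strongly
    # and the privacy token does not, so privacy-dominated tubelets rank low.
    score = action_attn - privacy_attn
    keep = np.argsort(score)[::-1][:k]   # indices of the top-k scores
    return tubelets[keep], keep
```

In a full pipeline this selection would be applied per video (or per batch) before the pruned tubelet set is passed on for action recognition, so privacy-dominated regions never reach downstream heads.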