AI Summary
Driver distraction recognition faces two domain shift challenges in real-world deployment: cross-view shifts (due to camera placement variations) and cross-modal shifts (caused by sensor or environmental changes). Existing methods typically address these issues separately, limiting generalizability and scalability. This paper proposes the first two-stage framework for joint unsupervised domain adaptation across both view and modality. Stage one employs contrastive learning to extract view-invariant yet action-discriminative spatiotemporal features. Stage two introduces an information bottleneck loss to align the target domain without requiring any target-domain labels. Evaluated on the Drive&Act dataset using video Transformers (e.g., Video Swin, MViT), the method achieves a Top-1 accuracy of 89.2% on RGB inputs, nearly 50% higher than supervised contrastive baselines and up to 5% higher than state-of-the-art single-shift adaptation methods. The framework significantly enhances robustness and deployability in realistic driving scenarios.
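The summary's stage-two "information bottleneck loss" is a common family of regularizers; the paper's exact formulation is not given here, but a standard variational surrogate penalizes the KL divergence between the encoder's Gaussian posterior and a unit-Gaussian prior, compressing target-domain features toward a shared latent space. A minimal NumPy sketch of that surrogate (the function name and Gaussian-posterior assumption are illustrative, not the paper's definition):

```python
import numpy as np

def vib_kl_regularizer(mu: np.ndarray, log_var: np.ndarray) -> float:
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), averaged over the batch.

    mu, log_var: (batch, dim) mean and log-variance produced by a
    stochastic encoder head. This is the standard variational
    information-bottleneck penalty; the paper's loss may differ.
    """
    # Closed-form KL for diagonal Gaussians, summed over latent dims
    kl_per_sample = 0.5 * (np.exp(log_var) + mu**2 - 1.0 - log_var).sum(axis=1)
    return float(kl_per_sample.mean())
```

The penalty is zero exactly when the posterior matches the prior and grows as the encoder encodes more information per sample, which is what pushes source and unlabeled target features toward a common, compressed representation.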
Abstract
Driver distraction remains a leading cause of road traffic accidents, contributing to thousands of fatalities annually across the globe. While deep learning-based driver activity recognition methods have shown promise in detecting such distractions, their effectiveness in real-world deployments is hindered by two critical challenges: variations in camera viewpoints (cross-view) and domain shifts such as changes in sensor modality or environment. Existing methods typically address either cross-view generalization or unsupervised domain adaptation in isolation, leaving a gap in the robust and scalable deployment of models across diverse vehicle configurations. In this work, we propose a novel two-phase cross-view, cross-modal unsupervised domain adaptation framework that addresses these challenges jointly on real-time driver monitoring data. In the first phase, we learn view-invariant and action-discriminative features within a single modality using contrastive learning on multi-view data. In the second phase, we perform domain adaptation to a new modality using an information bottleneck loss, without requiring any labeled data from the new domain. We evaluate our approach using state-of-the-art video transformers (Video Swin, MViT) and the multimodal driver activity dataset Drive&Act, demonstrating that our joint framework improves top-1 accuracy on RGB video data by almost 50% compared to a supervised contrastive learning-based cross-view method, and outperforms unsupervised domain adaptation-only methods by up to 5% using the same video transformer backbone.
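The first phase's contrastive objective can be illustrated with an InfoNCE-style loss over paired embeddings of the same clip seen from two cameras: matching clips across views are positives, all other clips in the batch are negatives. A minimal NumPy sketch under that assumption (the paper's actual loss and augmentation scheme may differ):

```python
import numpy as np

def cross_view_info_nce(z_a: np.ndarray, z_b: np.ndarray,
                        temperature: float = 0.1) -> float:
    """InfoNCE between embeddings of the same clips from two views.

    z_a, z_b: (batch, dim) features from view A and view B; row i of
    each is the same driving clip. Illustrative sketch, not the
    paper's exact objective.
    """
    # L2-normalize so the dot product is cosine similarity
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    # Pairwise similarities scaled by temperature; positives on the diagonal
    logits = (z_a @ z_b.T) / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Pull matched cross-view pairs together, push mismatched clips apart
    return float(-np.mean(np.diag(log_prob)))
```

Minimizing this loss makes the embedding depend on the driver's action rather than the camera position, which is the "view-invariant yet action-discriminative" property the first phase targets.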