T-Mask: Temporal Masking for Probing Foundation Models across Camera Views in Driver Monitoring

📅 2025-08-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
In driver monitoring, severe viewpoint variation between cameras critically impairs model generalization to unseen views. To address this, we propose T-Mask, a lightweight image-to-video probing method that adds temporal modeling capability to frozen image foundation models (e.g., DINOv2, CLIP) without introducing additional parameters. T-Mask employs a temporal token masking mechanism that explicitly emphasizes the most dynamic regions in a video sequence before pooling tokens for video-level linear probing. On the Drive&Act benchmark, T-Mask achieves a +1.23% absolute improvement in cross-view top-1 accuracy over strong probing baselines and outperforms mainstream PEFT methods by +8.0%. For underrepresented secondary activities, it improves recognition accuracy by +5.42% (in-view) and +1.36% (cross-view), markedly strengthening robustness to unseen viewpoints and fine-grained action discrimination.

📝 Abstract
Changes of camera perspective are a common obstacle in driver monitoring. While deep learning and pretrained foundation models show strong potential for improved generalization via lightweight adaptation of the final layers ('probing'), their robustness to unseen viewpoints remains underexplored. We study this challenge by adapting image foundation models to driver monitoring using a single training view, and evaluating them directly on unseen perspectives without further adaptation. We benchmark simple linear probes, advanced probing strategies, and compare two foundation models (DINOv2 and CLIP) against parameter-efficient fine-tuning (PEFT) and full fine-tuning. Building on these insights, we introduce T-Mask -- a new image-to-video probing method that leverages temporal token masking and emphasizes more dynamic video regions. Benchmarked on the public Drive&Act dataset, T-Mask improves cross-view top-1 accuracy by +1.23% over strong probing baselines and +8.0% over PEFT methods, without adding any parameters. It proves particularly effective for underrepresented secondary activities, boosting recognition by +5.42% under the trained view and +1.36% under cross-view settings. This work provides encouraging evidence that adapting foundation models with lightweight probing methods like T-Mask has strong potential in fine-grained driver observation, especially in cross-view and low-data settings. These results highlight the importance of temporal token selection when leveraging foundation models to build robust driver monitoring systems. Code and models will be made available at https://github.com/th-nesh/T-MASK to support ongoing research.
Problem

Research questions and friction points this paper is trying to address.

Improving cross-view robustness in driver monitoring systems
Adapting foundation models to unseen camera perspectives
Enhancing recognition accuracy with temporal token masking
Innovation

Methods, ideas, or system contributions that make the work stand out.

Temporal token masking for cross-view adaptation
Image-to-video probing emphasizing dynamic regions
Parameter-free method improving cross-view accuracy
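The summary and bullets above describe temporal token masking only at a high level; since the paper's code is not reproduced here, the following is a minimal sketch of the core idea under stated assumptions: score per-frame patch tokens from a frozen backbone by how much they change over time, mask out the static ones, and pool the rest into a single video-level feature for a linear probe. The function name `t_mask_pool`, the `keep_ratio` parameter, and the use of mean L2 differences between consecutive frames as the dynamics score are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def t_mask_pool(tokens: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
    """Pool per-frame patch tokens into one video feature, keeping
    only the most temporally dynamic token positions.

    tokens: (T, N, D) array -- T frames, N patch tokens per frame,
            D channels, e.g. frozen DINOv2/CLIP features extracted
            independently for each frame.
    """
    T, N, D = tokens.shape
    # Dynamics score per token position: mean L2 distance between
    # the same token in consecutive frames.
    diffs = np.linalg.norm(tokens[1:] - tokens[:-1], axis=-1)  # (T-1, N)
    dynamics = diffs.mean(axis=0)                              # (N,)
    # Keep the top-k most dynamic positions; mask out the rest.
    k = max(1, int(round(keep_ratio * N)))
    keep = np.argsort(dynamics)[-k:]
    # Average the kept tokens over positions and frames, yielding a
    # single D-dim vector a linear probe can classify.
    return tokens[:, keep, :].mean(axis=(0, 1))

# Toy usage: 8 frames, 196 ViT patch tokens, 768-dim features.
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 196, 768)).astype(np.float32)
video_vec = t_mask_pool(feats, keep_ratio=0.25)
print(video_vec.shape)  # (768,)
```

Because the masking is a parameter-free selection step, it adds nothing to train beyond the linear probe itself, which is consistent with the paper's claim of improving cross-view accuracy without additional parameters.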