🤖 AI Summary
This work addresses the challenge of negative transfer in jointly learning heterogeneous perception tasks, such as driver behavior, driver emotion, vehicle behavior, and traffic context understanding, which can degrade the performance of advanced driver assistance systems. To mitigate this issue, the authors propose UV-M3TL, a unified multimodal multi-task learning framework built around a novel dual-branch spatial-channel multimodal embedding (DB-SCME) module that explicitly disentangles task-shared and task-specific features. In addition, an adaptive feature-decoupled multi-task loss (AFD-Loss) provides dynamic task weighting and stabilizes joint optimization. The proposed approach alleviates task conflicts and increases representational diversity, achieving state-of-the-art performance across all four tasks on the AIDE dataset and generalizing well to public benchmarks including BDD100K, Cityscapes, NYUD-v2, and PASCAL-Context.
📝 Abstract
Advanced Driver Assistance Systems (ADAS) need to understand human driver behavior while perceiving the surrounding navigation context, but jointly learning these heterogeneous tasks can cause inter-task negative transfer and impair system performance. Here, we propose a Unified and Versatile Multimodal Multi-Task Learning (UV-M3TL) framework that simultaneously recognizes driver behavior, driver emotion, vehicle behavior, and traffic context while mitigating inter-task negative transfer. Our framework incorporates two core components: a dual-branch spatial-channel multimodal embedding (DB-SCME) and an adaptive feature-decoupled multi-task loss (AFD-Loss). DB-SCME enhances cross-task knowledge transfer while mitigating task conflicts by employing a dual-branch structure that explicitly models salient task-shared and task-specific features. AFD-Loss improves the stability of joint optimization and guides the model toward diverse multi-task representations through an adaptive weighting mechanism based on learning dynamics together with feature-decoupling constraints. Experimental results on the AIDE dataset demonstrate that UV-M3TL achieves state-of-the-art performance across all four tasks. To further demonstrate its versatility, we evaluate UV-M3TL on additional public multi-task perception benchmarks (BDD100K, Cityscapes, NYUD-v2, and PASCAL-Context), where it consistently delivers strong performance across diverse task combinations and attains state-of-the-art results on most tasks.
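To make the dual-branch idea concrete, here is a minimal PyTorch sketch of what a spatial-channel dual-branch embedding could look like: one branch produces a channel-gated task-shared feature, the other produces spatially gated per-task features. The class name, layer sizes, and attention design are illustrative assumptions for exposition, not the paper's actual DB-SCME implementation.

```python
import torch
import torch.nn as nn

class DualBranchEmbedding(nn.Module):
    """Illustrative dual-branch module: a task-shared branch gated by
    channel attention and per-task branches gated by spatial attention.
    A sketch in the spirit of DB-SCME, not the paper's implementation."""

    def __init__(self, channels: int, num_tasks: int):
        super().__init__()
        # Channel attention (squeeze-and-excitation style) for the shared branch.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # One spatial attention gate per task-specific branch.
        self.spatial_gates = nn.ModuleList(
            nn.Sequential(nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid())
            for _ in range(num_tasks)
        )
        self.shared_proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.task_projs = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=1) for _ in range(num_tasks)
        )

    def forward(self, x: torch.Tensor):
        # Shared branch: channel-reweighted global features.
        shared = self.shared_proj(x * self.channel_gate(x))
        # Task-specific branches: spatially gated features, one per task.
        specific = [
            proj(x * gate(x)) for proj, gate in zip(self.task_projs, self.spatial_gates)
        ]
        return shared, specific
```

Usage would look like `shared, specific = DualBranchEmbedding(256, num_tasks=4)(fused)`, where `fused` is a hypothetical `(B, 256, H, W)` fused multimodal feature map; each task head then consumes `shared` together with its own entry in `specific`.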
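The abstract describes AFD-Loss only as adaptive weighting based on learning dynamics plus feature-decoupling constraints, so the sketch below swaps in two plainly named stand-ins: Kendall-style homoscedastic-uncertainty weighting for the adaptive part, and a cosine-similarity penalty between shared and task-specific embeddings for the decoupling part. The class name and hyperparameters are assumptions, not the paper's formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveDecoupledLoss(nn.Module):
    """Illustrative multi-task loss: learnable per-task weights plus a
    penalty pushing shared and task-specific embeddings apart. A stand-in
    for AFD-Loss, not the paper's actual objective."""

    def __init__(self, num_tasks: int, decouple_weight: float = 0.1):
        super().__init__()
        # log(sigma^2) per task, learned jointly with the network.
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))
        self.decouple_weight = decouple_weight

    def forward(self, task_losses, shared, specific):
        # Uncertainty-weighted sum: exp(-log_var_i) * L_i + log_var_i.
        total = sum(
            loss * torch.exp(-lv) + lv
            for loss, lv in zip(task_losses, self.log_vars)
        )
        # Decoupling: penalize per-sample cosine similarity between the
        # shared embedding and each task-specific embedding.
        s = F.normalize(shared.flatten(1), dim=1)  # (B, D)
        for feat in specific:
            t = F.normalize(feat.flatten(1), dim=1)  # (B, D)
            cos = (s * t).sum(dim=1)  # per-sample cosine similarity
            total = total + self.decouple_weight * cos.pow(2).mean()
        return total
```

Pairing this with the dual-branch sketch above, `task_losses` would be the four per-task losses and `shared`/`specific` the two branch outputs; the learned `log_vars` down-weight tasks whose losses are noisy or slow to improve, which is one common way to operationalize weighting by learning dynamics.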