🤖 AI Summary
Existing BEV perception and natural language captioning pipelines operate in isolation, without explicit multimodal alignment, which limits both 3D scene understanding and caption quality. To address this, the authors propose MTA, a unified multimodal task alignment framework with two mechanisms: BEV-Language Alignment (BLA), which aligns BEV scene representations with ground-truth language representations via contextual learning, and Detection-Captioning Alignment (DCA), which aligns detection and captioning outputs via cross-modal prompting. Both mechanisms jointly optimize perception and captioning during training only, so they add no inference-time overhead, and they integrate into state-of-the-art BEV baselines. Extensive experiments on nuScenes and TOD3Cap show consistent gains over state-of-the-art baselines: a 4.9% improvement in perception and a 9.2% improvement in captioning. The results demonstrate the value of unified alignment between geometric perception and linguistic generation in autonomous driving scenes.
📝 Abstract
Bird's eye view (BEV)-based 3D perception plays a crucial role in autonomous driving applications. The rise of large language models has spurred interest in BEV-based captioning to understand object behavior in the surrounding environment. However, existing approaches treat perception and captioning as separate tasks, focusing on the performance of only one of the tasks and overlooking the potential benefits of multimodal alignment. To bridge this gap between modalities, we introduce MTA, a novel multimodal task alignment framework that boosts both BEV perception and captioning. MTA consists of two key components: (1) BEV-Language Alignment (BLA), a contextual learning mechanism that aligns the BEV scene representations with ground-truth language representations, and (2) Detection-Captioning Alignment (DCA), a cross-modal prompting mechanism that aligns detection and captioning outputs. MTA integrates into state-of-the-art baselines during training, adding no extra computational complexity at runtime. Extensive experiments on the nuScenes and TOD3Cap datasets show that MTA significantly outperforms state-of-the-art baselines, achieving a 4.9% improvement in perception and a 9.2% improvement in captioning. These results underscore the effectiveness of unified alignment in reconciling BEV-based perception and captioning.
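The BLA component described above aligns BEV scene representations with language representations during training. The paper's exact loss is not given here, so the following is only a minimal illustrative sketch of one common way such an alignment could be implemented: a symmetric InfoNCE-style contrastive loss over paired BEV and text features (the function name, temperature value, and feature shapes are all assumptions for illustration).

```python
import numpy as np

def bev_language_alignment_loss(bev_feats, text_feats, temperature=0.07):
    """Illustrative symmetric contrastive loss aligning BEV features with
    paired language features. Row i of each array is assumed to be a
    matched (BEV scene, ground-truth caption) pair; this is a sketch,
    not the loss actually used by MTA."""
    # L2-normalize each modality so the dot product is cosine similarity
    b = bev_feats / np.linalg.norm(bev_feats, axis=1, keepdims=True)
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    logits = (b @ t.T) / temperature       # (N, N); matched pairs on diagonal
    labels = np.arange(len(b))

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the BEV->text and text->BEV directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Because the loss is only applied during training, discarding it at inference matches the paper's claim of no extra runtime complexity.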