🤖 AI Summary
This work addresses the challenge of natural language-driven 3D environmental understanding for autonomous vehicles and robots, introducing, for the first time, the 3D Referring Expression Comprehension (3D REC) task on 4D millimeter-wave radar point clouds. To support this task, we present Talk2Radar, the first large-scale radar-language paired dataset (8.7K samples). We further propose T-RadarNet, a model that employs a deformable Feature Pyramid Network (Deformable-FPN) to extract multi-scale point cloud features and a Gated Graph Fusion module to achieve fine-grained radar-text alignment and precise 3D localization. This work establishes the first benchmark for radar-based referring comprehension, and our method achieves state-of-the-art performance on Talk2Radar. All data, models, and code are publicly released, offering a new paradigm for robust multimodal perception and language grounding in embodied intelligence.
📝 Abstract
Embodied perception is essential for intelligent vehicles and robots to interactively understand their environments. However, existing advances focus primarily on vision, with limited attention to 3D modeling sensors, which restricts comprehensive understanding of objects in response to prompts containing both qualitative and quantitative queries. Recently, 4D millimeter-wave radars, a promising and affordable automotive sensor, have begun to provide denser point clouds than conventional radars and can perceive both the semantic and physical characteristics of objects, thereby enhancing the reliability of perception systems. To foster the development of natural language-driven context understanding in radar scenes for 3D visual grounding, we construct Talk2Radar, the first dataset bridging these two modalities for 3D Referring Expression Comprehension (REC). Talk2Radar contains 8,682 referring prompt samples with 20,558 referred objects. Moreover, we propose a novel model, T-RadarNet, for 3D REC on point clouds, which achieves State-Of-The-Art (SOTA) performance on the Talk2Radar dataset compared with counterpart models. Deformable-FPN and Gated Graph Fusion are meticulously designed for efficient point cloud feature modeling and cross-modal fusion between radar and text features, respectively. Comprehensive experiments provide deep insights into radar-based 3D REC. We release our project at https://github.com/GuanRunwei/Talk2Radar.
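To make the cross-modal fusion idea concrete, the sketch below shows a generic gated fusion layer that modulates radar BEV features with a sentence-level text embedding. This is an illustrative simplification under assumed tensor shapes, not the paper's actual Gated Graph Fusion module; the class name `GatedFusion` and all shapes are hypothetical.

```python
import torch
import torch.nn as nn


class GatedFusion(nn.Module):
    """Illustrative gated radar-text fusion (hypothetical sketch, not the
    paper's Gated Graph Fusion). A sigmoid gate, computed per spatial
    location from both modalities, blends radar and text features."""

    def __init__(self, channels: int):
        super().__init__()
        # Gate MLP over concatenated radar + text channels.
        self.gate = nn.Sequential(nn.Linear(2 * channels, channels), nn.Sigmoid())

    def forward(self, radar_feat: torch.Tensor, text_feat: torch.Tensor) -> torch.Tensor:
        # radar_feat: (B, C, H, W) BEV feature map; text_feat: (B, C) sentence embedding.
        B, C, H, W = radar_feat.shape
        # Broadcast the sentence embedding over the spatial grid.
        txt = text_feat[:, :, None, None].expand(B, C, H, W)
        # Compute the gate on channel-last layout for the Linear layer.
        g = self.gate(torch.cat([radar_feat, txt], dim=1).permute(0, 2, 3, 1))
        g = g.permute(0, 3, 1, 2)  # back to (B, C, H, W)
        # Convex blend: gate decides how much each location listens to language.
        return radar_feat * g + txt * (1 - g)
```

The fused map keeps the radar feature layout, so it can feed a standard 3D detection head; the actual model additionally reasons over graph structure among point features.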