Talk2Radar: Bridging Natural Language with 4D mmWave Radar for 3D Referring Expression Comprehension

📅 2024-05-21
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
This work addresses the challenge of natural language-driven 3D environmental understanding for autonomous vehicles and robots, introducing, for the first time, the 3D Referring Expression Comprehension (3D REC) task based on 4D millimeter-wave radar point clouds. To support this task, we present Talk2Radar, the first large-scale radar-language paired dataset (8.7K samples). We further propose T-RadarNet, a model that employs a deformable Feature Pyramid Network (Deformable-FPN) to extract multi-scale point cloud features and a graph-gated cross-modal fusion module to achieve fine-grained radar-text alignment and precise 3D localization. This work establishes the first benchmark for radar-based referring comprehension, and our method achieves state-of-the-art performance on Talk2Radar. All data, models, and code are publicly released, offering a novel paradigm for robust multimodal perception and language grounding in embodied intelligence.

📝 Abstract
Embodied perception is essential for intelligent vehicles and robots in interactive environmental understanding. However, these advancements primarily focus on vision, with limited attention given to using 3D modeling sensors, restricting a comprehensive understanding of objects in response to prompts containing qualitative and quantitative queries. Recently, as a promising automotive sensor with affordable cost, 4D millimeter-wave radars provide denser point clouds than conventional radars and perceive both semantic and physical characteristics of objects, thereby enhancing the reliability of perception systems. To foster the development of natural language-driven context understanding in radar scenes for 3D visual grounding, we construct the first dataset, Talk2Radar, which bridges these two modalities for 3D Referring Expression Comprehension (REC). Talk2Radar contains 8,682 referring prompt samples with 20,558 referred objects. Moreover, we propose a novel model, T-RadarNet, for 3D REC on point clouds, achieving State-Of-The-Art (SOTA) performance on the Talk2Radar dataset compared to counterparts. Deformable-FPN and Gated Graph Fusion are meticulously designed for efficient point cloud feature modeling and cross-modal fusion between radar and text features, respectively. Comprehensive experiments provide deep insights into radar-based 3D REC. We release our project at https://github.com/GuanRunwei/Talk2Radar.
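The Gated Graph Fusion module mentioned above combines radar point features with text features through learned gates. As a minimal illustrative sketch only (the function and parameter names `gated_fusion`, `W_gate`, and `b_gate` are assumptions for illustration, not the paper's actual API or architecture), a sigmoid-gated cross-modal fusion can be expressed as:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(radar_feat, text_feat, W_gate, b_gate):
    """Fuse per-point radar features with a sentence-level text feature
    via a learned sigmoid gate (illustrative sketch, not the paper's
    exact Gated Graph Fusion module).

    radar_feat: (N, D) per-point radar features
    text_feat:  (D,)   pooled text feature
    W_gate:     (2D, D) gate projection; b_gate: (D,) gate bias
    """
    # Broadcast the sentence-level text feature to every radar point
    text_tiled = np.broadcast_to(text_feat, radar_feat.shape)
    # Gate computed from the concatenated radar and text features
    gate = sigmoid(np.concatenate([radar_feat, text_tiled], axis=-1) @ W_gate + b_gate)
    # Convex combination of the two modalities, controlled per-channel by the gate
    return gate * radar_feat + (1.0 - gate) * text_tiled
```

Because the gate lies in (0, 1), each fused value is a per-channel convex combination of the radar and text features, letting the language query modulate which radar channels dominate.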
Problem

Research questions and friction points this paper is trying to address.

- Bridging natural language with 4D radar
- Enhancing 3D object comprehension in radar scenes
- Developing natural language-driven radar context understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

- 4D millimeter-wave radar
- Deformable-FPN
- Gated Graph Fusion
Runwei Guan
Hong Kong University of Science and Technology (Guangzhou) / Founder of FertiTech AI
Multi-Modal Learning, Unmanned Surface Vessel, Radar Perception, AI Medicine

Ruixiao Zhang
Department of EEE, University of

Ningwei Ouyang
Institute of Deep Perception Technology, JITRI, Wuxi, China; Department of EEE, University of

Jianan Liu
Unknown affiliation
Signal Processing, Deep Learning, Sensing and Perception, Autonomous Driving, Medical Imaging

Ka Lok Man
Professor, Xi'an Jiaotong-Liverpool University

Xiaohao Cai
School of Electronics and Computer Science, University of Southampton
Image Processing, Computer Vision, Machine Learning, Optimisation

Ming Xu
Department of EEE, University of

Jeremy S. Smith
Department of EEE, University of

Eng Gee Lim
Senior Member, IEEE

Yutao Yue
Institute of Deep Perception Technology, JITRI, Wuxi, China; Department of EEE, University of

Hui Xiong
Senior Scientist, Candela Corporation
Ultrafast dynamics, atomic molecular physics, free electron laser