A Survey of Multi-sensor Fusion Perception for Embodied AI: Background, Methods, Challenges and Prospects

📅 2025-06-24
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing surveys on multi-sensor fusion perception (MSFP) predominantly focus on single tasks (e.g., 3D detection) or isolated fusion dimensions (e.g., multimodality), lacking cross-task generalizability and methodological diversity. To address this, we propose the first task-agnostic, unified MSFP framework for embodied intelligence, systematically integrating four foundational fusion paradigms: multimodal fusion, multi-view fusion, temporal modeling, and multi-agent collaboration. We introduce a comprehensive taxonomy spanning all fusion dimensions, exposing critical gaps, including cross-modal alignment, long-horizon temporal modeling, and distributed coordination. Furthermore, we analyze emerging large-model-driven paradigms, such as vision-language model (VLM)-guided joint perception-decision optimization. This survey provides theoretical foundations and practical guidelines for downstream tasks including 3D object detection and semantic segmentation, advancing embodied perception systems toward greater generality, robustness, and collaborative intelligence.

๐Ÿ“ Abstract
Multi-sensor fusion perception (MSFP) is a key technology for embodied AI that serves a variety of downstream tasks (e.g., 3D object detection and semantic segmentation) and application scenarios (e.g., autonomous driving and swarm robotics). Recently, impressive achievements of AI-based MSFP methods have been reviewed in relevant surveys. However, after a rigorous and detailed investigation, we observe that existing surveys have some limitations. First, most surveys are oriented toward a single task or research field, such as 3D object detection or autonomous driving, so researchers working on other related tasks often find it difficult to benefit from them directly. Second, most surveys introduce MSFP only from the perspective of multi-modal fusion, overlooking the diversity of MSFP methods, such as multi-view fusion and time-series fusion. To this end, this paper organizes MSFP research from a task-agnostic perspective, reporting methods from various technical views. Specifically, we first introduce the background of MSFP. Next, we review multi-modal and multi-agent fusion methods, followed by an analysis of time-series fusion methods. In the era of large language models (LLMs), we also investigate multimodal LLM fusion methods. Finally, we discuss open challenges and future directions for MSFP. We hope this survey helps researchers understand the important progress in MSFP and provides insights for future research.
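To make the multi-modal fusion setting concrete, here is a minimal, illustrative sketch of feature-level camera/LiDAR fusion by concatenation and linear projection. All names, shapes, and weights are hypothetical placeholders, not the survey's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-cell features for one scene: 100 spatial cells,
# a 64-dim camera feature and a 32-dim LiDAR feature per cell.
camera_feat = rng.standard_normal((100, 64))
lidar_feat = rng.standard_normal((100, 32))

def concat_fusion(a: np.ndarray, b: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Feature-level fusion: concatenate per-cell features, then project."""
    fused = np.concatenate([a, b], axis=-1)  # (100, 96)
    return fused @ w                         # (100, out_dim)

# Illustrative projection into a shared 128-dim fused space.
w = rng.standard_normal((96, 128)) * 0.1
out = concat_fusion(camera_feat, lidar_feat, w)
print(out.shape)  # (100, 128)
```

In practice the projection would be a learned layer and the modalities would first be spatially aligned (e.g., LiDAR points projected into the camera frustum); this toy version only shows the concatenation-based fusion pattern.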
Problem

Research questions and friction points this paper is trying to address.

Existing MSFP surveys target a single task or field (e.g., 3D object detection, autonomous driving), so researchers in related tasks cannot benefit from them directly
Existing surveys cover only multi-modal fusion and overlook the diversity of MSFP methods (e.g., multi-view and time-series fusion)
MSFP research lacks a task-agnostic organization spanning different technical perspectives
Innovation

Methods, ideas, or system contributions that make the work stand out.

A task-agnostic organization of MSFP research across technical views
A review of multi-modal and multi-agent fusion methods
An analysis of time-series fusion and multimodal LLM fusion techniques
Authors
Shulan Ruan, Tsinghua University
Rongwei Wang, Tsinghua University
Xuchen Shen, Tsinghua University
Huijie Liu, University of Science and Technology of China
Baihui Xiao, Tsinghua University
Jun Shi, University of Science and Technology of China
Kun Zhang, Hefei University of Technology
Zhenya Huang, University of Science and Technology of China
Yu Liu, Tsinghua University
Enhong Chen, University of Science and Technology of China
You He, Tsinghua University