AI Summary
Existing surveys on multi-sensor fusion perception (MSFP) predominantly focus on single tasks (e.g., 3D detection) or isolated fusion dimensions (e.g., multimodality), lacking cross-task generalizability and methodological diversity. To address this, we propose the first task-agnostic, unified MSFP framework for embodied intelligence, systematically integrating four foundational fusion paradigms: multimodal fusion, multi-view fusion, temporal modeling, and multi-agent collaboration. We introduce a comprehensive taxonomy spanning all fusion dimensions, exposing critical gaps, including cross-modal alignment, long-horizon temporal modeling, and distributed coordination. Furthermore, we analyze emerging large-model-driven paradigms, such as vision-language model (VLM)-guided joint perception-decision optimization. This survey provides theoretical foundations and practical guidelines for downstream tasks including 3D object detection and semantic segmentation, advancing embodied perception systems toward greater generality, robustness, and collaborative intelligence.
Abstract
Multi-sensor fusion perception (MSFP) is a key technology for embodied AI, serving a variety of downstream tasks (e.g., 3D object detection and semantic segmentation) and application scenarios (e.g., autonomous driving and swarm robotics). Recently, AI-based MSFP methods have achieved impressive progress and have been reviewed in relevant surveys. However, after a rigorous and detailed investigation, we observe that existing surveys still have some limitations. For one thing, most surveys are oriented to a single task or research field, such as 3D object detection or autonomous driving, so researchers working on other related tasks often find it difficult to benefit from them directly. For another, most surveys introduce MSFP only from the perspective of multi-modal fusion, overlooking the diversity of MSFP methods such as multi-view fusion and time-series fusion. To this end, in this paper, we organize MSFP research from a task-agnostic perspective, reporting methods from various technical views. Specifically, we first introduce the background of MSFP. Next, we review multi-modal and multi-agent fusion methods. We then analyze time-series fusion methods. In the era of LLMs, we also investigate multimodal LLM fusion methods. Finally, we discuss open challenges and future directions for MSFP. We hope this survey helps researchers understand the important progress in MSFP and provides possible insights for future research.