Edge-Assisted Multi-Robot Visual-Inertial SLAM with Efficient Communication

📅 2026-03-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the performance bottlenecks in multi-robot SLAM systems caused by limited onboard computational resources, constrained communication bandwidth, and latency from long cloud links. The authors propose a centralized multi-robot visual-inertial SLAM framework based on a robot-edge-cloud hierarchical architecture. Their approach innovatively integrates a pyramid IMU-assisted lightweight optical flow tracker and employs lossless compression encoding of feature points and keyframe descriptors, substantially reducing both computational overhead and data transmission volume. Experimental results on the EuRoC dataset demonstrate that the proposed method achieves comparable or superior localization accuracy under limited bandwidth while significantly lowering communication costs and alleviating onboard computational load.

📝 Abstract
The integration of cloud and edge computing is an effective way to achieve globally consistent, real-time multi-robot Simultaneous Localization and Mapping (SLAM). Cloud computing compensates for the limited computing, communication, and storage capacity of onboard hardware; however, the constrained bandwidth and long communication links between robots and the cloud seriously degrade the performance of multi-robot SLAM systems. To reduce the computational cost of feature tracking and improve real-time performance on the robot, a lightweight SLAM method based on optical flow tracking with pyramid IMU prediction is proposed. On this basis, a centralized multi-robot SLAM system built on a robot-edge-cloud layered architecture is proposed to realize real-time collaborative SLAM, avoiding the limited onboard computing resources and low execution efficiency of a single robot. In this framework, only feature points and keyframe descriptors are transmitted, and they are losslessly encoded and compressed, enabling real-time remote information transmission under limited bandwidth. This design reduces the bandwidth actually occupied during data transmission without incurring the SLAM accuracy loss that lossy compression would cause. Experiments on the EuRoC dataset show that, compared with the state-of-the-art local feature compression method, our method transmits features at a lower data volume, and compared with advanced centralized multi-robot SLAM schemes, it achieves the same or better positioning accuracy under low computational load.
Problem

Research questions and friction points this paper is trying to address.

multi-robot SLAM
edge computing
communication bottleneck
resource-constrained robots
real-time localization
Innovation

Methods, ideas, or system contributions that make the work stand out.

edge-assisted SLAM
lightweight visual-inertial odometry
lossless feature compression
multi-robot collaboration
pyramid IMU prediction
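The lossless feature compression listed above can be sketched as a simple pack/unpack pair. The paper's actual encoder is not specified on this page; this sketch assumes ORB-style 32-byte binary descriptors, keeps keypoint coordinates at full float32 precision (so the round trip is exactly lossless), and uses DEFLATE (`zlib`) as a stand-in entropy coder. All names here are illustrative.

```python
import struct
import zlib

import numpy as np

def pack_keyframe(keypoints_xy, descriptors):
    """Losslessly serialize keyframe features for transmission.

    keypoints_xy : (N, 2) float32 pixel coordinates.
    descriptors  : (N, D) uint8 binary descriptors (e.g. 32-byte ORB).
    """
    n = len(keypoints_xy)
    raw = (struct.pack("<I", n)
           + keypoints_xy.astype("<f4").tobytes()
           + descriptors.astype(np.uint8).tobytes())
    return zlib.compress(raw, level=9)

def unpack_keyframe(blob):
    """Invert pack_keyframe exactly (bit-for-bit lossless)."""
    raw = zlib.decompress(blob)
    n = struct.unpack_from("<I", raw, 0)[0]
    off = 4
    kp = np.frombuffer(raw, dtype="<f4", count=2 * n, offset=off).reshape(n, 2)
    off += 8 * n
    desc = np.frombuffer(raw, dtype=np.uint8, offset=off).reshape(n, -1)
    return kp, desc
```

Because only keypoints and descriptors cross the link (no raw images), the payload per keyframe is a few kilobytes, and lossless coding guarantees the edge/cloud back end optimizes over exactly the measurements the robot observed.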
Xin Liu
Professor of Computer Science, Huaqiao University
multimedia analysis, pattern recognition, data mining
Shuhuan Wen
Department of Key Lab of Industrial Computer Control Engineering of Hebei Province, Engineering Research Center of the Ministry of Education for Intelligent Control System and Intelligent Equipment, Yanshan University, Qinhuangdao, 066004, China
Jing Zhao
Department of Key Lab of Industrial Computer Control Engineering of Hebei Province, Engineering Research Center of the Ministry of Education for Intelligent Control System and Intelligent Equipment, Yanshan University, Qinhuangdao, 066004, China
Tony Z. Qiu
Intelligent Transport System Research Center, Wuhan University of Technology, Wuhan, China, and the Department of Civil and Environmental Engineering at the University of Alberta, Canada
Hong Zhang
Chair Professor, SUSTech; Professor Emeritus, University of Alberta
robotics, computer vision, image processing