AI Summary
For long-term monitoring of hazardous underwater infrastructure, micro-robot swarms face high operational costs, elevated safety risks, and severe pose drift and spatiotemporal misalignment in imagery due to environmental disturbances. To address these challenges, this paper proposes an end-to-end deep visual modeling framework. Our approach integrates synthetic-data-driven simulation, a multimodal coordinate prediction network (jointly processing RGB images, semantic masks, and noisy pose inputs), and a geometry-constrained vision-geometry co-optimization mechanism. This enables robust cross-view spatiotemporal image alignment and reconstruction. Evaluated on simulated underwater tasks, the framework significantly improves pose estimation accuracy and image stitching consistency under noise and rotational disturbances, producing clear, temporally coherent visual models of infrastructure status. Results demonstrate both practical efficacy and deployability in extreme underwater environments.
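The multimodal coordinate prediction step described above can be sketched as follows. This is a minimal illustration only: random projections stand in for the learned encoders, and all names here (`encode_image`, `predict_coordinates`) are hypothetical, not taken from the released code. The idea it demonstrates is the fusion of three inputs — an RGB snapshot, a semantic mask giving global positional context, and a noisy coordinate prior — into one feature vector from which a corrected coordinate is regressed.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_image(img, out_dim=32):
    """Toy encoder stand-in: global average pool + random projection."""
    pooled = img.mean(axis=(0, 1))                     # (C,)
    W = rng.standard_normal((pooled.size, out_dim))    # untrained weights
    return pooled @ W                                  # (out_dim,)

def predict_coordinates(rgb, mask, noisy_xy):
    """Fuse the three modalities and regress a corrected (x, y).

    rgb:      (H, W, 3) snapshot from one robot
    mask:     (H, W, 1) semantic mask giving global positional context
    noisy_xy: (2,) drifted coordinate reported by the robot
    """
    feat = np.concatenate([
        encode_image(rgb),    # visual appearance
        encode_image(mask),   # global positional context
        noisy_xy,             # noisy pose prior
    ])
    W_head = rng.standard_normal((feat.size, 2))       # regression head
    return feat @ W_head                               # corrected (x, y)

# One 64x64 snapshot with a drifted coordinate.
rgb = rng.random((64, 64, 3))
mask = rng.random((64, 64, 1))
pred = predict_coordinates(rgb, mask, np.array([0.42, 0.77]))
print(pred.shape)  # (2,)
```

In the actual framework the encoders would be trained convolutional networks and the head would be supervised with ground-truth coordinates from the simulator; the sketch only fixes the data flow.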
Abstract
Long-term monitoring and exploration of extreme environments, such as underwater storage facilities, are costly, labor-intensive, and hazardous. Automating this process with low-cost, collaborative robots can greatly improve efficiency. These robots capture images from different positions, which must be processed jointly to create a spatio-temporal model of the facility. In this paper, we propose a novel approach that integrates data simulation, a multi-modal deep learning network for coordinate prediction, and image reassembly to address the challenges posed by environmental disturbances causing drift and rotation in the robots' positions and orientations. Our approach enhances alignment precision in noisy environments by integrating visual information from snapshots, global positional context from masks, and noisy coordinates. We validate our method through extensive experiments using synthetic data that simulate real-world robotic operations in underwater settings. The results demonstrate high coordinate-prediction accuracy and plausible image assembly, indicating the real-world applicability of our approach. The assembled images provide clear and coherent views of the underwater environment for effective monitoring and inspection. This showcases the potential for broader use in extreme settings, contributing to improved safety, efficiency, and cost reduction in hazardous field monitoring. Code is available at https://github.com/ChrisChen1023/Micro-Robot-Swarm.
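The final image-reassembly stage mentioned in the abstract can be sketched as follows; this is an assumption-laden simplification (the `reassemble` helper and last-write-wins blending are illustrative, not the paper's method), showing only the core idea of pasting each snapshot onto a shared canvas at its predicted coordinate.

```python
import numpy as np

def reassemble(snapshots, coords, canvas_hw):
    """Paste each snapshot onto a shared canvas at its (row, col) coordinate."""
    canvas = np.zeros((*canvas_hw, 3))
    for img, (y, x) in zip(snapshots, coords):
        h, w, _ = img.shape
        canvas[y:y + h, x:x + w] = img   # last-write-wins blending, for simplicity
    return canvas

# Two 8x8 tiles placed side by side on an 8x16 canvas.
tiles = [np.full((8, 8, 3), v) for v in (0.2, 0.8)]
mosaic = reassemble(tiles, [(0, 0), (0, 8)], (8, 16))
print(mosaic.shape)  # (8, 16, 3)
```

A real pipeline would additionally correct each snapshot's rotation before pasting and blend overlapping regions, since the predicted coordinates are only approximate.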