🤖 AI Summary
To address autonomous navigation of unknown space objects (e.g., debris, spacecraft) under complex illumination conditions—including eclipse and shadow—this paper proposes a physically consistent, pixel-level adaptive fusion method for visible-light and thermal-infrared imagery, integrated into an enhanced ORB-SLAM2 framework. We introduce the first dual-band physically based rendering simulator tailored to space objects, design a multi-scale feature-weighted fusion network, and establish a joint illumination-trajectory evaluation protocol. Experiments in low Earth orbit simulation scenarios demonstrate that the fused imagery reduces average SLAM position error by 62% compared to visible-light-only input and by 47% compared to thermal-infrared-only input. The approach significantly improves robustness, accuracy, and continuity of pose estimation across the full illumination cycle, overcoming the fundamental limitation of single-modality sensors failing in shadowed regions.
📝 Abstract
As the popularity of on-orbit operations grows, so does the need for precise navigation around unknown resident space objects (RSOs) such as other spacecraft, orbital debris, and asteroids. The use of Simultaneous Localization and Mapping (SLAM) algorithms is often studied as a method to map out the surface of an RSO and find the inspector's relative pose using a lidar or conventional camera. However, conventional cameras struggle during eclipse or shadowed periods, and lidar, though robust to lighting conditions, tends to be heavier, bulkier, and more power-intensive. Thermal-infrared cameras can track the target RSO throughout difficult illumination conditions without these limitations. While useful, thermal-infrared imagery lacks the resolution and feature-richness of visible cameras. In this work, images of a target satellite in low Earth orbit are photo-realistically simulated in both visible and thermal-infrared bands. Pixel-level fusion methods are used to create visible/thermal-infrared composites that leverage the best aspects of each camera. Navigation errors from a monocular SLAM algorithm are compared between visible, thermal-infrared, and fused imagery in various lighting and trajectories. Fused imagery yields substantially improved navigation performance over visible-only and thermal-only methods.