Navigation Around Unknown Space Objects Using Visible-Thermal Image Fusion

📅 2025-12-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address autonomous navigation of unknown space objects (e.g., debris, spacecraft) under complex illumination conditions—including eclipse and shadow—this paper proposes a physically consistent, pixel-level adaptive fusion method for visible-light and thermal-infrared imagery, integrated into an enhanced ORB-SLAM2 framework. We introduce the first dual-band physically based rendering simulator tailored to space objects, design a multi-scale feature-weighted fusion network, and establish a joint illumination-trajectory evaluation protocol. Experiments in low Earth orbit simulation scenarios demonstrate that the fused imagery reduces average SLAM position error by 62% compared to visible-light-only input and by 47% compared to thermal-infrared-only input. The approach significantly improves robustness, accuracy, and continuity of pose estimation across the full illumination cycle, overcoming the fundamental limitation of single-modality sensors failing in shadowed regions.

Technology Category

Application Category

📝 Abstract
As the popularity of on-orbit operations grows, so does the need for precise navigation around unknown resident space objects (RSOs) such as other spacecraft, orbital debris, and asteroids. The use of Simultaneous Localization and Mapping (SLAM) algorithms is often studied as a method to map out the surface of an RSO and find the inspector's relative pose using a lidar or conventional camera. However, conventional cameras struggle during eclipse or shadowed periods, and lidar, though robust to lighting conditions, tends to be heavier, bulkier, and more power-intensive. Thermal-infrared cameras can track the target RSO throughout difficult illumination conditions without these limitations. While useful, thermal-infrared imagery lacks the resolution and feature-richness of visible cameras. In this work, images of a target satellite in low Earth orbit are photo-realistically simulated in both visible and thermal-infrared bands. Pixel-level fusion methods are used to create visible/thermal-infrared composites that leverage the best aspects of each camera. Navigation errors from a monocular SLAM algorithm are compared between visible, thermal-infrared, and fused imagery in various lighting and trajectories. Fused imagery yields substantially improved navigation performance over visible-only and thermal-only methods.
Problem

Research questions and friction points this paper is trying to address.

Navigation around unknown space objects using image fusion
Overcoming lighting limitations in conventional and thermal cameras
Improving SLAM accuracy with visible-thermal composite imagery
Innovation

Methods, ideas, or system contributions that make the work stand out.

Visible-thermal image fusion for navigation
Monocular SLAM with pixel-level fusion
Simulated visible-thermal composites improve performance
🔎 Similar Papers
No similar papers found.
E
Eric J. Elias
Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
M
Michael Esswein
The Charles Stark Draper Laboratory Inc., Cambridge, MA 02139, USA
Jonathan P. How
Jonathan P. How
Ford Professor of Engineering, AA Dept., Massachusetts Institute of Technology
Control systemsMulti-agent systemsAerial RoboticsSensor FusionAutonomous Driving
D
David W. Miller
Massachusetts Institute of Technology, Cambridge, MA, 02139, USA