🤖 AI Summary
This work addresses pose estimation and 3D reconstruction of non-cooperative space targets under severe conditions: monochromatic imaging, unknown initial orientation, limited viewing angles, and the absence of diffuse lighting. To this end, the authors jointly optimize a Neural Radiance Field (NeRF) and the camera poses, combining differentiable rendering with frame-wise pose refinement and a regularizer that keeps successive pose estimates close, mitigating the pose ambiguity and reconstruction degradation caused by sparse observations. Training on successive images one at a time yields the most accurate geometric reconstruction and the most stable pose convergence on simulated space imagery, improving the robustness of 3D perception for Space Situational Awareness under these weakly constrained conditions.
📝 Abstract
Obtaining better knowledge of the current state and behavior of objects orbiting Earth has proven essential for a range of applications such as active debris removal, in-orbit maintenance, and anomaly detection. 3D models represent a valuable source of information in the field of Space Situational Awareness (SSA). In this work, we leveraged Neural Radiance Fields (NeRF) to perform 3D reconstruction of non-cooperative space objects from simulated images. This scenario is challenging for NeRF models due to unusual camera characteristics and environmental conditions: monochromatic images, unknown object orientation, limited viewing angles, absence of diffuse lighting, etc. We focus primarily on the joint optimization of camera poses alongside the NeRF. Our experimental results show that the most accurate 3D reconstruction is achieved when training with successive images one by one. We estimate camera poses by optimizing a uniform rotation and use regularization to prevent successive poses from being too far apart.
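The regularization idea in the last sentence can be sketched concretely. A minimal illustration, not the paper's actual implementation: penalize the geodesic angle between consecutive camera rotations so that pose estimates sampled along an (assumed) uniform rotation stay close to each other. The function names `rotation_matrix` and `pose_smoothness_loss` are illustrative choices, not identifiers from the paper.

```python
import numpy as np

def rotation_matrix(axis, angle):
    # Rodrigues' formula: rotation of `angle` radians about the unit vector `axis`.
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def pose_smoothness_loss(rotations):
    # Sum of squared geodesic angles between consecutive rotations:
    # theta = arccos((trace(R_prev^T R_next) - 1) / 2).
    # Large jumps between successive pose estimates are penalized quadratically.
    loss = 0.0
    for R_prev, R_next in zip(rotations[:-1], rotations[1:]):
        cos_theta = (np.trace(R_prev.T @ R_next) - 1.0) / 2.0
        theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
        loss += theta ** 2
    return loss

# Poses sampled along a uniform rotation (constant 0.1 rad step) incur a
# small penalty; a trajectory with one large jump inflates the loss.
z_axis = [0.0, 0.0, 1.0]
smooth = [rotation_matrix(z_axis, 0.1 * i) for i in range(5)]
jumpy = [rotation_matrix(z_axis, a) for a in [0.0, 0.1, 1.5, 1.6, 1.7]]
print(pose_smoothness_loss(smooth) < pose_smoothness_loss(jumpy))  # True
```

In a joint NeRF/pose optimization this term would be added to the photometric rendering loss, so that frame-wise pose updates cannot drift arbitrarily far from their neighbors.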