🤖 AI Summary
This paper introduces a novel paradigm for single-view, model-free, correspondence-free 6D object pose estimation that eliminates reliance on CAD models, multi-stage regression, 2D-3D feature matching, and geometric priors such as depth, SfM, or PnP. Methodologically, it proposes the first Axis Generation (AG) framework: a diffusion model explicitly learns the joint distribution of three orthogonal object axes, and two dedicated components, an Axis Generation Module (AGM) and a Triaxial Back-projection Module (TBM), jointly recover the pose through geometric-consistency-aware gradient injection into the noise prediction. Crucially, the method operates on a single RGB image alone, requiring no reference images or appearance priors. Evaluated across multiple benchmarks, it demonstrates strong cross-instance generalization, significantly improving deployment efficiency and robustness. This work points toward a scalable, open-world solution for 6D pose estimation.
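The gradient-injection idea in the summary can be sketched in miniature: score the generated tri-axis against an orthogonality constraint and add the gradient of that loss to the predicted noise, classifier-guidance style. The function names, the 3x3 nested-list representation, the finite-difference gradient, and the guidance scale below are all illustrative assumptions, not the paper's actual implementation.

```python
def orthogonality_loss(axes):
    # axes: three 3-vectors (the generated tri-axis, as rows).
    # Penalize deviation of the Gram matrix from the identity,
    # i.e., axes that are not unit-length and mutually orthogonal.
    loss = 0.0
    for i in range(3):
        for j in range(3):
            dot = sum(axes[i][k] * axes[j][k] for k in range(3))
            target = 1.0 if i == j else 0.0
            loss += (dot - target) ** 2
    return loss

def guided_noise(eps_pred, axes, scale=0.1, h=1e-4):
    # Inject the (finite-difference) gradient of the consistency loss
    # into the predicted noise, in the spirit of classifier guidance.
    guided = [row[:] for row in eps_pred]
    for i in range(3):
        for j in range(3):
            plus = [row[:] for row in axes]
            plus[i][j] += h
            minus = [row[:] for row in axes]
            minus[i][j] -= h
            g = (orthogonality_loss(plus) - orthogonality_loss(minus)) / (2 * h)
            guided[i][j] += scale * g
    return guided
```

When the axes are already orthonormal (e.g., the identity frame), the injected gradient essentially vanishes and the noise prediction is left unchanged; the guidance only steers samples that drift away from a consistent tri-axis.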
📝 Abstract
Object pose estimation, which plays a vital role in robotics, augmented reality, and autonomous driving, has long been of great interest in computer vision. Existing studies either require multi-stage pose regression or rely on 2D-3D feature matching. Although these approaches have shown promising results, they depend heavily on appearance information, requiring complex inputs (e.g., multi-view reference images, depth, or CAD models) and intricate pipelines (e.g., feature extraction, SfM, 2D-to-3D matching, then PnP). We propose AxisPose, a model-free, matching-free, single-shot solution for robust 6D pose estimation that fundamentally diverges from the existing paradigm. Unlike existing methods that rely on 2D-3D or 2D-2D matching via 3D techniques such as SfM and PnP, AxisPose directly infers a robust 6D pose from a single view, leveraging a diffusion model to learn the latent axis distribution of objects without reference views. Specifically, AxisPose constructs an Axis Generation Module (AGM) that captures the latent geometric distribution of object axes through a diffusion model. The diffusion process is guided by injecting the gradient of a geometric consistency loss into the noise estimation, maintaining the geometric consistency of the generated tri-axis. With the generated tri-axis projection, AxisPose further adopts a Triaxial Back-projection Module (TBM) to recover the 6D pose from the object tri-axis. AxisPose achieves robust performance at the cross-instance level (i.e., one model for N instances) using only a single view as input, without reference images, and shows strong potential to generalize to the unseen-object level.
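The abstract does not detail how the TBM recovers pose from the generated tri-axis, but one standard ingredient of any such recovery is snapping noisy generated axis directions to a valid right-handed rotation. The sketch below uses plain Gram-Schmidt orthonormalization as an illustrative assumption; the paper's actual back-projection from tri-axis projections is presumably more involved.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def axes_to_rotation(x_axis, y_axis):
    # Build an orthonormal, right-handed frame from two noisy generated
    # axes via Gram-Schmidt; the third axis follows from the cross product.
    x = normalize(x_axis)
    d = sum(xi * yi for xi, yi in zip(x, y_axis))
    y = normalize([yi - d * xi for xi, yi in zip(x, y_axis)])
    z = cross(x, y)
    return [x, y, z]  # rows of the recovered rotation matrix
```

Orthonormalizing two axes and taking their cross product guarantees the result lies in SO(3): rows are unit-length, mutually orthogonal, and form a proper rotation with determinant +1.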