🤖 AI Summary
This paper addresses unsupervised 3D keypoint estimation from a single image—without manual annotations or multi-view calibration data. The method leverages a pre-trained multi-view diffusion model as a geometric prior, generating synthetic multi-view images and intermediate features to construct self-supervised multi-view consistency signals. It introduces a 2D-to-3D feature volume mapping mechanism that explicitly encodes 3D structure, combined with feature matching and self-supervised optimization for robust keypoint regression. Evaluated on real-world benchmarks—including Human3.6M and Stanford Dogs—the approach significantly outperforms existing unsupervised methods in accuracy and generalization. Moreover, it enables controllable 3D editing of generated objects. By unifying generative priors with geometric reasoning, the framework establishes a scalable, annotation-free paradigm for unsupervised 3D pose estimation.
📝 Abstract
This paper introduces KeyDiff3D, a framework for unsupervised monocular 3D keypoint estimation that accurately predicts 3D keypoints from a single image. While previous methods rely on manual annotations or calibrated multi-view images, both of which are expensive to collect, our method requires only a collection of single-view images. To achieve this, we leverage the powerful geometric priors embedded in a pretrained multi-view diffusion model. In our framework, this model generates multi-view images from a single input image, which serve as a supervision signal providing 3D geometric cues to our model. We also use the diffusion model as a powerful multi-view 2D feature extractor and construct 3D feature volumes from its intermediate representations, transforming the implicit 3D priors learned by the diffusion model into explicit 3D features. Beyond accurate keypoint estimation, we further introduce a pipeline that enables manipulation of 3D objects generated by the diffusion model. Experiments on diverse datasets, including Human3.6M, Stanford Dogs, and several in-the-wild and out-of-domain benchmarks, demonstrate the effectiveness of our method in terms of accuracy, generalization, and controllable manipulation of 3D objects generated from a single image.
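The core idea of lifting 2D multi-view features into an explicit 3D feature volume can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: it assumes known pinhole intrinsics and extrinsics for the generated views, nearest-neighbour sampling, and mean pooling across views; the actual feature source (diffusion-model intermediates), sampling scheme, and aggregation in KeyDiff3D are not specified here.

```python
import numpy as np

def unproject_features(feat_maps, extrinsics, intrinsics,
                       grid_res=16, grid_extent=1.0):
    """Aggregate per-view 2D feature maps into a 3D feature volume.

    feat_maps:  (V, H, W, C) 2D features from each generated view
    extrinsics: (V, 3, 4) world-to-camera [R|t] per view
    intrinsics: (3, 3) shared pinhole intrinsics
    Returns:    (grid_res, grid_res, grid_res, C) mean-pooled volume
    """
    V, H, W, C = feat_maps.shape
    # Regular 3D grid of query points in world space
    lin = np.linspace(-grid_extent, grid_extent, grid_res)
    xs, ys, zs = np.meshgrid(lin, lin, lin, indexing="ij")
    pts = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)      # (N, 3)

    volume = np.zeros((pts.shape[0], C))
    counts = np.zeros((pts.shape[0], 1))
    for v in range(V):
        R, t = extrinsics[v, :, :3], extrinsics[v, :, 3]
        cam = pts @ R.T + t                                   # world -> camera
        valid = cam[:, 2] > 1e-6                              # in front of camera
        proj = cam @ intrinsics.T
        uv = proj[:, :2] / np.clip(proj[:, 2:3], 1e-6, None)  # perspective divide
        u = np.round(uv[:, 0]).astype(int)
        w = np.round(uv[:, 1]).astype(int)
        inside = valid & (u >= 0) & (u < W) & (w >= 0) & (w < H)
        # Nearest-neighbour sample of the view's feature map
        volume[inside] += feat_maps[v, w[inside], u[inside]]
        counts[inside] += 1
    volume /= np.clip(counts, 1, None)                        # mean over views
    return volume.reshape(grid_res, grid_res, grid_res, C)
```

Each voxel thus collects the 2D features it projects onto across all generated views, which is what turns the diffusion model's implicit multi-view priors into an explicit, queryable 3D representation.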