🤖 AI Summary
Addressing the challenge of simultaneously achieving high accuracy and generalization to unseen CAD models in in-hand object pose estimation, this paper proposes a simulation-based visuo-tactile fusion framework. Methodologically, it introduces an energy-based diffusion model into a render-compare architecture for the first time, unifying candidate sampling, iterative refinement, and post-ranking within a fully differentiable, end-to-end trainable pipeline. The approach is trained purely on synthetic visuo-tactile data, eliminating the need for real-world annotations. Key contributions include: (1) category-level generalization with explicit uncertainty quantification; (2) substantially improved sim-to-real transfer; and (3) superior accuracy, robustness, and intra-category generalization compared with existing regression-, matching-, and registration-based methods, particularly on high-precision tasks such as USB plug insertion.
📝 Abstract
Accurate estimation of the in-hand pose of an object based on its CAD model is crucial in both industrial applications and everyday tasks, ranging from positioning workpieces and assembling components to seamlessly inserting devices such as USB connectors. While existing methods often rely on regression, feature matching, or registration techniques, achieving high precision and generalizability to unseen CAD models remains a significant challenge. In this paper, we propose a novel three-stage framework for in-hand pose estimation. The first stage samples and pre-ranks pose candidates; the second stage iteratively refines these candidates; and the final stage applies post-ranking to identify the most likely pose estimates. All three stages are governed by a unified energy-based diffusion model trained solely on simulated data. This energy model simultaneously generates gradients to refine pose estimates and produces an energy scalar that quantifies the quality of each estimate. Additionally, borrowing an idea from the computer vision domain, we incorporate a render-compare architecture within the energy-based score network, which significantly enhances sim-to-real performance, as demonstrated by our ablation studies. We conduct comprehensive experiments showing that our method outperforms conventional baselines based on regression, matching, and registration techniques, while also exhibiting strong intra-category generalization to previously unseen CAD models. Moreover, our approach integrates tactile object pose estimation, pose tracking, and uncertainty estimation into a unified framework, enabling robust performance across a variety of real-world conditions.
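The three-stage pipeline described above (sample and pre-rank candidates, refine them by descending the energy gradient, then post-rank by the energy scalar) can be sketched with a toy numerical example. This is a minimal illustration, not the paper's implementation: the quadratic `energy` function below is a hypothetical stand-in for the learned render-compare energy network, and the 3-DoF translation-only "pose" is a simplification of the full 6-DoF problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth pose (translation only, for illustration).
true_pose = np.array([0.2, -0.1, 0.3])

def energy(pose):
    """Toy stand-in for the learned energy network: low energy near the
    true pose. In the paper, this scalar would come from comparing a
    rendered view of the CAD model against the observed sensor data."""
    return float(np.sum((pose - true_pose) ** 2))

def energy_grad(pose):
    """Analytic gradient of the toy energy. The real model would obtain
    this by backpropagating through the energy-based score network."""
    return 2.0 * (pose - true_pose)

# Stage 1: sample pose candidates and pre-rank them by energy,
# keeping only the most promising ones.
candidates = rng.uniform(-1.0, 1.0, size=(64, 3))
order = np.argsort([energy(p) for p in candidates])
candidates = candidates[order][:8]

# Stage 2: iteratively refine each surviving candidate by descending
# the energy gradient (a simple fixed-step descent here; the paper uses
# a diffusion-style iterative update).
step = 0.1
for _ in range(50):
    grads = np.array([energy_grad(p) for p in candidates])
    candidates = candidates - step * grads

# Stage 3: post-rank the refined candidates. The lowest-energy pose is
# the final estimate, and the spread of energies across candidates can
# serve as a crude uncertainty proxy.
energies = np.array([energy(p) for p in candidates])
best = candidates[np.argmin(energies)]
print("estimated pose:", best, "energy:", energies.min())
```

Because a single scalar energy drives sampling, refinement, and ranking, all three stages share one model, which is what allows the paper's pipeline to be trained end to end.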