ReFlow6D: Refraction-Guided Transparent Object 6D Pose Estimation via Intermediate Representation Learning

📅 2024-11-01
🏛️ IEEE Robotics and Automation Letters
📈 Citations: 0
Influential: 0
🤖 AI Summary
Estimating the 6D pose (3D position + 3D orientation) of transparent objects from RGB images alone remains challenging under complex illumination due to severe appearance ambiguity and a lack of reliable geometric cues. To address this, we propose a refraction-guided intermediate representation learning framework: (1) modeling the light-path distortion induced by refraction and reflection to construct environment-agnostic, depth-agnostic, and appearance-invariant features; (2) introducing a transparent-object-specific compositing loss to enhance the discriminability of the intermediate representation; and (3) designing an end-to-end RGB-only pose regression network. Our key contribution is the first integration of physically grounded light-path modeling into 6D pose estimation, eliminating reliance on depth sensors and sensitivity to appearance variations. Extensive experiments demonstrate significant improvements over state-of-the-art methods on the TOD and Trans32K-6D benchmarks. Furthermore, real-world robotic grasping evaluations confirm that our high-accuracy pose estimates translate directly into substantially improved manipulation success rates.

📝 Abstract
Transparent objects are ubiquitous in daily life, making their perception and robotic manipulation important. However, their distinct refractive and reflective properties make accurately estimating their 6D pose a major challenge. To solve this, we present ReFlow6D, a novel method for transparent object 6D pose estimation that harnesses a refractive-intermediate representation. Unlike conventional approaches, our method leverages a feature space impervious to changes in RGB image space and independent of depth information. Drawing inspiration from image matting, we model the deformation of the light path through transparent objects, yielding a unique object-specific intermediate representation guided by light refraction that is independent of the environment in which objects are observed. By integrating these intermediate features into the pose estimation network, we show that ReFlow6D achieves precise 6D pose estimation of transparent objects using only RGB images as input. Our method further introduces a novel transparent object compositing loss, fostering the generation of superior refractive-intermediate features. Empirical evaluations show that our approach significantly outperforms state-of-the-art methods on the TOD and Trans32K-6D datasets. Robot grasping experiments further demonstrate that ReFlow6D's pose estimation accuracy effectively translates to real-world robotic tasks.
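The matting-inspired idea in the abstract can be illustrated with a minimal sketch: a transparent object is modeled as warping the background behind it, so a per-pixel refractive flow field plus an attenuation map suffice to re-composite the observed image, and a photometric compositing loss compares that re-composition against the observation. The names below (`refractive_flow`, `attenuation`, `compositing_loss`) and the nearest-neighbor warping are illustrative assumptions, not the paper's actual formulation or code.

```python
import numpy as np

def composite(background, refractive_flow, attenuation):
    """Re-render a transparent object by warping the background.

    background:      (H, W, 3) RGB image behind the object
    refractive_flow: (H, W, 2) per-pixel offsets modeling light-path
                     deformation through the object (illustrative)
    attenuation:     (H, W, 1) per-pixel transmission factor in [0, 1]
    """
    h, w = background.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Displace each pixel's sampling location by the refractive flow
    # and clamp to the image bounds (nearest-neighbor sampling).
    sx = np.clip(np.round(xs + refractive_flow[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(ys + refractive_flow[..., 1]).astype(int), 0, h - 1)
    return attenuation * background[sy, sx]

def compositing_loss(observed, background, refractive_flow, attenuation):
    # L2 photometric error between the re-composited and observed image.
    recomposited = composite(background, refractive_flow, attenuation)
    return np.mean((recomposited - observed) ** 2)
```

Note the sanity check built into this formulation: a zero flow field with unit attenuation reproduces the background exactly, so the loss vanishes where no transparent object is present.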
Problem

Research questions and friction points this paper is trying to address.

6D pose estimation
transparent objects
complex lighting conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

ReFlow6D
Transparent Object Pose Estimation
Robust Training Strategy
Hrishikesh Gupta
Vision for Robotics Laboratory, Automation and Control Institute, TU Wien, Austria
Stefan Thalhammer
UAS Technikum Vienna
Computer Vision, Robotics, Machine Learning
Jean-Baptiste Weibel
BOKU University
Computer Vision for Robotics, 3D Vision
Alexander Haberl
Vision for Robotics Laboratory, Automation and Control Institute, TU Wien, Austria
Markus Vincze
TU Wien
Robot vision, home robotics, making robots see