🤖 AI Summary
Transparent objects—characterized by low texture, high reflectivity, and strong refraction—severely degrade depth estimation from standard 3D sensors (e.g., stereo cameras), hindering robotic manipulation tasks such as grasping. To address this, we propose a structural-feature-guided post-fusion module that effectively integrates depth maps with geometric priors. We further design a parameter-aligned, physically grounded Sim2Real simulation framework to drastically reduce reliance on scarce real-world transparent-object data. Our method synergistically incorporates visual Transformers, domain-adaptive stereo matching, and AI-accelerated physically based rendering. Evaluated across laboratory and real-world settings, the approach achieves sub-centimeter depth reconstruction accuracy on diverse transparent containers under complex illumination conditions. It enables stable, robust robotic manipulation and establishes a transferable technical paradigm for transparent-object perception.
📝 Abstract
Transparent object depth perception poses a challenge in everyday life and logistics, primarily because standard 3D sensors cannot accurately capture depth on transparent or reflective surfaces. This limitation significantly affects applications that rely on depth maps and point clouds, especially robotic manipulation. We developed a vision-transformer-based algorithm for stereo depth recovery of transparent objects, complemented by an innovative feature post-fusion module that enhances depth-recovery accuracy by leveraging structural features in images. To address the high cost of dataset collection for stereo-camera-based perception of transparent objects, our method incorporates a parameter-aligned, domain-adaptive, and physically realistic Sim2Real simulation for efficient data generation, accelerated by AI algorithms. Our experimental results demonstrate the model's exceptional Sim2Real generalizability in real-world scenarios, enabling precise depth mapping of transparent objects to assist robotic manipulation. Project details are available at https://sites.google.com/view/cleardepth/.
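The post-fusion idea, weighting depth estimates by structural features extracted from the image, can be illustrated with a minimal NumPy sketch. This is not the paper's actual module (which is learned and Transformer-based); the function name, the gradient-magnitude confidence, and the 3×3 box filter standing in for the smoothing branch are all illustrative assumptions.

```python
import numpy as np

def structural_post_fusion(depth, image, alpha=0.5):
    """Illustrative sketch: blend a raw stereo depth map with a smoothed
    version, weighted by image-gradient magnitude, so the raw depth is
    trusted more where the image shows strong structure (edges, texture)."""
    # gradient-magnitude "structural feature" map, normalized to [0, 1]
    gy, gx = np.gradient(image.astype(float))
    structure = np.hypot(gx, gy)
    structure = structure / (structure.max() + 1e-8)

    # 3x3 box filter as a stand-in for a learned smoothing branch
    pad = np.pad(depth.astype(float), 1, mode="edge")
    h, w = depth.shape
    smooth = sum(pad[i:i + h, j:j + w]
                 for i in range(3) for j in range(3)) / 9.0

    # keep raw depth near strong structure, smoothed depth elsewhere
    weight = alpha * structure
    return weight * depth + (1 - weight) * smooth
```

In the paper itself the fusion weights and features are learned end to end; this sketch only conveys the structure-guided weighting concept.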