🤖 AI Summary
Transparent object perception suffers from severe degradation in depth estimation and semantic segmentation due to complex optical properties (e.g., refraction and reflection), while existing multi-task approaches are prone to detrimental task interference. To address this, we propose an edge-guided spatial attention mechanism that explicitly models the geometric–semantic correlation within transparent regions. We further design a label-efficient, progressive multimodal training strategy that requires no ground-truth depth supervision: using only RGB input, the model is progressively guided toward the edge structures of predicted depth maps. Our framework jointly optimizes monocular depth estimation and semantic segmentation. Evaluated on the Syn-TODD and ClearPose benchmarks, it surpasses the state-of-the-art MODEST method, achieving a 12.7% relative improvement in depth accuracy in transparent regions while maintaining high segmentation accuracy.
📝 Abstract
Transparent object perception remains a major challenge in computer vision, as transparency confounds both depth estimation and semantic segmentation. Recent work has explored multi-task learning frameworks to improve robustness, yet negative cross-task interactions often hinder performance. In this work, we introduce Edge-Guided Spatial Attention (EGSA), a fusion mechanism that mitigates destructive interactions by conditioning the fusion of semantic and geometric features on boundary information. On both the Syn-TODD and ClearPose benchmarks, EGSA consistently improves depth accuracy over the current state-of-the-art method (MODEST) while preserving competitive segmentation performance, with the largest gains in transparent regions. Beyond the fusion design, our second contribution is a multi-modal progressive training strategy in which supervision transitions from edges derived from RGB images to edges derived from predicted depth maps. This lets the system bootstrap learning from the rich textures in RGB images and then shift to the more relevant geometric content in depth maps, while eliminating the need for ground-truth depth at training time. Together, these contributions establish edge-guided fusion as a robust approach to transparent object perception.
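To make the edge-guided fusion idea concrete, here is a minimal NumPy sketch of one plausible simplification (our own illustration, not the paper's actual architecture): an edge magnitude map, computed here with Sobel filters, acts as a spatial gate that favors geometric features near boundaries and semantic features elsewhere. The function names `sobel_edges` and `edge_guided_fusion` are hypothetical; in a real model the gate would be produced by learned convolutions rather than a fixed normalization.

```python
import numpy as np

def sobel_edges(img):
    """Approximate edge magnitude with 3x3 Sobel filters (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def edge_guided_fusion(sem_feat, geo_feat, edge_map):
    """Blend semantic and geometric features with an edge-derived spatial gate.

    Near boundaries (high edge response) the gate favors geometric features;
    away from boundaries, semantic features dominate.
    """
    gate = edge_map / (edge_map.max() + 1e-8)  # normalize edge response to [0, 1]
    return gate * geo_feat + (1.0 - gate) * sem_feat

# Toy example: a vertical step edge between two flat regions.
img = np.zeros((6, 6))
img[:, 3:] = 1.0
edges = sobel_edges(img)                     # (4, 4) edge magnitude map
sem = np.full(edges.shape, 2.0)              # stand-in semantic features
geo = np.full(edges.shape, 5.0)              # stand-in geometric features
fused = edge_guided_fusion(sem, geo, edges)  # ~5.0 at the edge, ~2.0 elsewhere
```

The same gating scheme applies unchanged whether `edge_map` comes from the RGB image (early training) or from a predicted depth map (later training), which is what makes the progressive supervision switch straightforward in this simplified picture.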