🤖 AI Summary
This work addresses a common limitation of existing dense prediction methods: they neglect scene structure, often producing inconsistencies between geometry and semantics. To overcome this, the authors propose the SHED architecture, which integrates an explicit segmentation mechanism into an encoder-decoder framework to enable structure-aware dense prediction without requiring segmentation annotations. The key innovation is the novel use of hierarchical segment tokens to model structural priors, combined with token pooling/unpooling, geometric prior embedding, and end-to-end self-supervised training. This approach significantly improves depth boundary sharpness, semantic segmentation accuracy, and 3D reconstruction quality, while also producing interpretable, part-level structural representations.
📝 Abstract
Dense prediction infers per-pixel values from a single image and is fundamental to 3D perception and robotics. Although real-world scenes exhibit strong structure, existing methods treat the task as independent pixel-wise predictions, often resulting in structural inconsistencies. We propose SHED, a novel encoder-decoder architecture that enforces geometric priors explicitly by incorporating segmentation into dense prediction. Through bidirectional hierarchical reasoning, segment tokens are pooled hierarchically in the encoder and unpooled in the decoder to reverse the hierarchy. The model is supervised only at the final output, allowing the segment hierarchy to emerge without explicit segmentation supervision. SHED improves depth boundary sharpness and segment coherence, while demonstrating strong cross-domain generalization from synthetic to real-world environments. Its hierarchy-aware decoder better captures global 3D scene layouts, leading to improved semantic segmentation performance. Moreover, SHED enhances 3D reconstruction quality and reveals interpretable part-level structures that are often missed by conventional pixel-wise methods.
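The pooling/unpooling described above can be pictured with a minimal sketch. This is not the authors' implementation; it is a hypothetical NumPy illustration of the core idea, in which pixel features are averaged into segment tokens (pooling) and later broadcast back to pixels (unpooling). In SHED this pairing would be applied hierarchically across encoder and decoder levels; the function names and the hard pixel-to-segment assignment here are assumptions made for clarity.

```python
import numpy as np

def pool(features, assignment):
    """Average pixel features into segment tokens.

    features:   (N, D) array of pixel features
    assignment: (N, K) soft assignment of N pixels to K segment tokens
                (each row sums to 1)
    returns:    (K, D) array of segment-token features
    """
    # Normalize per token so each token is a weighted mean of its pixels.
    weights = assignment / (assignment.sum(axis=0, keepdims=True) + 1e-8)
    return weights.T @ features

def unpool(tokens, assignment):
    """Broadcast segment-token features back to the pixel grid."""
    return assignment @ tokens

rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 4))        # 6 "pixels", 4-dim features
# Hard assignment for illustration: pixels 0-2 -> segment 0, 3-5 -> segment 1.
hard = np.eye(2)[[0, 0, 0, 1, 1, 1]]
tokens = pool(feats, hard)             # (2, 4) segment tokens
recon = unpool(tokens, hard)           # each pixel receives its segment's mean
```

After unpooling, all pixels in a segment share one feature vector, which is what encourages piecewise-coherent predictions (e.g. sharp depth boundaries) relative to purely pixel-wise decoding.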