🤖 AI Summary
Indoor monocular semantic scene completion remains highly challenging due to complex spatial layouts and severe occlusions. Existing Transformer-based approaches often suffer from excessive memory consumption and limited capacity for fine-grained reconstruction. To address these limitations, this work proposes AdaSFormer, an adaptive serialized Transformer framework that effectively integrates global contextual dependencies with local geometric details. The architecture introduces learnable offset-based adaptive receptive fields, center-relative positional encoding, and convolution-modulated layer normalization to enhance both efficiency and reconstruction fidelity. Evaluated on the NYUv2 and Occ-ScanNet benchmarks, AdaSFormer achieves state-of-the-art performance, significantly outperforming current methods in semantic scene completion accuracy and detail preservation.
📝 Abstract
Indoor monocular semantic scene completion (MSSC) is notably more challenging than its outdoor counterpart due to complex spatial layouts and severe occlusions. While transformers are well suited for modeling global dependencies, their high memory cost and difficulty in reconstructing fine-grained details have limited their use in indoor MSSC. To address these limitations, we introduce AdaSFormer, a serialized transformer framework tailored for indoor MSSC. Our model features three key designs: (1) an Adaptive Serialized Transformer with learnable shifts that dynamically adjust receptive fields; (2) a Center-Relative Positional Encoding that captures the richness of spatial information relative to region centers; and (3) a Convolution-Modulated Layer Normalization that bridges heterogeneous representations between convolutional and transformer features. Extensive experiments on NYUv2 and Occ-ScanNet demonstrate that AdaSFormer achieves state-of-the-art performance. The code is publicly available at: https://github.com/alanWXZ/AdaSFormer.
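To make the idea of a center-relative positional encoding concrete, the sketch below expresses each voxel token's position as an offset from the center of its serialized window and maps that offset to sinusoidal features. This is a minimal illustration under stated assumptions, not the paper's implementation: the function name, the use of the window centroid as the reference point, and the sinusoidal mapping are all hypothetical choices for exposition.

```python
# Hypothetical sketch: center-relative positional encoding for one serialized
# window of voxel tokens. All names and design choices here are illustrative
# assumptions, not AdaSFormer's actual implementation.
import math

def center_relative_encoding(coords, dim=8):
    """coords: list of (x, y, z) voxel coordinates in one serialized window.

    Returns, per token, sinusoidal features of its offset from the window
    centroid (dim features per axis, 3 * dim total).
    """
    n = len(coords)
    # Centroid of the window serves as the shared reference point.
    cx = sum(c[0] for c in coords) / n
    cy = sum(c[1] for c in coords) / n
    cz = sum(c[2] for c in coords) / n
    encodings = []
    for x, y, z in coords:
        rel = (x - cx, y - cy, z - cz)  # center-relative offset
        feats = []
        for axis_val in rel:
            for i in range(dim // 2):
                freq = 1.0 / (10000 ** (2 * i / dim))
                feats.append(math.sin(axis_val * freq))  # even index: sine
                feats.append(math.cos(axis_val * freq))  # odd index: cosine
        encodings.append(feats)
    return encodings

enc = center_relative_encoding([(0, 0, 0), (2, 2, 2), (4, 4, 4)])
# The middle token coincides with the window centroid, so its offsets are
# zero: all its sine features are 0 and all its cosine features are 1.
```

Encoding offsets from a local center, rather than absolute coordinates, keeps the features invariant to where a window sits in the scene, which is one plausible reading of why a center-relative scheme helps with varied indoor layouts.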