🤖 AI Summary
This work addresses the limited generalizability of indoor layout estimation methods and the longstanding trade-off among accuracy, efficiency, and end-to-end trainability. We propose the first Transformer-based unified framework enabling fully end-to-end training and high-speed inference. Key innovations include a layout degeneration strategy that enhances data diversity while preserving Manhattan-world constraints; differentiable geometric losses that jointly optimize planar consistency and boundary sharpness, eliminating post-processing; and the integration of OneFormer-style task-conditioned queries, contrastive learning, and topology-aware transformations. Our method achieves state-of-the-art performance on the LSUN, Hedau, and Matterport3D-Layout datasets, with pixel-wise layout errors of 5.43%, 7.04%, and 4.03%, respectively, while maintaining an inference latency of only 114 ms. These gains in both accuracy and speed substantially enhance the method's practical utility for augmented reality interaction and large-scale 3D reconstruction.
📝 Abstract
We present Layout Anything, a transformer-based framework for indoor layout estimation that adapts OneFormer's universal segmentation architecture to geometric structure prediction. Our approach integrates OneFormer's task-conditioned queries and contrastive learning with two key modules: (1) a layout degeneration strategy that augments training data while preserving Manhattan-world constraints through topology-aware transformations, and (2) differentiable geometric losses that directly enforce planar consistency and sharp boundary predictions during training. By unifying these components in an end-to-end framework, the model eliminates complex post-processing pipelines while achieving high-speed inference at 114 ms. Extensive experiments demonstrate state-of-the-art performance across standard benchmarks, with a pixel error (PE) of 5.43% and corner error (CE) of 4.02% on LSUN, a PE of 7.04% (CE 5.17%) on Hedau, and a PE of 4.03% (CE 3.15%) on Matterport3D-Layout. The framework's combination of geometric awareness and computational efficiency makes it particularly well suited to augmented reality applications and large-scale 3D scene reconstruction.
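The abstract describes differentiable geometric losses that enforce planar consistency and boundary sharpness without post-processing. The paper's exact formulation is not reproduced here; the sketch below is only an illustration of the general idea under stated assumptions: a planar-consistency term that penalizes variance of per-pixel predictions inside each plane region, and a boundary-sharpness term that penalizes uncertain edge probabilities. All function names, inputs, and weights are hypothetical, not the authors' implementation.

```python
import numpy as np

def planar_consistency_loss(pred, plane_masks):
    # A planar surface should map to a smooth prediction, so penalize
    # the variance of predicted per-pixel values inside each plane mask.
    loss = 0.0
    for mask in plane_masks:
        vals = pred[mask]
        if vals.size > 1:
            loss += vals.var()
    return loss / len(plane_masks)

def boundary_sharpness_loss(edge_prob):
    # p * (1 - p) peaks at p = 0.5, so this term pushes predicted edge
    # probabilities toward 0 or 1, keeping layout boundaries crisp.
    return float(np.mean(edge_prob * (1.0 - edge_prob)))

def geometric_loss(pred, plane_masks, edge_prob, w_plane=1.0, w_edge=0.5):
    # Weighted combination of the two terms; both are differentiable,
    # so an analogous formulation in an autodiff framework would allow
    # end-to-end training without geometric post-processing.
    return (w_plane * planar_consistency_loss(pred, plane_masks)
            + w_edge * boundary_sharpness_loss(edge_prob))
```

A perfectly planar prediction with fully confident edges drives both terms to zero, which is the behavior the loss is meant to reward.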