SGFormer: Spherical Geometry Transformer for 360 Depth Estimation

📅 2024-04-23
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address severe polar distortion, structural degradation, and the difficulty of jointly modeling global and local geometry in 360° panoramic depth estimation, this paper proposes a Vision Transformer architecture incorporating spherical geometric priors. Our method introduces two key innovations: (1) the Spherical Prior Decoder (SPDecoder), the first of its kind, which explicitly encodes spherical isometry, continuity, and curvature awareness via bipolar reprojection, equivariant ring-based rotation embeddings, and geodesic-aware local distance encoding; and (2) a query-driven multi-scale conditional positional encoding scheme to enhance geometric awareness. Evaluated on multiple mainstream 360° depth datasets, our approach achieves significant improvements over state-of-the-art methods—particularly reducing polar-region depth error by a large margin—while markedly enhancing structural integrity and boundary sharpness of predicted depth maps.
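The "geodesic-aware local distance encoding" mentioned above rests on standard spherical geometry: equirectangular (ERP) pixels map to directions on the unit sphere, and the true distance between two pixels is the great-circle arc, not the pixel offset. A minimal NumPy sketch of that mapping (the function names are illustrative, not the paper's API):

```python
import numpy as np

def erp_to_unit_sphere(u, v, width, height):
    """Map equirectangular pixel coordinates (u: column, v: row)
    to unit direction vectors on the sphere."""
    lon = (u + 0.5) / width * 2.0 * np.pi - np.pi   # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi  # latitude in (-pi/2, pi/2)
    x = np.cos(lat) * np.cos(lon)
    y = np.cos(lat) * np.sin(lon)
    z = np.sin(lat)
    return np.stack([x, y, z], axis=-1)

def geodesic_distance(p, q):
    """Great-circle (arc) distance between unit vectors p and q."""
    cosang = np.clip(np.sum(p * q, axis=-1), -1.0, 1.0)
    return np.arccos(cosang)
```

This makes the polar-distortion problem concrete: a one-pixel horizontal step near the equator spans a much larger arc on the sphere than the same step near a pole, which is exactly the non-uniformity a geodesic-aware encoding accounts for.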

📝 Abstract
Panoramic distortion poses a significant challenge in 360 depth estimation, particularly pronounced at the north and south poles. Existing methods either adopt a bi-projection fusion strategy to remove distortions or model long-range dependencies to capture global structures, which can result in either unclear structure or insufficient local perception. In this paper, we propose a spherical geometry transformer, named SGFormer, to address the above issues, with an innovative step to integrate spherical geometric priors into vision transformers. To this end, we retarget the transformer decoder to a spherical prior decoder (termed SPDecoder), which endeavors to uphold the integrity of spherical structures during decoding. Concretely, we leverage bipolar re-projection, circular rotation, and curve local embedding to preserve the spherical characteristics of equidistortion, continuity, and surface distance, respectively. Furthermore, we present a query-based global conditional position embedding to compensate for spatial structure at varying resolutions. It not only boosts the global perception of spatial position but also sharpens the depth structure across different patches. Finally, we conduct extensive experiments on popular benchmarks, demonstrating our superiority over state-of-the-art solutions.
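The abstract does not detail the bipolar re-projection, but a common way to realize such an idea is to rotate the viewing sphere by 90° so that both poles land on the equator of a new panorama, where ERP distortion is minimal. A toy nearest-neighbor sketch under that assumption (not the paper's implementation):

```python
import numpy as np

def rotate_poles_to_equator(erp):
    """Resample an ERP image after a 90-degree rotation of the viewing
    sphere about the x-axis, which moves both poles onto the equator of
    the output panorama (nearest-neighbor sampling for brevity)."""
    H, W = erp.shape[:2]
    v, u = np.mgrid[0:H, 0:W]
    # target pixel -> direction on the rotated sphere
    lon = (u + 0.5) / W * 2 * np.pi - np.pi
    lat = np.pi / 2 - (v + 0.5) / H * np.pi
    x = np.cos(lat) * np.cos(lon)
    y = np.cos(lat) * np.sin(lon)
    z = np.sin(lat)
    # inverse rotation: 90 degrees about the x-axis
    y, z = z, -y
    # rotated direction -> source pixel in the original panorama
    src_lon = np.arctan2(y, x)
    src_lat = np.arcsin(np.clip(z, -1.0, 1.0))
    su = np.clip(((src_lon + np.pi) / (2 * np.pi) * W).astype(int), 0, W - 1)
    sv = np.clip(((np.pi / 2 - src_lat) / np.pi * H).astype(int), 0, H - 1)
    return erp[sv, su]
```

Content near the original poles, where ERP stretching is worst, is thus relocated to the low-distortion equatorial band of the re-projected image.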
Problem

Research questions and friction points this paper is trying to address.

Addresses panoramic distortion in 360 depth estimation
Integrates spherical geometric priors into vision transformers
Enhances global perception and local depth structure accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spherical geometry transformer integration
Bipolar re-projection technique
Query-based global conditional position embedding
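The circular-rotation idea listed above can be illustrated with a toy 1-D example (an illustration, not the paper's implementation): along each latitude ring of an ERP image, wrap-around padding makes a convolution respect the panorama's horizontal continuity, so the operation commutes with rotating the sphere about its polar axis:

```python
import numpy as np

def ring_conv(row, kernel):
    """Circular 1-D convolution over one latitude ring of an ERP image.
    Wrap-around padding joins the left and right seams, so the result is
    equivariant to a horizontal rotation of the sphere (a circular shift)."""
    k = len(kernel) // 2
    padded = np.pad(row, (k, k), mode="wrap")
    return np.convolve(padded, kernel, mode="valid")
```

Rotating the input ring and then filtering gives the same result as filtering first and then rotating, which is the continuity property an ordinary zero-padded convolution would break at the image seam.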
👥 Authors
Junsong Zhang
State Key Laboratory of Metastable Materials Science and Technology, Yanshan University
Zisong Chen
Beijing Jiaotong University, China
Chunyu Lin
Beijing Jiaotong University, China
Lang Nie
Beijing Jiaotong University, China
Zhijie Shen
Beijing Jiaotong University
Yao Zhao
Beijing Jiaotong University, China