Learning-Based Distance Estimation for 360° Single-Sensor Setups

📅 2025-06-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Monocular distance estimation in omnidirectional imaging remains challenging due to severe fisheye distortion and environmental variability. To address this, we propose the first end-to-end deep learning framework for 360° monocular ranging, operating directly on raw fisheye images without requiring precise camera calibration—thereby significantly improving cross-scene robustness. Our method integrates multi-dataset supervision from LOAF, ULM360, and our newly introduced Boat360 dataset, enabling effective learning of distortion-aware geometric priors. Evaluated on multiple real-world benchmarks, it outperforms both classical geometric approaches and state-of-the-art learning-based models, achieving an average error reduction of 21.3%. The framework strikes a practical balance between accuracy and generalization, making it suitable for resource-constrained applications such as low-cost robotic navigation and large-scale surveillance systems.

Technology Category

Application Category

📝 Abstract
Accurate distance estimation is a fundamental challenge in robotic perception, particularly in omnidirectional imaging, where traditional geometric methods struggle with lens distortions and environmental variability. In this work, we propose a neural network-based approach for monocular distance estimation using a single 360° fisheye lens camera. Unlike classical trigonometric techniques that rely on precise lens calibration, our method directly learns and infers the distance of objects from raw omnidirectional inputs, offering greater robustness and adaptability across diverse conditions. We evaluate our approach on three 360° datasets (LOAF, ULM360, and a newly captured dataset Boat360), each representing distinct environmental and sensor setups. Our experimental results demonstrate that the proposed learning-based model outperforms traditional geometry-based methods and other learning baselines in both accuracy and robustness. These findings highlight the potential of deep learning for real-time omnidirectional distance estimation, making our approach particularly well-suited for low-cost applications in robotics, autonomous navigation, and surveillance.
Problem

Research questions and friction points this paper is trying to address.

Estimating distances accurately in 360° imaging with lens distortions
Overcoming limitations of geometric methods in omnidirectional perception
Enabling robust monocular distance estimation for low-cost robotics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural network for 360° monocular distance estimation
Learns distances directly from raw fisheye inputs
Outperforms traditional geometry-based methods in robustness