🤖 AI Summary
This work addresses navigation and exploration for autonomous mobile robots by proposing the Signed Directional Distance Function (SDDF)—a differentiable geometric representation that maps a 3D position and viewing direction to the signed distance along that direction to the nearest surface. To overcome the large distance discontinuities and unstable gradients of conventional implicit functions near obstacle boundaries, SDDF combines explicit ellipsoidal geometric priors with implicit neural residual modeling, achieving both boundary robustness and high-fidelity reconstruction. The method enables end-to-end differentiable view prediction, balancing rendering efficiency and reconstruction accuracy. Evaluated on standard benchmarks, it is competitive with state-of-the-art neural implicit scene models in reconstruction accuracy and rendering efficiency, while enabling differentiable view prediction for robot trajectory optimization.
📝 Abstract
Dense geometric environment representations are critical for autonomous mobile robot navigation and exploration. Recent work shows that implicit continuous representations of occupancy, signed distance, or radiance learned using neural networks offer advantages in reconstruction fidelity, efficiency, and differentiability over explicit discrete representations based on meshes, point clouds, and voxels. In this work, we explore a directional formulation of signed distance, called the signed directional distance function (SDDF). Unlike the signed distance function (SDF) and similar to neural radiance fields (NeRF), SDDF takes a position and viewing direction as input. Like SDF and unlike NeRF, SDDF directly provides the distance to the observed surface along the direction, rather than integrating along the view ray, allowing efficient view synthesis. To learn and predict scene-level SDDF efficiently, we develop a differentiable hybrid representation that combines explicit ellipsoid priors and implicit neural residuals. This approach allows the model to handle large distance discontinuities around obstacle boundaries while preserving the capacity for dense, high-fidelity prediction. We show that SDDF is competitive with state-of-the-art neural implicit scene models in terms of reconstruction accuracy and rendering efficiency, while allowing differentiable view prediction for robot trajectory optimization.
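To make the directed-distance idea concrete, here is a minimal sketch of what the explicit ellipsoid prior computes: the signed distance from a point along a viewing direction to an axis-aligned ellipsoid, via ray–quadric intersection. This is an illustrative assumption, not the paper's implementation (the actual method combines such priors with a learned neural residual, and the exact sign convention may differ); the function name and interface are hypothetical.

```python
import numpy as np

def ellipsoid_sddf(p, d, center, radii):
    """Directed distance from point p along direction d to an
    axis-aligned ellipsoid with the given center and semi-axes.

    Returns the signed parameter t such that p + t*d lies on the
    surface: positive when the surface is ahead of p, negative when
    p is inside the ellipsoid, +inf when the ray misses it.
    (Hypothetical sketch of an explicit ellipsoid prior.)
    """
    # Quadric matrix of the ellipsoid: (x-c)^T A (x-c) = 1
    A = np.diag(1.0 / np.asarray(radii, dtype=float) ** 2)
    q = np.asarray(p, dtype=float) - np.asarray(center, dtype=float)
    d = np.asarray(d, dtype=float)
    d = d / np.linalg.norm(d)  # distances measured along a unit direction

    # Substitute the ray p + t*d into the quadric -> quadratic in t
    a = d @ A @ d
    b = 2.0 * (q @ A @ d)
    c = q @ A @ q - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return np.inf  # ray misses the ellipsoid entirely
    sq = np.sqrt(disc)
    # Smaller root: first surface crossing along d (negative inside)
    return (-b - sq) / (2.0 * a)
```

For example, querying from `p = (-2, 0, 0)` toward the unit sphere at the origin along `d = (1, 0, 0)` returns `1.0`, the distance to the surface point `(-1, 0, 0)`; the SDDF thus gives the rendered depth in one evaluation, with no integration along the ray. Near object silhouettes this value jumps discontinuously to the background distance, which is exactly the behavior the neural residual in the hybrid representation must handle.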