Differentiable Composite Neural Signed Distance Fields for Robot Navigation in Dynamic Indoor Environments

📅 2025-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of retraining neural signed distance fields (SDFs) and enabling real-time navigation in dynamic indoor environments, this paper proposes a differentiable compositional SDF framework relying solely on onboard RGB-D sensors. Methodologically, it introduces a novel two-stage trajectory optimization: Stage I queries the robot’s body SDF along the predicted trajectory for efficient local obstacle avoidance; Stage II dynamically aligns and fuses SDFs of visible scene components to generate high-fidelity collision costs and gradients. Key technical contributions include differentiable SDF modeling, point-cloud-driven online reconstruction, and SDF composition. Evaluated in iGibson 2.0, the method achieves a 98% navigation success rate—14.4% higher than baseline approaches—while maintaining comparable planning latency. Furthermore, it has been successfully deployed in real-world dynamic indoor settings.

📝 Abstract
Neural Signed Distance Fields (SDFs) provide a differentiable environment representation that readily yields collision checks and well-defined gradients for robot navigation tasks. However, updating neural SDFs as the scene evolves entails re-training, which is tedious, time-consuming, and inefficient, making it unsuitable for robot navigation with a limited field of view in dynamic environments. Towards this objective, we propose a compositional framework of neural SDFs to solve robot navigation in indoor environments using only an onboard RGB-D sensor. Our framework embodies a dual-mode procedure for trajectory optimization, with the modes using complementary methods of modeling collision costs and collision avoidance gradients. The primary stage queries the robot body's SDF, swept along the route to goal, at the obstacle point cloud, enabling swift local optimization of trajectories. The secondary stage infers the visible scene's SDF by aligning and composing the SDF representations of its constituents, providing better-informed costs and gradients for trajectory optimization. The dual-mode procedure combines the best of both stages, achieving a success rate of 98%, 14.4% higher than baseline with comparable amortized plan time on iGibson 2.0. We also demonstrate its effectiveness in adapting to real-world indoor scenarios.
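The core idea in the abstract — composing a scene SDF from per-component SDFs and differentiating a collision cost through it — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the neural component SDFs are replaced by analytic sphere SDFs, rigid alignment is assumed already applied, and the autograd gradient is stood in for by central finite differences. All function names here are hypothetical.

```python
import numpy as np

def sphere_sdf(center, radius):
    """Analytic stand-in for one scene component's neural SDF."""
    def sdf(p):
        return np.linalg.norm(p - center, axis=-1) - radius
    return sdf

def compose_sdf(component_sdfs):
    """Scene SDF as the pointwise minimum over aligned component SDFs."""
    def sdf(p):
        return np.min(np.stack([f(p) for f in component_sdfs]), axis=0)
    return sdf

def collision_cost(traj, scene_sdf, margin=0.3):
    """Hinge penalty on trajectory points closer to obstacles than `margin`."""
    d = scene_sdf(traj)
    return np.sum(np.maximum(margin - d, 0.0) ** 2)

def cost_gradient(traj, scene_sdf, margin=0.3, eps=1e-4):
    """Central finite differences as a stand-in for the autograd gradient."""
    g = np.zeros_like(traj)
    for i in range(traj.shape[0]):
        for j in range(traj.shape[1]):
            tp, tm = traj.copy(), traj.copy()
            tp[i, j] += eps
            tm[i, j] -= eps
            g[i, j] = (collision_cost(tp, scene_sdf, margin)
                       - collision_cost(tm, scene_sdf, margin)) / (2 * eps)
    return g

# Two obstacles in a 2-D scene; a straight-line trajectory that grazes the first.
scene = compose_sdf([sphere_sdf(np.array([1.0, 0.0]), 0.5),
                     sphere_sdf(np.array([3.0, 1.0]), 0.5)])
traj = np.linspace(np.array([0.0, 0.0]), np.array([4.0, 1.0]), 20)
```

A single gradient step, `traj - step * cost_gradient(traj, scene)`, pushes the in-collision waypoints outward along the composed SDF's gradient and lowers the cost, which is the mechanism the secondary stage exploits for trajectory optimization.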
Problem

Research questions and friction points this paper is trying to address.

Dynamic indoor robot navigation
Efficient neural SDF updates
Onboard RGB-D sensor usage
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differentiable Composite Neural SDFs
Dual Mode Trajectory Optimization
Onboard RGB-D Sensor Integration