🤖 AI Summary
Existing vision-language model (VLM)-based traversability estimation methods rely on handcrafted prompts, exhibit poor generalization, and require coupling with external planners to generate trajectories. This work proposes SwarmDiffusion: a lightweight, end-to-end diffusion model that takes only a single RGB image as input and jointly outputs a traversability map and a smooth, kinematically feasible trajectory—enabling planner-free autonomous navigation. Its key innovations include: (1) a label-free, prompt-free trajectory generation pipeline that implicitly learns motion priors via random waypoint sampling and Bézier curve smoothing; and (2) a VLM-supervised conditional diffusion framework optimized jointly via differentiable rendering and geometric regularization, conditioned on compact robot morphology embeddings. Evaluated across diverse indoor/outdoor scenes and on quadrupedal and aerial robots, SwarmDiffusion achieves 80–100% navigation success rates, with 0.09 s per-frame inference latency and platform adaptation requiring only 500 annotated frames.
📝 Abstract
Visual traversability estimation is critical for autonomous navigation, but existing VLM-based methods rely on hand-crafted prompts, generalize poorly across embodiments, and output only traversability maps, leaving trajectory generation to slow external planners. We propose SwarmDiffusion, a lightweight end-to-end diffusion model that jointly predicts traversability and generates a feasible trajectory from a single RGB image. To remove the need for annotated or planner-produced paths, we introduce a planner-free trajectory construction pipeline based on randomized waypoint sampling, Bézier smoothing, and regularization enforcing connectivity, safety, directionality, and path thinness. This enables learning stable motion priors without demonstrations. SwarmDiffusion leverages VLM-derived supervision without prompt engineering and conditions the diffusion process on a compact embodiment state, producing physically consistent, traversable paths that transfer across different robot platforms. Across indoor environments and two embodiments (quadruped and aerial), the method achieves 80–100% navigation success and 0.09 s per-frame inference, and adapts to a new robot using only ~500 additional visual samples. It generalizes reliably to unseen environments in simulation and real-world trials, offering a scalable, prompt-free approach to unified traversability reasoning and trajectory generation.
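The planner-free trajectory construction described in the abstract (randomized waypoint sampling followed by Bézier smoothing) can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the function names, the lateral-jitter sampling scheme, and all parameters are assumptions for demonstration, and the paper's additional regularizers (connectivity, safety, directionality, thinness) are omitted.

```python
import numpy as np
from math import comb

def sample_waypoints(start, goal, n_mid=4, spread=0.3, rng=None):
    """Randomly sample intermediate waypoints between start and goal.

    Interior points are spaced along the straight segment and jittered
    laterally -- a simple stand-in for randomized waypoint sampling.
    """
    rng = np.random.default_rng(rng)
    t = np.linspace(0.0, 1.0, n_mid + 2)[1:-1, None]       # interior fractions
    line = (1 - t) * start + t * goal                       # points on the segment
    jitter = rng.uniform(-spread, spread, size=line.shape)  # random offsets
    return np.vstack([start, line + jitter, goal])

def bezier_smooth(control_points, n_samples=50):
    """Evaluate a Bezier curve over the control points (Bernstein form)."""
    cp = np.asarray(control_points, dtype=float)
    n = len(cp) - 1
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    return sum(
        comb(n, k) * (1 - t) ** (n - k) * t ** k * cp[k]
        for k in range(n + 1)
    )

# Usage: a smooth path whose endpoints coincide with start and goal.
path = bezier_smooth(sample_waypoints(np.array([0.0, 0.0]),
                                      np.array([5.0, 3.0]), rng=0))
```

The Bézier evaluation guarantees the smoothed path interpolates the first and last waypoints, which is why this construction can yield kinematically smooth paths without any demonstrations or external planner.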