🤖 AI Summary
This work addresses the problem of self-supervised segmentation of drivable vehicle trajectories from monocular images in complex urban scenes, without requiring manual annotations or explicit road modeling. Leveraging large-scale driving videos, the method employs structure-from-motion (SfM) to recover ego-vehicle trajectories and projects them onto the ground plane to generate traversable-region masks, which serve as self-supervisory signals for training a deep segmentation network. This enables the prediction of motion-conditioned drivable paths from a single RGB image. The key contribution is the first demonstration of annotation-free trajectory segmentation via ego-motion distillation, which implicitly captures scene layout and intersection structure while supporting cross-camera and cross-platform transfer. The approach is validated on NuScenes and adapted, with lightweight fine-tuning, to an electric scooter platform, demonstrating strong generalization and practical applicability.
📝 Abstract
We present a scalable self-supervised approach for segmenting feasible vehicle trajectories from monocular images for autonomous driving in complex urban environments. Leveraging large-scale dashcam videos, we treat recorded ego-vehicle motion as implicit supervision: we recover camera trajectories via monocular structure-from-motion and project them onto the ground plane to generate spatial masks of traversed regions without manual annotation. These automatically generated labels are used to train a deep segmentation network that predicts motion-conditioned path proposals from a single RGB image at run time, without explicit modeling of roads or lane markings. Trained on diverse, unconstrained internet data, the model implicitly captures scene layout, lane topology, and intersection structure, and generalizes across varying camera configurations. We evaluate our approach on NuScenes, demonstrating reliable trajectory prediction, and further show transfer to an electric scooter platform through lightweight fine-tuning. Our results indicate that large-scale ego-motion distillation yields structured and generalizable path proposals beyond the single demonstrated trajectory, enabling trajectory hypothesis estimation via image segmentation.
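The label-generation step (projecting the recovered ego trajectory onto the ground plane and rasterizing it into a traversed-region mask) can be sketched as follows. This is a minimal illustration under simplifying assumptions: a pinhole camera with known intrinsics, a flat ground plane at a fixed camera height, and a hand-picked corridor half-width; the paper's actual pipeline recovers trajectories with SfM and may handle scale and ground-plane estimation differently.

```python
import numpy as np

def project_trajectory_mask(traj_xz, K, cam_height, img_hw, half_width=1.0):
    """Rasterize ego-trajectory ground points into a binary image mask.

    traj_xz    : (N, 2) future ego positions on the ground plane in the
                 camera frame (x = right, z = forward), in metres.
    K          : (3, 3) pinhole camera intrinsics.
    cam_height : camera height above the ground plane, in metres
                 (ground points lie at y = cam_height in camera coords).
    img_hw     : (H, W) size of the output mask.
    half_width : assumed half-width of the traversed corridor, in metres
                 (illustrative; not a value from the paper).
    """
    H, W = img_hw
    mask = np.zeros((H, W), dtype=np.uint8)
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    for x, z in traj_xz:
        if z <= 0.5:                      # skip points behind / too near the camera
            continue
        # mark the left/right corridor edges at this depth
        for xe in (x - half_width, x + half_width):
            u = fx * xe / z + cx
            v = fy * cam_height / z + cy  # perspective projection of a ground point
            if 0 <= int(u) < W and 0 <= int(v) < H:
                mask[int(v), int(u)] = 1
    # fill horizontally between the marked edges on each image row
    for r in range(H):
        cols = np.flatnonzero(mask[r])
        if cols.size >= 2:
            mask[r, cols[0]:cols[-1] + 1] = 1
    return mask

# Example: a straight-ahead trajectory sampled every 2 m from 4 m to 30 m.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
traj = np.array([[0.0, z] for z in range(4, 31, 2)])
mask = project_trajectory_mask(traj, K, cam_height=1.5, img_hw=(480, 640))
```

Because of perspective, the rasterized corridor is wide near the bottom of the image and narrows toward the horizon, which is exactly the wedge-shaped traversed-region label the segmentation network is trained to reproduce.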