🤖 AI Summary
Autonomous vehicles operating in unstructured, high-risk battlefield environments lack semantic segmentation models capable of identifying navigable regions. Method: This work introduces WarNav, the first navigation-oriented semantic segmentation benchmark designed specifically for conflict scenarios, bridging the data gap between urban driving datasets and extreme operational environments. Leveraging open-source DATTALION imagery, the authors construct a heterogeneous battlefield dataset and propose a zero-shot cross-domain adaptation framework that requires no target-domain annotations, then systematically evaluate how state-of-the-art segmentation models generalize under domain shift. Contributions: (1) release of the first battlefield navigable-region segmentation benchmark; (2) empirical characterization of how mainstream models degrade in extreme environments; (3) validation of effective low-annotation cross-domain transfer strategies. The results provide foundational data and methodological insight for developing robust navigation algorithms in high-risk scenarios.
📝 Abstract
We introduce WarNav, a novel real-world dataset constructed from images of the open-source DATTALION repository, specifically tailored to enable the development and benchmarking of semantic segmentation models for autonomous ground vehicle navigation in unstructured, conflict-affected environments. This dataset addresses a critical gap between conventional urban driving resources and the unique operational scenarios encountered by unmanned systems in hazardous, damaged war zones. We detail the methodological challenges encountered, ranging from data heterogeneity to ethical considerations, providing guidance for future efforts that target extreme operational contexts. To establish performance references, we report baseline results on WarNav for several state-of-the-art semantic segmentation models trained on structured urban scenes. We further analyse the impact of the training-data environment and take a first step towards effective navigation in challenging environments under the constraint that no annotations are available for the target images. Our goal is to foster impactful research that enhances the robustness and safety of autonomous vehicles in high-risk scenarios while remaining frugal with annotated data.
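The abstract does not state which metric the baseline results use; mean Intersection-over-Union (mIoU) is the standard choice for semantic segmentation benchmarks. A minimal sketch of how such a score could be computed, assuming a hypothetical two-class label scheme (0 = non-navigable, 1 = navigable, 255 = ignore), which is not specified in the paper:

```python
import numpy as np

def mean_iou(pred, gt, num_classes=2, ignore_index=255):
    """Mean IoU over valid pixels; hypothetical label scheme, not the paper's.

    Classes absent from both prediction and ground truth are skipped
    so they do not distort the average.
    """
    valid = gt != ignore_index          # mask out ignored pixels
    pred, gt = pred[valid], gt[valid]
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                   # class present in at least one mask
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else float("nan")

# Toy 4x4 masks standing in for a model prediction and its annotation.
gt   = np.array([[1, 1, 0, 0]] * 4)
pred = np.array([[1, 1, 1, 0]] * 4)
print(mean_iou(pred, gt))  # class 1: 8/12, class 0: 4/8 -> mean = 7/12
```

Reporting per-class IoU alongside the mean is common practice, since a model can score well overall while failing badly on the safety-critical navigable class.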