🤖 AI Summary
Existing open-vocabulary semantic segmentation (OVSS) methods generalize poorly across viewpoints and modalities, and no unified, realistic benchmark exists to evaluate them. Method: We introduce the first open-vocabulary segmentation benchmark that spans aerial and ground viewpoints and RGB-thermal modalities, enabling evaluation of zero-shot cross-view and cross-sensor generalization. We propose a systematic robustness evaluation framework that quantitatively disentangles the effects of viewpoint disparity, modality shift, and text-vision alignment on zero-shot transfer. Building on vision-language foundation models such as CLIP, the protocol combines registered multi-source imagery with fine-grained annotations, ensuring reproducibility and extensibility. Contribution/Results: Comprehensive experiments expose critical performance bottlenecks in state-of-the-art OVSS models, establishing the first deployment-oriented, real-world benchmark for open-vocabulary segmentation, designed to advance embodied intelligent perception.
📝 Abstract
Open-vocabulary semantic segmentation (OVSS) assigns a label to each pixel in an image based on textual descriptions, leveraging vision-language models such as CLIP. However, these models face significant challenges in cross-domain generalization, which limits their practical efficacy in real-world applications. Embodied AI systems are transforming autonomous navigation for ground vehicles and drones by enhancing their perception abilities. In this study, we present AetherVision-Bench, a benchmark for multi-angle segmentation across aerial and ground perspectives, which enables an extensive evaluation of performance across different viewing angles and sensor modalities. We assess state-of-the-art OVSS models on the proposed benchmark and investigate the key factors that impact the performance of zero-shot transfer models. Our work pioneers the creation of a robustness benchmark, offering valuable insights and establishing a foundation for future research.
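The benchmark's core protocol, as described above, is zero-shot evaluation of OVSS models on splits defined by viewpoint (aerial vs. ground) and sensor modality (RGB vs. thermal), with per-split segmentation quality compared to isolate cross-view and cross-sensor effects. The sketch below is a minimal, hypothetical evaluation harness illustrating that idea: the split names, the `segment` callable, and the synthetic demo data are assumptions for illustration, not the benchmark's actual API; only the per-split confusion-matrix and mIoU bookkeeping follows standard segmentation practice.

```python
# Minimal sketch: per-split mIoU for zero-shot cross-view / cross-modality evaluation.
# Split names, the segment() callable, and the demo data are illustrative assumptions.
import numpy as np
from collections import defaultdict

def update_confusion(conf, pred, gt, num_classes, ignore_index=255):
    """Accumulate a (num_classes x num_classes) confusion matrix in place."""
    mask = gt != ignore_index
    idx = num_classes * gt[mask].astype(int) + pred[mask].astype(int)
    conf += np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def miou(conf):
    """Mean IoU from a confusion matrix (rows = ground truth, cols = prediction)."""
    inter = np.diag(conf)
    union = conf.sum(0) + conf.sum(1) - inter
    iou = inter / np.maximum(union, 1)
    return float(np.mean(iou[union > 0]))

def evaluate(samples, segment, num_classes):
    """samples: iterable of (image, gt_mask, view, modality); segment: zero-shot model callable."""
    confs = defaultdict(lambda: np.zeros((num_classes, num_classes), dtype=np.int64))
    for image, gt, view, modality in samples:
        pred = segment(image)  # e.g. a CLIP-based OVSS model's per-pixel class prediction
        update_confusion(confs[(view, modality)], pred, gt, num_classes)
    return {split: miou(c) for split, c in confs.items()}

if __name__ == "__main__":
    # Synthetic demo over hypothetical aerial/ground x RGB/thermal splits with random masks.
    rng = np.random.default_rng(0)
    num_classes = 5
    demo = [(None, rng.integers(0, num_classes, (64, 64)), v, m)
            for v in ("aerial", "ground") for m in ("rgb", "thermal") for _ in range(3)]
    results = evaluate(demo, lambda img: rng.integers(0, num_classes, (64, 64)), num_classes)
    for split, score in sorted(results.items()):
        print(f"{split}: mIoU = {score:.3f}")
```

Comparing the resulting per-split scores (e.g., ground-RGB vs. aerial-thermal) is one simple way to quantify how much viewpoint disparity and modality shift each degrade zero-shot transfer.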