🤖 AI Summary
This paper addresses three key challenges in satellite image–driven autonomous visual search in the wild: insufficient target representation, cross-modal semantic misalignment between remote sensing and natural images, and localization errors induced by large-model hallucinations. To this end, we propose an uncertainty-aware test-time adaptation (TTA) framework. Methodologically: (1) we design a pretraining paradigm that explicitly aligns a dedicated remote sensing image encoder with the CLIP visual encoder to mitigate modality gaps; (2) we introduce a spatial Poisson point process–inspired, uncertainty-weighted TTA mechanism to dynamically calibrate CLIP predictions online during search. Evaluated on our newly constructed ecological visual search dataset, our planner achieves up to a 9.7% performance gain over baselines, matching state-of-the-art vision-language models. Furthermore, the framework has been successfully deployed in a UAV hardware-in-the-loop simulation system.
📝 Abstract
To perform autonomous visual search for environmental monitoring, a robot may leverage satellite imagery as a prior map. This can help inform coarse, high-level search and exploration strategies, even when such images lack sufficient resolution to allow fine-grained, explicit visual recognition of targets. However, using satellite images to direct visual search poses several challenges. For one, targets that are unseen in satellite images are underrepresented (compared to ground images) in most existing datasets, and thus vision models trained on these datasets fail to reason effectively based on indirect visual cues. Furthermore, approaches that leverage large Vision Language Models (VLMs) for generalization may yield inaccurate outputs due to hallucination, leading to inefficient search. To address these challenges, we introduce Search-TTA, a multimodal test-time adaptation framework that can accept text and/or image input. First, we pretrain a remote sensing image encoder to align with CLIP's visual encoder, outputting probability distributions of target presence used for visual search. Second, our framework dynamically refines CLIP's predictions during search using a test-time adaptation mechanism. Through a feedback loop inspired by Spatial Poisson Point Processes, gradient updates (weighted by uncertainty) correct potentially inaccurate predictions and improve search performance. To validate Search-TTA's performance, we curate a visual search dataset based on internet-scale ecological data. We find that Search-TTA improves planner performance by up to 9.7%, particularly in cases with poor initial CLIP predictions. It also achieves performance comparable to state-of-the-art VLMs. Finally, we deploy Search-TTA on a real UAV via hardware-in-the-loop testing, in which a large-scale simulation provides the onboard sensing for its operation.
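To give a concrete intuition for the feedback loop described above, the following is a minimal sketch of an uncertainty-weighted, Poisson point-process-inspired test-time update. It is not the paper's implementation: the per-cell intensity parameterization, the inverse-uncertainty weighting, and the function names (`sppp_nll`, `tta_step`) are all illustrative assumptions. The idea is that observed detection counts in visited map cells are scored under a Poisson likelihood against the predicted target-presence intensity, and a gradient step (scaled down in high-uncertainty cells) corrects the prediction online.

```python
import numpy as np

def sppp_nll(intensity, counts):
    """Spatial Poisson point-process negative log-likelihood per cell
    (up to a constant): lambda_i - k_i * log(lambda_i)."""
    return intensity - counts * np.log(intensity)

def tta_step(logits, counts, uncertainty, lr=0.1):
    """One illustrative test-time adaptation step on per-cell logits.

    logits      : predicted target-presence scores for visited cells
    counts      : detections actually observed in those cells
    uncertainty : per-cell uncertainty estimate (higher = less trusted)
    """
    # Map logits to a Poisson intensity in (0, 1), clipped for stability.
    lam = 1.0 / (1.0 + np.exp(-logits))
    lam = np.clip(lam, 1e-6, 1.0 - 1e-6)
    # Chain rule: dNLL/dlam = 1 - k/lam, dlam/dlogit = lam * (1 - lam).
    grad = (1.0 - counts / lam) * lam * (1.0 - lam)
    # Down-weight the update where the model is uncertain.
    weight = 1.0 / (1.0 + uncertainty)
    return logits - lr * weight * grad
```

With this update, cells the robot visits without finding the target have their predicted intensity pushed down, while cells with detections are pushed up; in the paper this correction is applied through gradient updates to the model rather than to raw per-cell scores.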