🤖 AI Summary
Motivated by three core challenges facing space robotics (scaling ground-in-the-loop operations, generalizing prior knowledge to novel environments, and handling multi-modality in tasks and sensor data), this paper takes a first step toward a foundation model for planetary surface exploration. Methodologically, the authors programmatically augment three extraterrestrial databases with fine-grained language annotations, yielding a synthetic dataset of visual-question-answer and visual instruction-following tuples; they then fine-tune a pre-trained LLaVA-13B checkpoint on this dataset to adapt the Vision-Language Model (VLM) to extraterrestrial visual semantics. Ablations show that co-finetuning the language backbone together with the vision-language adapter is key to effective adaptation, while mixing in roughly 20% of the pre-training data safeguards against catastrophic forgetting. Experiments demonstrate improved zero-shot performance on unseen extraterrestrial task types relative to state-of-the-art VLMs, positioning the model as a step toward a scalable, generalizable, and multimodal AI foundation for autonomous space robots.
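To make the programmatic augmentation step concrete, the sketch below shows one way an annotated extraterrestrial image could be converted into a LLaVA-style visual-question-answer record. The question templates and annotation fields (`terrain_classes`, `caption`) are hypothetical placeholders for illustration, not the authors' actual pipeline.

```python
# Illustrative sketch (not the paper's released code): pairing annotated
# extraterrestrial imagery with templated question-answer tuples in the
# LLaVA conversation format. Field names and templates are assumptions.
import json
import random

QUESTION_TEMPLATES = [
    "What terrain features are visible in this image?",
    "Describe the geological context of this scene.",
    "Does this scene contain a potential site of scientific interest?",
]

def make_vqa_tuple(image_id: str, annotations: dict) -> dict:
    """Convert one annotated image into a LLaVA-style instruction-following record."""
    question = random.choice(QUESTION_TEMPLATES)
    # Hypothetical annotation fields: terrain classes and a free-text caption.
    answer = (
        f"The scene shows {', '.join(annotations['terrain_classes'])}. "
        f"{annotations['caption']}"
    )
    return {
        "id": image_id,
        "image": f"{image_id}.jpg",
        "conversations": [
            {"from": "human", "value": f"<image>\n{question}"},
            {"from": "gpt", "value": answer},
        ],
    }

if __name__ == "__main__":
    example = make_vqa_tuple(
        "mars_000123",
        {"terrain_classes": ["bedrock", "regolith"],
         "caption": "Layered outcrop near a crater rim."},
    )
    print(json.dumps(example, indent=2))
```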
📝 Abstract
Foundation Models (FMs), e.g., large language models, possess attributes of intelligence which offer promise to endow a robot with the contextual understanding necessary to navigate complex, unstructured tasks in the wild. We see three core challenges in the future of space robotics that motivate building an FM for the space robotics community: 1) Scalability of ground-in-the-loop operations; 2) Generalizing prior knowledge to novel environments; and 3) Multi-modality in tasks and sensor data. As a first step towards a space foundation model, we programmatically augment three extraterrestrial databases with fine-grained language annotations inspired by the sensory reasoning necessary to, e.g., identify a site of scientific interest on Mars, building a synthetic dataset of visual-question-answer and visual instruction-following tuples. We fine-tune a pre-trained LLaVA 13B checkpoint on our augmented dataset to adapt a Vision-Language Model (VLM) to the visual semantic features in an extraterrestrial environment, demonstrating FMs as a tool for specialization and enhancing a VLM's zero-shot performance on unseen task types in comparison to state-of-the-art VLMs. Ablation studies show that fine-tuning the language backbone and vision-language adapter in concert is key to facilitating adaptation, while a small percentage, e.g., 20%, of the pre-training data can be used to safeguard against catastrophic forgetting.
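The two ablation findings (co-finetuning the language backbone together with the vision-language adapter, and replaying a small fraction of the pre-training data) could be realized along the following lines. This is a minimal PyTorch sketch assuming a LLaVA-style model whose vision-encoder parameters are identifiable by name; it is not the authors' training code, and the parameter-name convention and dataset objects are assumptions.

```python
# Minimal sketch of the two ablation-relevant choices, assuming a PyTorch
# LLaVA-style model with a frozen CLIP-style vision encoder whose parameter
# names contain "vision_tower" (an assumption, not the paper's code).
import torch
from torch.utils.data import ConcatDataset, Subset

def set_trainable(model: torch.nn.Module) -> None:
    """Co-finetune: train the language backbone and the vision-language adapter,
    keep the vision encoder frozen."""
    for name, param in model.named_parameters():
        if "vision_tower" in name:   # frozen vision encoder
            param.requires_grad = False
        else:                        # language backbone + vision-language adapter
            param.requires_grad = True

def build_mixed_dataset(space_dataset, pretrain_dataset, replay_fraction: float = 0.2):
    """Mix a fraction (e.g., 20%) of the original visual-instruction pre-training
    data into the extraterrestrial fine-tuning set to guard against catastrophic
    forgetting."""
    n_replay = int(replay_fraction * len(pretrain_dataset))
    replay_idx = torch.randperm(len(pretrain_dataset))[:n_replay].tolist()
    return ConcatDataset([space_dataset, Subset(pretrain_dataset, replay_idx)])
```

In this sketch, replay is implemented as a simple random subsample of the pre-training set concatenated with the new data; other mixing schedules (e.g., per-batch interleaving) would serve the same purpose.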