🤖 AI Summary
This work addresses the poor generalization and training instability in vision-and-language navigation caused by tight coupling between perception and control. The authors propose a modular architecture that decouples these components: visual observations are encoded using a frozen vision-language model, and a lightweight adapter aligns these features into the latent space of a pretrained expert policy. This reformulates end-to-end policy learning as a supervised latent-space alignment task. The approach enables cross-modal and cross-environment reuse of expert behaviors, achieving strong in-distribution performance on indoor navigation tasks and robust zero-shot generalization to unseen environments, lighting conditions, and viewpoints. Moreover, it incurs minimal inference overhead.
📝 Abstract
We propose LCLA (Language-Conditioned Latent Alignment), a framework for vision-language navigation that learns modular perception-action interfaces by aligning sensory observations to a latent representation of an expert policy. The expert is first trained with privileged state information, inducing a latent space sufficient for control, after which its latent interface and action head are frozen. A lightweight adapter is then trained to map raw visual-language observations, via a frozen vision-language model, into the expert's latent space, reducing the problem of visuomotor learning to supervised latent alignment rather than end-to-end policy optimization. This decoupling enforces a stable contract between perception and control, enabling expert behavior to be reused across sensing modalities and environmental variations. We instantiate LCLA and evaluate it on a vision-language indoor navigation task, where aligned latent spaces yield strong in-distribution performance and robust zero-shot generalization to unseen environments, lighting conditions, and viewpoints while remaining lightweight at inference time.
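The core training objective described above, regressing adapter outputs onto the frozen expert's latents rather than optimizing a policy end-to-end, can be sketched in a few lines. The example below is a minimal illustration, not the paper's implementation: the "frozen expert" and "frozen VLM features" are simulated with fixed random linear maps, and the adapter is a single linear layer fit by gradient descent on a mean-squared latent-alignment loss. All dimensions and names are hypothetical.

```python
import numpy as np

# Hypothetical dimensions (illustrative only): VLM feature dim and expert latent dim.
rng = np.random.default_rng(0)
d_obs, d_latent, n = 32, 8, 256

# Stand-ins for the frozen components: a fixed map playing the role of the
# expert's latent encoder, and random vectors playing the role of frozen
# vision-language-model features for the same underlying states.
W_expert = rng.normal(size=(d_obs, d_latent))  # frozen expert latent interface (simulated)
obs_features = rng.normal(size=(n, d_obs))     # frozen VLM features (simulated)
target_latents = obs_features @ W_expert       # supervision targets: expert latents

# Lightweight adapter: a single linear layer trained by supervised latent
# alignment (MSE regression onto the expert's latent space) -- no policy
# gradients, no reward signal, matching the paper's decoupling idea.
W_adapter = np.zeros((d_obs, d_latent))
lr = 0.1
for _ in range(500):
    pred = obs_features @ W_adapter
    grad = obs_features.T @ (pred - target_latents) / n  # gradient of 0.5 * MSE
    W_adapter -= lr * grad

mse = float(np.mean((obs_features @ W_adapter - target_latents) ** 2))
print(f"latent alignment MSE: {mse:.6f}")
```

At deployment, the aligned adapter output would be fed to the expert's frozen action head, so inference adds only the adapter's forward pass on top of the VLM encoding, consistent with the lightweight-inference claim.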