LCLA: Language-Conditioned Latent Alignment for Vision-Language Navigation

📅 2026-02-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the poor generalization and training instability in vision-and-language navigation caused by tight coupling between perception and control. The authors propose a modular architecture that decouples these components: visual observations are encoded with a frozen vision-language model, and a lightweight adapter aligns these features into the latent space of a pretrained expert policy. This reformulates end-to-end policy learning as a supervised latent-space alignment task. The approach enables cross-modal and cross-environment reuse of expert behaviors, achieving strong in-distribution performance on indoor navigation tasks and robust zero-shot generalization to unseen environments, lighting conditions, and viewpoints, while incurring minimal inference overhead.

📝 Abstract
We propose LCLA (Language-Conditioned Latent Alignment), a framework for vision-language navigation that learns modular perception-action interfaces by aligning sensory observations to a latent representation of an expert policy. The expert is first trained with privileged state information, inducing a latent space sufficient for control, after which its latent interface and action head are frozen. A lightweight adapter is then trained to map raw visual-language observations, via a frozen vision-language model, into the expert's latent space, reducing the problem of visuomotor learning to supervised latent alignment rather than end-to-end policy optimization. This decoupling enforces a stable contract between perception and control, enabling expert behavior to be reused across sensing modalities and environmental variations. We instantiate LCLA and evaluate it on a vision-language indoor navigation task, where aligned latent spaces yield strong in-distribution performance and robust zero-shot generalization to unseen environments, lighting conditions, and viewpoints while remaining lightweight at inference time.
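The core training step described in the abstract — regressing adapter outputs onto a frozen expert's latents — can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the dimensions, the stand-in "expert" projection, and the plain linear adapter are all hypothetical, and the real system would use a frozen vision-language model and a pretrained privileged-state expert in their place.

```python
import numpy as np

rng = np.random.default_rng(0)
d_obs, d_latent, n = 32, 8, 256  # hypothetical feature/latent sizes

# Frozen expert latent interface: a fixed projection stands in for the
# pretrained expert encoder trained on privileged state (kept frozen).
W_expert = rng.normal(size=(d_obs, d_latent))

# Training data: frozen-VLM observation features and the expert latents
# they should align to -- supervised targets, no policy gradients.
feats = rng.normal(size=(n, d_obs))
z_expert = feats @ W_expert

# Lightweight adapter: a single linear map trained by MSE regression,
# which is the "supervised latent alignment" objective.
W_adapter = np.zeros((d_obs, d_latent))
lr = 0.05
for _ in range(500):
    z_pred = feats @ W_adapter
    grad = feats.T @ (z_pred - z_expert) / n  # gradient of mean-squared error
    W_adapter -= lr * grad

mse = float(np.mean((feats @ W_adapter - z_expert) ** 2))
```

At deployment, only the adapter runs in front of the frozen expert's action head, which is why inference overhead stays small: the expert's latent interface acts as a fixed contract that any new sensing modality can be aligned to.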
Problem

Research questions and friction points this paper is trying to address.

vision-language navigation
perception-control decoupling
zero-shot generalization
latent alignment
modular interfaces
Innovation

Methods, ideas, or system contributions that make the work stand out.

latent alignment
vision-language navigation
modular perception-action interface
zero-shot generalization
expert policy distillation
Nitesh Subedi
Iowa State University
Adam Haroon
Iowa State University
Samuel Tetteh
Iowa State University
Prajwal Koirala
Cornell University
Cody Fleming
Iowa State University
Autonomy · Control Theory · System Safety · Machine Learning · Cyber-physical Systems
Soumik Sarkar
Director, Translational AI Center, Professor, Iowa State University
Machine learning · Cyber-physical systems