🤖 AI Summary
This work challenges the conventional assumption that visual encoders serve solely as "feature extractors" in end-to-end visuomotor policies, asking instead whether they actively participate in decision-making. It proposes the **Visual Alignment Testing** framework, a novel empirical methodology for assessing the intrinsic decision-making capacity of visual encoders. The experiments demonstrate, for the first time, that end-to-end-trained visual encoders possess significant task-relevant decision-making capability; this capacity degrades markedly when encoders are initialized with out-of-domain (OOD) pretraining, producing an average 42% drop in motor-control performance. These findings indicate that under motor supervision, visual encoders actively learn task semantics and fulfill both feature-representation and decision-making roles. The results motivate task-conditioned, context-aware visual encoder designs and provide support for rethinking encoder architectures and training objectives in embodied AI.
📝 Abstract
An end-to-end (E2E) visuomotor policy is typically treated as a unified whole, yet recent approaches that pretrain the visual encoder on out-of-domain (OOD) data cleanly separate the encoder from the rest of the network, referring to the remainder as the policy. We propose Visual Alignment Testing, an experimental framework designed to evaluate the validity of this functional separation. Our results indicate that in E2E-trained models, visual encoders actively contribute to decision-making as a result of motor-data supervision, contradicting the assumed functional separation. In contrast, OOD-pretrained models, whose encoders lack this capability, suffer an average 42% performance drop on our benchmarks relative to the state-of-the-art performance of E2E policies. We believe this initial exploration of the visual encoder's role offers a first step toward pretraining methods that account for decision-making ability, such as task-conditioned or context-aware encoders.
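As a loose illustration only (this is not the paper's Visual Alignment Testing procedure, whose details are not given here), one common way to ask whether a frozen encoder's features carry action-relevant information is a linear probe: fit a regularized linear map from features to actions and compare the fit quality across encoders. The function name, the synthetic "E2E-trained" vs. "OOD-pretrained" feature stand-ins, and all numbers below are hypothetical.

```python
# Hypothetical linear-probe sketch: how much action-relevant information
# do a frozen encoder's features carry? Higher R^2 = more linearly
# decodable action information. Synthetic data only; illustrative names.
import numpy as np

def probe_r2(features, actions, reg=1e-3):
    """Fit a ridge-regression probe (actions ~ features) and return R^2."""
    X = np.hstack([features, np.ones((len(features), 1))])  # append bias column
    # Closed-form ridge solution: (X^T X + reg*I)^-1 X^T y
    W = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ actions)
    pred = X @ W
    ss_res = ((actions - pred) ** 2).sum()
    ss_tot = ((actions - actions.mean(axis=0)) ** 2).sum()
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
n, d, a = 500, 32, 4  # samples, feature dim, action dim
# Stand-in for an E2E-trained encoder: features linearly predictive of actions.
task_feats = rng.normal(size=(n, d))
actions = task_feats @ rng.normal(size=(d, a)) + 0.05 * rng.normal(size=(n, a))
# Stand-in for an OOD-pretrained encoder: features unrelated to the actions.
ood_feats = rng.normal(size=(n, d))

print(f"task-aligned features R^2: {probe_r2(task_feats, actions):.3f}")
print(f"unrelated features    R^2: {probe_r2(ood_feats, actions):.3f}")
```

On this synthetic setup the probe fits the task-aligned features nearly perfectly and the unrelated features poorly; with real encoders, such a gap would be one (weak) signal that motor supervision has pushed task-relevant structure into the encoder.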