Feature Extractor or Decision Maker: Rethinking the Role of Visual Encoders in Visuomotor Policies

📅 2024-09-30
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work challenges the conventional assumption that visual encoders serve purely as "feature extractors" in end-to-end visuomotor policies, asking whether they also actively participate in decision-making. We propose **Visual Alignment Testing**, an empirical framework for assessing the intrinsic decision-making capacity of visual encoders. Our experiments show that end-to-end trained visual encoders possess significant task-relevant decision-making capability, and that this capacity degrades markedly when encoders are initialized with out-of-domain (OOD) pretraining, producing an average 42% drop in motor-control performance. These findings indicate that under motor supervision, visual encoders actively learn task semantics and jointly fulfill both feature-representation and decision-making roles, motivating task-conditioned, context-aware encoder designs and a rethinking of encoder architecture and training objectives in embodied AI.

📝 Abstract
An end-to-end (E2E) visuomotor policy is typically treated as a unified whole, but recent approaches that pretrain the visual encoder on out-of-domain (OOD) data cleanly separate the encoder from the rest of the network, with the remainder referred to as the policy. We propose Visual Alignment Testing, an experimental framework for evaluating the validity of this functional separation. Our results indicate that in E2E-trained models, visual encoders actively contribute to decision-making as a result of motor-data supervision, contradicting the assumed functional separation. In contrast, OOD-pretrained models, whose encoders lack this capability, suffer an average performance drop of 42% on our benchmarks relative to the state-of-the-art performance achieved by E2E policies. We believe this initial exploration of the visual encoder's role is a first step towards pretraining methods that account for its decision-making ability, such as developing task-conditioned or context-aware encoders.
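The abstract does not spell out how Visual Alignment Testing works, so the following is only a hypothetical sketch of the general idea it gestures at: one standard way to measure how much task-relevant decision information an encoder's features carry is a linear action probe, which freezes the encoder, fits a linear map from its features to actions, and compares the fit across encoders. Everything below (the synthetic task, both stand-in "encoders", and the `probe_error` helper) is invented for illustration and is not the paper's method.

```python
# Hypothetical linear action probe (illustration only, not the paper's
# Visual Alignment Testing framework). Both "encoders" and the task
# are synthetic.
import numpy as np

rng = np.random.default_rng(0)

def probe_error(features, actions):
    """Fit a least-squares linear probe features -> actions; return RMSE."""
    W, *_ = np.linalg.lstsq(features, actions, rcond=None)
    residual = actions - features @ W
    return float(np.sqrt(np.mean(residual ** 2)))

# Synthetic "observations": actions depend only on the first 4 dimensions.
obs = rng.normal(size=(1000, 32))
actions = obs[:, :4] @ rng.normal(size=(4, 2))

# Stand-in "E2E-trained" encoder: its features retain the
# action-relevant subspace, as motor supervision would encourage.
feats_e2e = obs[:, :8]
# Stand-in "OOD-pretrained" encoder: a fixed random projection that
# mixes all dimensions without regard to the task.
feats_ood = obs @ rng.normal(size=(32, 8))

err_e2e = probe_error(feats_e2e, actions)
err_ood = probe_error(feats_ood, actions)
print(f"probe RMSE  e2e-like: {err_e2e:.3f}  ood-like: {err_ood:.3f}")
assert err_e2e < err_ood  # task-aligned features support decisions better
```

Under these assumptions the task-aligned features yield a near-zero probe error while the random projection does not, mirroring (in toy form) the paper's claim that encoders trained under motor supervision encode decision-relevant structure that OOD pretraining misses.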
Problem

Research questions and friction points this paper is trying to address.

Evaluating functional separation in visuomotor policy components
Assessing visual encoders' role in decision-making under motor supervision
Addressing performance gap between E2E and OOD-pretrained models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes Visual Alignment Testing framework
Shows visual encoders aid decision-making
Suggests task-conditioned encoder development
👥 Authors
Ruiyu Wang
Division of Robotics, Perception and Learning, KTH Royal Institute of Technology, Brinellvägen 8, Stockholm, Sweden
Zheyu Zhuang
Division of Robotics, Perception and Learning, KTH Royal Institute of Technology, Brinellvägen 8, Stockholm, Sweden
Shutong Jin
Division of Robotics, Perception and Learning, KTH Royal Institute of Technology, Brinellvägen 8, Stockholm, Sweden
Nils Ingelhag
Division of Robotics, Perception and Learning, KTH Royal Institute of Technology, Brinellvägen 8, Stockholm, Sweden
Danica Kragic
Professor of Computer Science, KTH Royal Institute of Technology
robotics · AI · robot vision · robot learning
Florian T. Pokorny
Associate Professor, KTH Royal Institute of Technology
Machine Learning · Robotics