🤖 AI Summary
This work addresses the limited interpretability and real-time controllability of existing vision–language–action (VLA) models, whose multi-modal, hybrid architectures prevent the direct transfer of interpretability techniques from large language models. The authors propose the first framework for feature observability and controllability in VLAs, leveraging linear probes to decode internal representations and applying lightweight linear interventions grounded in optimal control theory. This approach enables online behavioral steering without fine-tuning. Evaluated on π₀.₅ and OpenVLA, the method preserves closed-loop control capabilities while accurately aligning agent behavior with user intent, revealing that VLAs possess an inherently interpretable and dynamically controllable latent structure.
📝 Abstract
Vision-Language-Action Models (VLAs) have shown remarkable progress towards embodied intelligence. While their architecture partially resembles that of Large Language Models (LLMs), VLAs exhibit higher complexity due to their multi-modal inputs and outputs and the often hybrid combination of transformer and diffusion heads. This is part of the reason why insights from mechanistic interpretability in LLMs, which explain how internal model representations relate to output behavior, do not trivially transfer to their VLA counterparts. In this work, we propose to close this gap by introducing and analyzing two main concepts: feature-observability and feature-controllability. In particular, we first study features that are linearly encoded in representation space, and show how they can be observed by means of a linear classifier. Then, we use a minimal linear intervention grounded in optimal control to precisely position internal representations and steer the VLA's output towards a desired region. Our results show that targeted, lightweight interventions can reliably steer a robot's behavior while preserving closed-loop capabilities. Through simulation experiments on two VLA architectures ($\pi_{0.5}$ and OpenVLA), we demonstrate that VLAs possess interpretable internal structure amenable to online adaptation without fine-tuning, enabling real-time alignment with user preferences and task requirements.
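The two concepts above can be illustrated on toy data: a linear probe reads a feature out of hidden states (observability), and a shift along the probe direction flips the decoded feature (controllability). This is a minimal sketch on synthetic vectors, not the paper's implementation; the feature, probe fit, and intervention strength `alpha` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for VLA hidden states: a binary feature (e.g. "move left"
# vs. "move right") is linearly encoded along direction w_true plus noise.
d, n = 32, 500
w_true = rng.normal(size=d)
w_true /= np.linalg.norm(w_true)
labels = rng.integers(0, 2, size=n)                       # feature value per state
h = rng.normal(size=(n, d)) + 3.0 * np.outer(2.0 * labels - 1.0, w_true)

# Feature observability: fit a linear probe (least squares on +/-1 targets).
y = 2.0 * labels - 1.0
w_probe, *_ = np.linalg.lstsq(h, y, rcond=None)

def decode(states):
    """Read the encoded feature out of hidden states with the linear probe."""
    return (states @ w_probe > 0).astype(int)

acc = (decode(h) == labels).mean()

# Feature controllability: a minimal linear intervention shifts a state
# along the probe direction until the decoded feature flips.
h0 = h[labels == 0].mean(axis=0)                          # prototypical feature=0 state
alpha = 10.0                                              # intervention strength (assumed)
h_steered = h0 + alpha * w_probe / np.linalg.norm(w_probe)

print(f"probe accuracy: {acc:.2f}")
print("decoded before:", decode(h0[None])[0], "after:", decode(h_steered[None])[0])
```

In the paper the intervention is derived from optimal control rather than a fixed step size, but the mechanism sketched here is the same: a lightweight linear edit of the representation, applied online without any fine-tuning of the model weights.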