🤖 AI Summary
This work addresses the challenge of efficiently evaluating how visual representations affect robotic control policies without resorting to costly policy rollouts. It proposes a proxy metric: the accuracy with which environment state (geometry, object structure, and physical properties) can be decoded from an encoder's features. Specifically, pretrained visual encoders are frozen and probed in simulation, where ground-truth state is available, to measure how well the latent environment state can be reconstructed from their representations. This probing accuracy correlates strongly with downstream control performance across diverse environments and learning setups, significantly outperforming existing evaluation methods. The results underscore the critical role of encoding latent physical state in enabling policy generalization and provide a reliable, efficient criterion for selecting visual representations in robotic applications.
📝 Abstract
The choice of visual representation is key to scaling generalist robot policies. However, direct evaluation via policy rollouts is expensive, even in simulation. Existing proxy metrics focus on the representation's capacity to capture narrow aspects of the visual world, like object shape, limiting generalization across environments. In this paper, we take an analytical perspective: we probe pretrained visual encoders by measuring how well they support decoding of environment state -- including geometry, object structure, and physical attributes -- from images. Leveraging simulation environments with access to ground-truth state, we show that this probing accuracy strongly correlates with downstream policy performance across diverse environments and learning settings, significantly outperforming prior metrics and enabling efficient representation selection. More broadly, our study provides insight into the representational properties that support generalizable manipulation, suggesting that learning to encode the latent physical state of the environment is a promising objective for control.
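The probing protocol described above can be illustrated with a minimal sketch: freeze a visual encoder, collect pairs of encoder features and ground-truth simulator state, fit a simple linear probe, and report held-out decoding accuracy as the proxy metric. The paper does not publish this exact code; all function names, shapes, and the synthetic stand-ins for encoder features and state below are illustrative assumptions.

```python
import numpy as np

def fit_linear_probe(features, states, reg=1e-3):
    """Closed-form ridge regression from frozen encoder features to state.

    features: (N, D) array of encoder outputs for N simulator frames.
    states:   (N, S) array of ground-truth state (e.g. object poses).
    """
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # append bias
    d = X.shape[1]
    W = np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ states)
    return W

def probe_r2(W, features, states):
    """Held-out coefficient of determination (R^2) of the probe --
    the scalar 'decoding accuracy' used as the proxy metric."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    pred = X @ W
    ss_res = np.sum((states - pred) ** 2)
    ss_tot = np.sum((states - states.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins: a real pipeline would render simulator images,
    # pass them through each candidate pretrained encoder, and pair the
    # resulting features with the simulator's ground-truth state vector.
    feats = rng.normal(size=(512, 64))             # "encoder features"
    true_map = rng.normal(size=(64, 7))            # hidden feature->state map
    states = feats @ true_map + 0.01 * rng.normal(size=(512, 7))
    W = fit_linear_probe(feats[:400], states[:400])
    print(f"held-out probe R^2: {probe_r2(W, feats[400:], states[400:]):.3f}")
```

Under this scheme, each candidate encoder gets one scalar score (held-out R^2, or classification accuracy for discrete attributes), and encoders are ranked by that score instead of by expensive policy rollouts.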