🤖 AI Summary
This work investigates whether a single pretrained representation can enable zero-shot optimal control under arbitrary reward functions, with no downstream fine-tuning. To this end, it analyzes and simplifies the forward-backward (FB) representation learning framework, proposing an unsupervised pretraining method, termed one-step FB, that performs only a single policy improvement step. The analysis characterizes what the FB objective can and cannot achieve, showing that it supports one step of policy improvement but does not guarantee global optimality, and draws on rank matching, fitted Q-evaluation, and contraction mapping theory to construct an efficient, general-purpose representation. Evaluated across ten state-based and image-based continuous control tasks, one-step FB reduces convergence error by up to 10⁵-fold and improves average zero-shot performance by 24%.
📝 Abstract
As machine learning has moved towards leveraging large models as priors for downstream tasks, the community has debated the right form of prior for solving reinforcement learning (RL) problems. To prefetch as much computation as possible, one would attempt to learn a prior over optimal policies for some yet-to-be-determined reward function. Recent work on forward-backward (FB) representation learning has tried this, arguing that an unsupervised representation learning procedure can enable optimal control over arbitrary rewards without further fine-tuning. However, FB's training objective and learning behavior remain mysterious. In this paper, we demystify FB by clarifying when such representations can exist, what its objective optimizes, and how it converges in practice. We draw connections with rank matching, fitted Q-evaluation, and contraction mappings. Our analysis suggests a simplified unsupervised pre-training method for RL that, instead of aiming for optimal control, performs one step of policy improvement. We call our proposed method **one-step forward-backward representation learning (one-step FB)**. Experiments in didactic settings, as well as in 10 state-based and image-based continuous control domains, demonstrate that one-step FB converges to errors $10^5\times$ smaller and improves zero-shot performance by $+24\%$ on average. Our project website is available at https://chongyi-zheng.github.io/onestep-fb.
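To make the zero-shot control recipe in the abstract concrete, the following is a minimal NumPy sketch of how a pretrained FB representation is typically used at test time: the task embedding is inferred from reward-labeled states as $z = \mathbb{E}[r(s)\,B(s)]$, and the agent acts greedily with respect to $Q_z(s, a) = F(s, a, z)^\top z$. The maps `F` and `B` here are random linear stand-ins, and the reward is a hypothetical one, chosen only for illustration; they are not the networks or tasks from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, s_dim, a_dim = 8, 4, 2  # representation, state, and action dims (toy values)

# Random linear stand-ins for the pretrained forward and backward maps.
W_F = rng.normal(size=(d, s_dim + a_dim + d))
W_B = rng.normal(size=(d, s_dim))

def F(s, a, z):
    """Toy forward map F(s, a, z) -> R^d."""
    return W_F @ np.concatenate([s, a, z])

def B(s):
    """Toy backward map B(s) -> R^d."""
    return W_B @ s

# Step 1: infer the task embedding z = E[r(s) B(s)] from reward-labeled
# states (here, a hypothetical reward equal to the first state coordinate).
states = rng.normal(size=(256, s_dim))
rewards = states[:, 0]
z = (rewards[:, None] * np.stack([B(s) for s in states])).mean(axis=0)

# Step 2: act greedily w.r.t. Q_z(s, a) = F(s, a, z)^T z over a discrete
# set of candidate actions -- one step of policy improvement, no fine-tuning.
s0 = rng.normal(size=s_dim)
candidates = rng.normal(size=(16, a_dim))
q_values = np.array([F(s0, a, z) @ z for a in candidates])
best_action = candidates[q_values.argmax()]
```

The point of the sketch is the division of labor: all learning happens during unsupervised pretraining of `F` and `B`, while adapting to a new reward reduces to one expectation (for `z`) and one greedy maximization.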