🤖 AI Summary
Behavior Foundation Models (BFMs) suffer from skill embedding mismatch and inference bias in zero-shot reinforcement learning due to unsupervised pretraining, limiting downstream performance.
Method: We propose a lossless (no performance drop during adaptation), fast online adaptation method that requires no parameter fine-tuning. Our approach first identifies an implicitly superior subset of pre-trained skills and constructs a low-dimensional task-embedding space. It then introduces a dual-path adaptation mechanism—comprising both actor-critic and actor-only branches—to mitigate policy "forgetting" during early adaptation. Adaptation completes within tens of environment interactions.
Contribution/Results: Evaluated on navigation and motor control tasks, our method improves over zero-shot baselines by 10–40%, significantly outperforming existing adaptation approaches. It achieves both high efficiency and stability in zero-shot policy enhancement, marking the first work to exploit latent skill subsets for rapid, parameter-free online adaptation of BFMs.
📝 Abstract
Unsupervised zero-shot reinforcement learning (RL) has emerged as a powerful paradigm for pretraining behavioral foundation models (BFMs), enabling agents to solve a wide range of downstream tasks specified via reward functions in a zero-shot fashion, i.e., without additional test-time learning or planning. This is achieved by learning self-supervised task embeddings alongside corresponding near-optimal behaviors and incorporating an inference procedure to directly retrieve the latent task embedding and associated policy for any given reward function. Despite promising results, zero-shot policies are often suboptimal due to errors induced by the unsupervised training process, the embedding, and the inference procedure. In this paper, we focus on devising fast adaptation strategies to improve the zero-shot performance of BFMs in a few steps of online interaction with the environment while avoiding any performance drop during the adaptation process. Notably, we demonstrate that existing BFMs learn a set of skills containing more performant policies than those identified by their inference procedure, making them well-suited for fast adaptation. Motivated by this observation, we propose both actor-critic and actor-only fast adaptation strategies that search in the low-dimensional task-embedding space of the pre-trained BFM to rapidly improve the performance of its zero-shot policies on any downstream task. In particular, our approach mitigates the initial "unlearning" phase commonly observed when fine-tuning pre-trained RL models. We evaluate our fast adaptation strategies on top of four state-of-the-art zero-shot RL methods in multiple navigation and locomotion domains. Our results show that they achieve 10-40% improvement over their zero-shot performance in a few tens of episodes, outperforming existing baselines.
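The actor-only branch described above can be illustrated with a minimal sketch: starting from the zero-shot task embedding, perform a greedy local search in the latent space, accepting a candidate embedding only if its episode return improves on the best found so far (which is what avoids any performance drop below the zero-shot policy). Everything here is a toy stand-in, not the paper's implementation: `episode_return` replaces real environment rollouts of the z-conditioned policy with a synthetic quadratic objective, and `z_star` is a hypothetical "better" skill assumed to exist in the pre-trained latent set.

```python
import numpy as np

def episode_return(z, z_star):
    # Toy stand-in for rolling out the z-conditioned policy pi(s, z)
    # and summing rewards; higher is better, peaked at z_star.
    return -float(np.sum((z - z_star) ** 2))

def adapt_embedding(z0, z_star, n_iters=50, pop=16, sigma=0.1, seed=0):
    """Actor-only adaptation sketch: local random search in the
    low-dimensional task-embedding space. Keeping the best z seen so
    far means performance never falls below the zero-shot baseline."""
    rng = np.random.default_rng(seed)
    best_z, best_r = z0, episode_return(z0, z_star)
    for _ in range(n_iters):
        # Sample a small population of perturbed embeddings around the best z.
        cands = best_z + sigma * rng.standard_normal((pop, z0.shape[0]))
        returns = np.array([episode_return(c, z_star) for c in cands])
        i = int(np.argmax(returns))
        if returns[i] > best_r:  # greedy: accept only improvements
            best_z, best_r = cands[i], returns[i]
    return best_z, best_r

z0 = np.zeros(8)          # zero-shot embedding from the inference procedure
z_star = np.full(8, 0.5)  # hypothetical superior skill in the latent set
z_adapted, r_adapted = adapt_embedding(z0, z_star)
```

The actor-critic branch in the paper additionally learns a value estimate to guide this search instead of relying purely on rollout returns; the greedy acceptance rule shown here is the simplest way to see why adaptation cannot degrade the zero-shot policy.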