Zero-Shot Off-Policy Learning

📅 2026-02-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses off-policy learning in zero-shot reinforcement learning, where distributional shift and value overestimation hinder performance. The authors propose a training-free method for rapid adaptation to new tasks at test time. The key innovation is a theoretical connection, established for the first time, between successor measures and stationary density ratios; from it the authors derive an optimal importance sampling ratio that corrects the stationary distribution on the fly for any new task. Integrated into a forward–backward representation framework, the approach proves empirically effective across diverse benchmarks, including SMPL Humanoid motion tracking, ExoRL continuous control, and long-horizon OGBench tasks, significantly improving task adaptability in training-free settings.
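As a rough sketch of the claimed connection, in standard successor-measure notation (the symbols below follow common conventions from the forward–backward literature and are not taken from the paper): the successor measure of a policy, its induced discounted occupancy, and the resulting density ratio can be written as

```latex
% Successor measure of policy \pi, with density m^\pi w.r.t. the dataset
% distribution \rho:  M^\pi(s, a, X) = \int_X m^\pi(s, a, s')\, \rho(ds')
M^\pi(s, a, X) = \sum_{t \ge 0} \gamma^t \Pr(s_{t+1} \in X \mid s_0 = s,\ a_0 = a,\ \pi)

% Discounted occupancy of future states under \pi from initial distribution \mu:
d^\pi(s') = (1 - \gamma)\, \mathbb{E}_{s_0 \sim \mu,\ a_0 \sim \pi(\cdot \mid s_0)}\big[ m^\pi(s_0, a_0, s') \big]\, \rho(s')

% Hence the stationary density ratio usable as an importance-sampling weight:
w^\pi(s') = \frac{d^\pi(s')}{\rho(s')} = (1 - \gamma)\, \mathbb{E}_{s_0,\, a_0}\big[ m^\pi(s_0, a_0, s') \big]

% With a forward-backward representation m^{\pi_z}(s, a, s') \approx F(s, a, z)^\top B(s'),
% this yields a training-free estimate  w_z(s') \approx (1 - \gamma)\, \bar{F}_z^\top B(s'),
% where \bar{F}_z = \mathbb{E}[ F(s_0, a_0, z) ].
```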

📝 Abstract
Off-policy learning methods seek to derive an optimal policy directly from a fixed dataset of prior interactions. This objective presents significant challenges, primarily due to the inherent distributional shift and value function overestimation bias. These issues become even more pronounced in zero-shot reinforcement learning, where an agent trained on reward-free data must adapt to new tasks at test time without additional training. In this work, we address the off-policy problem in a zero-shot setting by establishing a theoretical connection between successor measures and stationary density ratios. Using this insight, our algorithm can infer optimal importance sampling ratios, effectively performing a stationary distribution correction with an optimal policy for any task on the fly. We benchmark our method on motion tracking with the SMPL Humanoid, continuous control on ExoRL, and long-horizon OGBench tasks. Our technique integrates seamlessly into forward-backward representation frameworks and enables fast adaptation to new tasks in a training-free regime. More broadly, this work bridges off-policy learning and zero-shot adaptation, offering benefits to both research areas.
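For concreteness, here is a minimal Python sketch of how such a training-free correction could plug into a forward-backward framework at test time. The callables `F` and `B`, their shapes, and the weighting formula are assumptions for illustration; this is not the authors' implementation.

```python
import numpy as np

# Illustrative sketch of training-free task adaptation with a learned
# forward-backward (FB) representation. F and B are assumed pre-trained:
#   F(states, actions, z) -> (n, d)  forward embeddings for task vector z
#   B(states)             -> (n, d)  backward embeddings
# All names and shapes here are assumptions, not the paper's code.

def infer_task_vector(B, states, rewards):
    """Standard FB zero-shot task inference: z = E_{s ~ D}[ r(s) * B(s) ].

    states:  (n, state_dim) reward-labeled samples from the dataset
    rewards: (n,) rewards observed for the new task
    """
    b = B(states)                          # (n, d)
    return (rewards[:, None] * b).mean(0)  # (d,)

def stationary_ratio_weights(F, B, init_states, init_actions, next_states,
                             z, gamma=0.99):
    """Density-ratio weights w(s') ~ (1 - gamma) * E[F(s0, a0, z)]^T B(s'),
    following the successor-measure / stationary-density-ratio connection.
    No gradient updates are performed: everything is computed from the
    frozen F and B networks and the inferred task vector z.
    """
    f_bar = F(init_states, init_actions, z).mean(0)  # (d,) mean forward embedding
    w = (1.0 - gamma) * (B(next_states) @ f_bar)     # (m,) raw ratio estimates
    w = np.clip(w, 0.0, None)                        # density ratios are nonnegative
    return w / (w.mean() + 1e-8)                     # self-normalized weights
```

The returned self-normalized weights could then reweight dataset transitions, e.g. in a weighted evaluation or policy-extraction step, toward the occupancy of the policy for the inferred task, with no retraining.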
Problem

Research questions and friction points this paper is trying to address.

zero-shot reinforcement learning
off-policy learning
distributional shift
value overestimation
reward-free data
Innovation

Methods, ideas, or system contributions that make the work stand out.

zero-shot reinforcement learning
off-policy learning
successor measures
stationary density ratio
importance sampling
🔎 Similar Papers
No similar papers found.