🤖 AI Summary
Offline model-based reinforcement learning (MBRL) often produces accurate model predictions yet poor policy performance, primarily because confounding in offline datasets causes an objective mismatch between model learning and policy optimization.
Method: We identify confounding as the root cause of this mismatch and propose BECAUSE, a framework that learns causally disentangled joint state-action bilinear representations, jointly optimizing model prediction and policy learning to mitigate distributional shift. We provide theoretical guarantees showing tighter error bounds and improved sample efficiency.
Contribution/Results: BECAUSE significantly outperforms state-of-the-art methods across 18 heterogeneous offline RL benchmarks. It demonstrates robustness under sparse-data and high-confounding regimes, establishing the first provably sound and generalizable paradigm for causally grounded offline MBRL.
📝 Abstract
Offline model-based reinforcement learning (MBRL) enhances data efficiency by utilizing pre-collected datasets to learn models and policies, especially in scenarios where exploration is costly or infeasible. Nevertheless, its performance often suffers from the objective mismatch between model and policy learning, resulting in inferior performance despite accurate model predictions. This paper first identifies that the primary source of this mismatch is the underlying confounders present in offline data for MBRL. Subsequently, we introduce **B**ilin**E**ar **CAUS**al r**E**presentation (BECAUSE), an algorithm that captures causal representations of both states and actions to reduce the influence of distribution shift, thus mitigating the objective mismatch problem. Comprehensive evaluations on 18 tasks that vary in data quality and environment context demonstrate the superior performance of BECAUSE over existing offline RL algorithms. We show the generalizability and robustness of BECAUSE with fewer samples or larger numbers of confounders. Additionally, we offer a theoretical analysis of BECAUSE, proving its error bound and sample efficiency when integrating causal representation into offline MBRL.
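To make the bilinear-representation idea concrete, here is a minimal sketch of how a bilinear world model can combine separate state and action representations. All names and shapes here are our own illustrative assumptions, not the paper's implementation: we use fixed random linear encoders in place of learned networks, and a core tensor that mixes the two representations to predict the next state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative, not from the paper):
d_s, d_a, k = 8, 3, 4  # raw state dim, raw action dim, representation dim

# Linear encoders phi(s) = Phi @ s and psi(a) = Psi @ a,
# stand-ins for learned causal representation networks.
Phi = rng.normal(size=(k, d_s))
Psi = rng.normal(size=(k, d_a))

# Core tensor W: each next-state coordinate is a bilinear form
# of the state and action representations.
W = rng.normal(size=(d_s, k, k))

def predict_next_state(s, a):
    """Bilinear prediction: s'_i = phi(s)^T W_i psi(a)."""
    phi, psi = Phi @ s, Psi @ a
    return np.einsum("ijk,j,k->i", W, phi, psi)

s = rng.normal(size=d_s)
a = rng.normal(size=d_a)
s_next = predict_next_state(s, a)  # predicted next state, shape (d_s,)
```

Because the state and action enter only through their (low-dimensional) representations, the model's prediction error can be controlled in the representation space, which is the structural property the paper's error-bound analysis exploits.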