🤖 AI Summary
This work addresses policy learning in off-dynamics offline reinforcement learning, where the source and target domains have mismatched transition dynamics. To mitigate this issue, the authors propose a localized dynamics-aware domain adaptation mechanism that clusters transition data from both domains and estimates cluster-level dynamics discrepancies. Based on these estimates, the method retains source-domain samples from clusters with small dynamics divergence while filtering out those from clusters with large discrepancies, enabling fine-grained and scalable data selection. In contrast to existing approaches that rely on global assumptions or per-sample filtering, this strategy strikes a favorable balance between computational efficiency and adaptability. Experiments show that the method outperforms state-of-the-art techniques across multiple environments exhibiting either global or local dynamics shifts, yielding substantial gains in policy performance.
📝 Abstract
Off-dynamics offline reinforcement learning (RL) aims to learn a policy for a target domain using limited target data and abundant source data collected under different transition dynamics. Existing methods typically address dynamics mismatch either globally over the state space or via pointwise data filtering; these approaches can miss localized cross-domain similarities or incur high computational cost. We propose Localized Dynamics-Aware Domain Adaptation (LoDADA), which exploits localized dynamics mismatch to better reuse source data. LoDADA clusters transitions from source and target datasets and estimates cluster-level dynamics discrepancy via domain discrimination. Source transitions from clusters with small discrepancy are retained, while those from clusters with large discrepancy are filtered out. This yields a fine-grained and scalable data selection strategy that avoids overly coarse global assumptions and expensive per-sample filtering. We provide theoretical insights and extensive experiments across environments with diverse global and local dynamics shifts. Results show that LoDADA consistently outperforms state-of-the-art off-dynamics offline RL methods by better leveraging localized dynamics mismatch.
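The cluster-then-filter idea in the abstract can be illustrated with a minimal toy sketch. This is not the paper's implementation: the cluster count `k`, the threshold `tau`, and the discrepancy estimate are all illustrative assumptions, and a simple per-cluster feature-mean distance stands in for the learned domain discriminator the paper describes.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain k-means with deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):
        # next center: the point farthest from all chosen centers
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels

def select_source_transitions(src, tgt, k=4, tau=1.0):
    """Keep source transitions from clusters with small cross-domain discrepancy.

    src, tgt: (n, d) arrays of transition features, e.g. concatenated (s, a, s').
    Returns a boolean mask over the source rows.
    """
    labels = kmeans(np.vstack([src, tgt]), k)  # cluster both domains jointly
    src_lab, tgt_lab = labels[: len(src)], labels[len(src):]
    keep = np.zeros(len(src), dtype=bool)
    for j in range(k):
        s_j, t_j = src[src_lab == j], tgt[tgt_lab == j]
        if len(s_j) == 0 or len(t_j) == 0:
            continue  # no cross-domain evidence: drop this cluster's source data
        # crude discrepancy proxy: distance between per-cluster mean features
        # (the paper instead estimates this via domain discrimination)
        disc = np.linalg.norm(s_j.mean(0) - t_j.mean(0))
        keep[src_lab == j] = disc < tau
    return keep
```

On a toy dataset with one cluster where the domains agree and one where they diverge, the mask keeps only the source transitions from the agreeing cluster; in practice the retained source data would then augment the target dataset for offline policy learning.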