Policy Regularization on Globally Accessible States in Cross-Dynamics Reinforcement Learning

📅 2025-03-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In cross-dynamics reinforcement learning, policies learned by imitation degrade under dynamics mismatch: states visited in the source environment may be unreachable in the target environment. To address this, we propose a policy regularization framework grounded in *globally accessible states*, i.e., states with nonzero visitation frequency under every considered dynamics. We formally define this notion and establish an F-distance-based regularization theory, leading to the ASOR algorithm. ASOR jointly optimizes for reward maximization and observation-based imitation, applying distribution-alignment constraints *only* on globally accessible states, so that inaccessible states cannot distort the learned policy. The framework is compatible with both offline and off-policy RL and supports plug-and-play integration across paradigms. Extensive evaluation on multiple benchmarks demonstrates significant improvements over state-of-the-art methods, validating robustness and generalization across dynamics shifts.
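A minimal sketch of the core mechanism in a tabular setting, assuming empirical state-visitation frequencies are available for each dynamics (all function and variable names below are illustrative, not the paper's, and a plain KL divergence stands in for the paper's F-distance):

```python
import torch

def globally_accessible_mask(visitation: torch.Tensor, eps: float = 0.0) -> torch.Tensor:
    """Mark states with nonzero visitation frequency under *every* dynamics.

    visitation: (num_dynamics, num_states) nonnegative visitation frequencies.
    """
    return (visitation > eps).all(dim=0)

def masked_alignment_penalty(agent_dist: torch.Tensor,
                             expert_dist: torch.Tensor,
                             mask: torch.Tensor) -> torch.Tensor:
    """KL surrogate for the paper's F-distance, restricted to accessible states.

    Expert mass on states unreachable in the target dynamics is dropped and
    both distributions are renormalized, so unmatchable states contribute
    nothing to the regularizer.
    """
    p = expert_dist[mask]
    q = agent_dist[mask]
    p = p / p.sum().clamp_min(1e-12)
    q = q / q.sum().clamp_min(1e-12)
    return (p * (p.clamp_min(1e-12) / q.clamp_min(1e-12)).log()).sum()

# Example: 3 dynamics, 5 states; the last state is never visited under the
# third dynamics, so it is excluded from the alignment constraint.
visitation = torch.tensor([[3., 1., 2., 0.5, 1.],
                           [2., 2., 1., 1.0, 1.],
                           [1., 3., 2., 2.0, 0.]])
mask = globally_accessible_mask(visitation)  # [True, True, True, True, False]
```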

📝 Abstract
To learn from data collected in diverse dynamics, Imitation from Observation (IfO) methods leverage expert state trajectories based on the premise that recovering expert state distributions in other dynamics facilitates policy learning in the current one. However, Imitation Learning inherently imposes a performance upper bound on learned policies. Additionally, as the environment dynamics change, certain expert states may become inaccessible, rendering their distributions less valuable for imitation. To address this, we propose a novel framework that integrates reward maximization with IfO, employing F-distance regularized policy optimization. This framework enforces constraints on globally accessible states (those with nonzero visitation frequency across all considered dynamics), mitigating the challenge posed by inaccessible states. By instantiating F-distance in different ways, we derive two theoretical analyses and develop a practical algorithm called Accessible State Oriented Policy Regularization (ASOR). ASOR serves as a general add-on module that can be incorporated into various RL approaches, including offline RL and off-policy RL. Extensive experiments across multiple benchmarks demonstrate ASOR's effectiveness in enhancing state-of-the-art cross-domain policy transfer algorithms, significantly improving their performance.
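The abstract notes that instantiating the F-distance in different ways yields the two theoretical analyses. As a hedged illustration only (the paper's actual instantiations are not reproduced here), one common way to instantiate such a distance family is via f-divergence generators swapped into a single generic penalty, D_f(p‖q) = Σ_s q(s)·f(p(s)/q(s)):

```python
import torch

def f_kl(t: torch.Tensor) -> torch.Tensor:
    return t * t.clamp_min(1e-12).log()  # generator of forward KL divergence

def f_tv(t: torch.Tensor) -> torch.Tensor:
    return 0.5 * (t - 1.0).abs()         # generator of total variation distance

def f_divergence(p: torch.Tensor, q: torch.Tensor, f) -> torch.Tensor:
    """Generic D_f(p || q) = sum_s q(s) * f(p(s) / q(s)) over a discrete support."""
    q = q.clamp_min(1e-12)
    return (q * f(p / q)).sum()
```

Restricting the support of `p` and `q` to globally accessible states, as in the sketch above, gives the masked variant of either choice.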
Problem

Research questions and friction points this paper is trying to address.

How can policy learning exploit data collected under diverse dynamics when some expert states are inaccessible in the target environment?
Imitation Learning imposes an inherent performance ceiling on learned policies; surpassing it requires integrating reward maximization.
Cross-domain policy transfer degrades when distribution-alignment constraints are applied to states that cannot be reached under the target dynamics.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates reward maximization with Imitation from Observation
Uses F-distance regularized policy optimization
Develops Accessible State Oriented Policy Regularization (ASOR), a plug-and-play add-on for offline and off-policy RL (see the integration sketch after this list)
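A hedged sketch of the plug-and-play usage pattern these bullets describe: the masked alignment penalty is added as a single term to an existing off-policy actor objective, with `beta` trading off reward maximization against observation-based imitation (the names and the base loss are assumptions for illustration, not ASOR's exact update):

```python
import torch

def actor_loss_with_asor(q_values: torch.Tensor,
                         alignment_penalty: torch.Tensor,
                         beta: float = 0.1) -> torch.Tensor:
    """Base off-policy actor objective (maximize Q under the current policy)
    augmented with the accessible-state alignment regularizer."""
    return -q_values.mean() + beta * alignment_penalty
```

Because the regularizer enters as one additive term, the same pattern would apply unchanged to an offline RL objective.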