Safe Domain Randomization via Uncertainty-Aware Out-of-Distribution Detection and Policy Adaptation

📅 2025-07-08
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Deploying reinforcement learning (RL) policies in real-world settings faces three key challenges: distributional shift, safety constraints, and the inability to interact with the target domain. To address these, we propose a safe cross-domain generalization framework that requires no direct interaction with the target domain. First, we quantify policy uncertainty via critic ensembles to enable high-confidence out-of-distribution (OOD) detection. Second, we integrate progressive environmental randomization within simulation to iteratively refine the policy specifically in high-uncertainty regions, thereby jointly enhancing safety and robustness. Unlike conventional domain randomization or off-dynamics RL approaches, our method eliminates reliance on trial-and-error interactions in the target domain. Experiments on MuJoCo benchmarks and a quadruped robot platform demonstrate significant improvements in OOD detection reliability, cross-domain transfer performance, and sample efficiency over existing baselines.

๐Ÿ“ Abstract
Deploying reinforcement learning (RL) policies in the real world involves significant challenges, including distribution shifts, safety concerns, and the impracticality of direct interactions during policy refinement. Existing methods, such as domain randomization (DR) and off-dynamics RL, enhance policy robustness by direct interaction with the target domain, an inherently unsafe practice. We propose Uncertainty-Aware RL (UARL), a novel framework that prioritizes safety during training by addressing Out-Of-Distribution (OOD) detection and policy adaptation without requiring direct interactions in the target domain. UARL employs an ensemble of critics to quantify policy uncertainty and incorporates progressive environmental randomization to prepare the policy for diverse real-world conditions. By iteratively refining over high-uncertainty regions of the state space in simulated environments, UARL enhances robust generalization to the target domain without explicitly training on it. We evaluate UARL on MuJoCo benchmarks and a quadrupedal robot, demonstrating its effectiveness in reliable OOD detection, improved performance, and enhanced sample efficiency compared to baselines.
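The ensemble-of-critics idea can be sketched compactly: train several independently initialized critics and treat their disagreement on Q(s, a) as an epistemic-uncertainty signal, flagging a state-action pair as OOD when disagreement exceeds a calibrated threshold. The class names, the linear critics, and the std-based disagreement rule below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class CriticEnsemble:
    """Ensemble of independently initialized linear critics Q_i(s, a).

    A real implementation would train each critic on a TD objective;
    here random weights stand in for trained parameters.
    """

    def __init__(self, n_critics: int, obs_dim: int, act_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        # One weight vector per critic over the concatenated (obs, act) input.
        self.weights = rng.normal(size=(n_critics, obs_dim + act_dim))

    def q_values(self, obs: np.ndarray, act: np.ndarray) -> np.ndarray:
        # Returns one Q-estimate per critic, shape (n_critics,).
        x = np.concatenate([obs, act])
        return self.weights @ x

    def uncertainty(self, obs: np.ndarray, act: np.ndarray) -> float:
        # Disagreement among critics is the epistemic-uncertainty proxy.
        return float(np.std(self.q_values(obs, act)))

def is_ood(ensemble: CriticEnsemble, obs, act, threshold: float) -> bool:
    # Flag state-action pairs where critic disagreement exceeds a
    # calibrated threshold (how the threshold is set is an assumption here).
    return ensemble.uncertainty(obs, act) > threshold
```

In practice the threshold would be calibrated on in-distribution rollouts (e.g. a high percentile of observed disagreement), so that the detector fires only on genuinely unfamiliar states.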
Problem

Research questions and friction points this paper is trying to address.

Detects Out-Of-Distribution states safely during RL training
Adapts policies without direct target domain interactions
Enhances robustness via uncertainty-aware progressive randomization
Innovation

Methods, ideas, or system contributions that make the work stand out.

UARL detects OOD states without target interaction
Ensemble critics quantify policy uncertainty effectively
Progressive randomization enhances robust generalization
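The progressive-randomization loop above can be sketched as follows: step through widening randomization ranges in simulation, and spend extra refinement passes wherever the critic-ensemble uncertainty is high. The function names (`train_policy`, `estimate_uncertainty`), the fixed range schedule, and the extra-pass rule are placeholder assumptions, not the paper's API.

```python
def progressive_randomization(train_policy, estimate_uncertainty,
                              ranges=(0.1, 0.3, 0.5, 0.8),
                              threshold=0.5, extra_passes=2):
    """Refine a policy over progressively wider randomization ranges.

    train_policy(r): one training pass in simulation with randomization
        magnitude r (placeholder for the actual RL update).
    estimate_uncertainty(r): scalar uncertainty at range r, e.g. mean
        critic-ensemble disagreement over rollouts.
    Returns a list of (range, final_uncertainty, passes) tuples.
    """
    history = []
    for r in ranges:
        train_policy(r)
        u = estimate_uncertainty(r)
        passes = 1
        # High-uncertainty region: spend additional refinement here
        # before moving on, mirroring the iterative-refinement idea.
        if u > threshold:
            for _ in range(extra_passes):
                train_policy(r)
                u = estimate_uncertainty(r)
                passes += 1
        history.append((r, u, passes))
    return history
```

Because refinement effort concentrates where uncertainty is high, the policy is hardened against exactly the conditions it handles worst, without any interaction with the target domain.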