Provable Sim-to-Real Transfer via Offline Domain Randomization

📅 2025-06-11
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Poor generalization of reinforcement-learning agents in simulation-to-reality (Sim2Real) transfer remains a key challenge. Existing domain randomization (DR) methods neglect readily available offline real-world data. To address this, we propose Offline Domain Randomization (ODR), which leverages offline real-system data to estimate the distribution of simulator dynamics parameters, enabling robust zero-shot deployment. Theoretically, we establish the first statistical foundation for ODR, proving consistency of the parameter-distribution estimator and deriving a policy error bound that is tighter than uniform DR's by a factor of *O*(*M*). Algorithmically, we design Entropy-regularized Distributionally Robust Policy Optimization (E-DROPO) to mitigate variance collapse during optimization. Empirical evaluation across multiple robotic control tasks demonstrates that ODR significantly improves both the breadth of randomization and zero-shot Sim2Real transfer performance.

πŸ“ Abstract
Reinforcement-learning agents often struggle when deployed from simulation to the real world. A dominant strategy for reducing the sim-to-real gap is domain randomization (DR), which trains the policy across many simulators produced by sampling dynamics parameters, but standard DR ignores offline data already available from the real system. We study offline domain randomization (ODR), which first fits a distribution over simulator parameters to an offline dataset. While a growing body of empirical work reports substantial gains with algorithms such as DROPO, the theoretical foundations of ODR remain largely unexplored. In this work, we (i) formalize ODR as maximum-likelihood estimation over a parametric simulator family, (ii) prove consistency of this estimator under mild regularity and identifiability conditions, showing it converges to the true dynamics as the dataset grows, (iii) derive gap bounds demonstrating that ODR's sim-to-real error is up to an O(M) factor tighter than uniform DR in the finite-simulator case (with analogous gains in the continuous setting), and (iv) introduce E-DROPO, a new version of DROPO that adds an entropy bonus to prevent variance collapse, yielding broader randomization and more robust zero-shot transfer in practice.
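Step (i) above — fitting a distribution over simulator parameters to offline real-system data by maximum likelihood — can be sketched on a toy one-dimensional problem. Everything here is an illustrative assumption (the linear dynamics `s' = θ·s + a`, the Gaussian noise level, and the grid-based likelihood), not the paper's actual simulator family or algorithm:

```python
import numpy as np

def simulate_step(s, a, theta):
    # Hypothetical parametric simulator: linear dynamics s' = theta * s + a.
    return theta * s + a

def fit_parameter_distribution(transitions, thetas, obs_noise=0.1):
    """ODR step 1 (sketch): score each candidate dynamics parameter by the
    Gaussian log-likelihood of the offline transitions, then summarize the
    normalized weights as a mean/std over the parameter grid."""
    log_liks = np.zeros(len(thetas))
    for i, theta in enumerate(thetas):
        for s, a, s_next in transitions:
            pred = simulate_step(s, a, theta)
            log_liks[i] += -0.5 * ((s_next - pred) / obs_noise) ** 2
    # Softmax-normalize log-likelihoods into weights over the grid.
    w = np.exp(log_liks - log_liks.max())
    w /= w.sum()
    mean = float(np.dot(w, thetas))
    std = float(np.sqrt(np.dot(w, (thetas - mean) ** 2)))
    return mean, std

# Offline dataset generated by a "real" system with true parameter 0.8.
rng = np.random.default_rng(0)
true_theta = 0.8
transitions = []
s = 1.0
for _ in range(200):
    a = rng.normal()
    s_next = true_theta * s + a + 0.1 * rng.normal()
    transitions.append((s, a, s_next))
    s = s_next

mean, std = fit_parameter_distribution(transitions, np.linspace(0.0, 1.5, 151))
```

With enough offline transitions, the fitted distribution concentrates near the true parameter, which is the intuition behind the paper's consistency result (ii).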
Problem

Research questions and friction points this paper is trying to address.

Bridging sim-to-real gap using offline domain randomization
Theoretical analysis of ODR's consistency and error bounds
Enhancing DROPO with entropy for robust zero-shot transfer
Innovation

Methods, ideas, or system contributions that make the work stand out.

Offline domain randomization fits simulator parameters
Maximum-likelihood estimation ensures consistent dynamics
Entropy bonus prevents variance collapse in E-DROPO
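The variance-collapse point can be illustrated with a toy objective: a data-fit term alone rewards shrinking the randomization distribution's spread toward zero, while an entropy-style bonus (here the log-std term of a Gaussian's differential entropy, with an assumed weight `beta`) restores an interior optimum. The quadratic fit term below is purely illustrative, not DROPO's actual likelihood:

```python
import numpy as np

def fit_term(std):
    # Illustrative data-fit term: a tighter (lower-variance) randomization
    # matches the offline data better, so the fit alone pushes std toward 0.
    return -std**2

def edropo_objective(std, beta):
    # Entropy of N(mu, std^2) is 0.5*log(2*pi*e*std^2); up to constants
    # that is log(std). The bonus counteracts variance collapse.
    return fit_term(std) + beta * np.log(std)

stds = np.linspace(1e-3, 1.0, 1000)
best_no_bonus = stds[np.argmax(edropo_objective(stds, beta=0.0))]
best_with_bonus = stds[np.argmax(edropo_objective(stds, beta=0.1))]
# Analytic optimum with the bonus: d/dstd (-std^2 + beta*log std) = 0
# gives std* = sqrt(beta / 2), so the randomization stays broad.
```

Without the bonus the maximizer sits at the smallest allowed std (collapse); with `beta = 0.1` it moves to roughly `sqrt(0.05) ≈ 0.22`.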