🤖 AI Summary
This work addresses the simulation-to-reality (sim-to-real) gap arising from complex, dynamic, and non-smooth physical contacts by proposing an implicit alignment framework. Leveraging an off-the-shelf simulator as a prior, the method uniquely integrates real tactile contact information into neural dynamics modeling to construct a contact-aware forward dynamics network, enabling data-driven correction of simulated states. By explicitly capturing the discontinuities inherent in contact interactions, the approach significantly improves state prediction accuracy and substantially enhances the execution performance of policies trained purely in simulation when deployed in real-world environments.
📝 Abstract
High-fidelity physics simulation is essential for scalable robotic learning, but the sim-to-real gap persists, especially for tasks involving complex, dynamic, and discontinuous interactions such as physical contacts. Explicit system identification, which tunes simulator parameters directly, is often insufficient to align the intricate, high-dimensional, and state-dependent dynamics of the real world. To overcome this, we propose an implicit sim-to-real alignment framework that learns to directly align the simulator's dynamics using contact information. Our method treats the off-the-shelf simulator as a base prior and learns a contact-aware neural dynamics model to refine simulated states using real-world observations. We show that tactile contact information from robotic hands can effectively model the non-smooth discontinuities inherent in contact-rich tasks, yielding a neural dynamics model grounded in real-world data. We demonstrate that this learned forward dynamics model improves state prediction accuracy and can be used to predict policy performance and refine policies trained purely in standard simulators, offering a scalable, data-driven approach to sim-to-real alignment.
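The core idea — using the simulator as a base prior and learning a contact-aware correction on top of it — can be sketched as a residual dynamics model. The sketch below is illustrative only: `sim_step`, the linear residual model, and all dimensions are hypothetical stand-ins, not the paper's actual architecture, which the abstract does not specify.

```python
import numpy as np

rng = np.random.default_rng(0)


def sim_step(state, action):
    """Stand-in for one forward step of an off-the-shelf simulator (hypothetical)."""
    return state + 0.1 * action


class ResidualDynamics:
    """Toy contact-aware correction: next_state = sim_step(s, a) + f(s, a, tactile).

    A real implementation would use a neural network trained on real-world
    rollouts; a linear map stands in for it here to keep the sketch minimal.
    """

    def __init__(self, state_dim, action_dim, tactile_dim):
        in_dim = state_dim + action_dim + tactile_dim
        # Small random weights play the role of learned parameters.
        self.W = 0.01 * rng.standard_normal((state_dim, in_dim))

    def correction(self, state, action, tactile):
        # Tactile readings let the correction depend on contact state,
        # which is where simulators typically diverge from reality.
        x = np.concatenate([state, action, tactile])
        return self.W @ x

    def predict(self, state, action, tactile):
        # Simulator prior plus learned contact-aware residual.
        return sim_step(state, action) + self.correction(state, action, tactile)
```

In use, the residual would be fit by regressing `real_next_state - sim_step(state, action)` on `(state, action, tactile)` pairs collected on the real robot, so the simulator handles the smooth bulk of the dynamics and the learned term absorbs contact discontinuities.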