🤖 AI Summary
Multimodal robot policy learning suffers from a significant simulation-to-reality (Sim2Real) gap for modalities such as audio that are difficult to simulate physically. Method: The paper proposes a generative multimodal simulation framework that integrates large generative models with physics-based simulation. Specifically, it embeds, for the first time, a conditional generative adversarial network (cGAN) within a physics engine, using rendered video frames as the condition to synthesize high-fidelity, temporally synchronized audio, enabling audiovisual simulation without any real robot data. Contribution/Results: The approach enables fully simulation-based multimodal policy learning and achieves zero-shot Sim2Real transfer on a dynamic real-robot water-pouring task, generalizing to unseen containers and liquids after training only in simulation. The results substantially narrow the multimodal Sim2Real gap and suggest a new paradigm for complex perception-action coordination.
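The paper's code is not shown here; as a shape-level illustration of the conditioning pattern the summary describes (a cGAN that synthesizes audio conditioned on rendered video frames), the toy numpy sketch below stands in for the real networks. All dimensions, weight shapes, and function names are hypothetical, and the linear maps are placeholders for trained generator/discriminator networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): a clip of T rendered
# frames is encoded to one feature vector per frame, and the generator
# emits one audio chunk per frame, keeping the synthesized audio
# temporally aligned with the simulated video.
T, FRAME_FEAT, NOISE, AUDIO_CHUNK = 16, 64, 32, 256

# Toy linear "networks" standing in for the conditional generator G
# and discriminator D of a cGAN.
W_g = rng.normal(0.0, 0.01, (FRAME_FEAT + NOISE, AUDIO_CHUNK))
W_d = rng.normal(0.0, 0.01, (FRAME_FEAT + AUDIO_CHUNK, 1))

def generator(frame_feats, noise):
    # Condition on the video by concatenating frame features with the
    # noise vector, the standard cGAN conditioning mechanism.
    x = np.concatenate([frame_feats, noise], axis=-1)
    return np.tanh(x @ W_g)  # audio chunks in (-1, 1)

def discriminator(frame_feats, audio_chunks):
    # Scores whether the audio is realistic *given* the video condition.
    x = np.concatenate([frame_feats, audio_chunks], axis=-1)
    return 1.0 / (1.0 + np.exp(-(x @ W_d)))  # sigmoid real/fake score

frame_feats = rng.normal(size=(T, FRAME_FEAT))   # encoded sim frames
noise = rng.normal(size=(T, NOISE))
fake_audio = generator(frame_feats, noise)       # (T, AUDIO_CHUNK)
scores = discriminator(frame_feats, fake_audio)  # (T, 1)
```

A real system would replace the linear maps with convolutional or transformer encoders and train the pair adversarially; the sketch only shows how per-frame video features drive temporally synchronized audio generation.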
📝 Abstract
Robots must integrate multiple sensory modalities to act effectively in the real world. Yet learning such multimodal policies at scale remains challenging. Simulation offers a viable solution, but while vision has benefited from high-fidelity simulators, other modalities (e.g., sound) can be notoriously difficult to simulate. As a result, sim-to-real transfer has succeeded primarily in vision-based tasks, with multimodal transfer still largely unrealized. In this work, we tackle these challenges by introducing MultiGen, a framework that integrates large-scale generative models into traditional physics simulators, enabling multisensory simulation. We showcase our framework on the dynamic task of robot pouring, which inherently relies on multimodal feedback. By synthesizing realistic audio conditioned on simulation video, our method enables training on rich audiovisual trajectories -- without any real robot data. We demonstrate effective zero-shot transfer to real-world pouring with novel containers and liquids, highlighting the potential of generative modeling to both simulate hard-to-model modalities and close the multimodal sim-to-real gap.