MultiGen: Using Multimodal Generation in Simulation to Learn Multimodal Policies in Real

📅 2025-07-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal robot policy learning suffers from a significant simulation-to-reality (Sim2Real) gap for modalities such as audio, which are difficult to simulate physically. Method: The paper proposes MultiGen, a generative multimodal simulation framework that integrates large-scale generative models into a traditional physics simulator. Rendered simulation video conditions a generative model that synthesizes realistic, temporally synchronized audio, yielding rich audiovisual training trajectories without any real robot data. Contribution/Results: The framework supports fully simulation-based multimodal policy learning and achieves zero-shot Sim2Real transfer on a dynamic real-robot pouring task, generalizing to novel containers and liquids. The results indicate that generative modeling can both stand in for hard-to-simulate modalities and narrow the multimodal Sim2Real gap.
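
The pipeline described above can be pictured as a two-stage loop: a physics simulator produces video and robot state, and a pretrained video-conditioned generative model fills in the hard-to-simulate audio channel afterward. Below is a minimal Python sketch of that idea; the `sim`, `policy`, and `audio_model` objects and their methods are hypothetical stand-ins, not the paper's actual API.

```python
# Hedged sketch of a MultiGen-style data pipeline. All object interfaces
# (sim.reset/step/render, audio_model.generate) are illustrative assumptions.
from dataclasses import dataclass, field
import numpy as np


@dataclass
class AudiovisualTrajectory:
    """One simulated rollout with synchronized video, actions, and audio."""
    frames: list = field(default_factory=list)    # RGB frames from the simulator
    actions: list = field(default_factory=list)   # robot actions at each step
    audio: np.ndarray | None = None               # waveform synthesized post hoc


def collect_trajectory(sim, policy, audio_model, horizon=200, fps=30):
    """Roll out a pouring episode in simulation, then synthesize audio.

    Physics gives us video and state for free; the missing modality (sound)
    is generated by a large pretrained video-to-audio model conditioned on
    the rendered frames.
    """
    traj = AudiovisualTrajectory()
    obs = sim.reset()
    for _ in range(horizon):
        action = policy(obs)
        obs = sim.step(action)
        traj.frames.append(sim.render())  # RGB frame used as conditioning
        traj.actions.append(action)
    # Generate a waveform conditioned on the full frame sequence, so the
    # audio stays temporally aligned with the video.
    traj.audio = audio_model.generate(traj.frames, fps=fps)
    return traj
```

Generating audio after the rollout, conditioned on the whole frame sequence, is what keeps the two modalities synchronized without ever recording real sound.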

📝 Abstract
Robots must integrate multiple sensory modalities to act effectively in the real world. Yet, learning such multimodal policies at scale remains challenging. Simulation offers a viable solution, but while vision has benefited from high-fidelity simulators, other modalities (e.g. sound) can be notoriously difficult to simulate. As a result, sim-to-real transfer has succeeded primarily in vision-based tasks, with multimodal transfer still largely unrealized. In this work, we tackle these challenges by introducing MultiGen, a framework that integrates large-scale generative models into traditional physics simulators, enabling multisensory simulation. We showcase our framework on the dynamic task of robot pouring, which inherently relies on multimodal feedback. By synthesizing realistic audio conditioned on simulation video, our method enables training on rich audiovisual trajectories -- without any real robot data. We demonstrate effective zero-shot transfer to real-world pouring with novel containers and liquids, highlighting the potential of generative modeling to both simulate hard-to-model modalities and close the multimodal sim-to-real gap.
Problem

Research questions and friction points this paper is trying to address.

Integrating multiple sensory modalities for effective real-world robot action
Simulating non-visual modalities such as sound, which are notoriously difficult to model
Closing the multimodal sim-to-real transfer gap
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates large-scale generative models into traditional physics simulators
Synthesizes realistic, temporally synchronized audio conditioned on simulation video
Enables zero-shot transfer of a multimodal pouring policy to the real world (a hedged policy sketch follows below)
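
On the policy side, the points above imply a controller that fuses the visual and auditory streams at decision time. The following is a minimal PyTorch sketch of one plausible late-fusion design; the encoder sizes, input shapes, and fusion scheme are illustrative assumptions, not the paper's reported architecture.

```python
# Hedged sketch of a visuo-auditory policy, assuming PyTorch. The specific
# CNN encoders and late-fusion head are assumptions for illustration.
import torch
import torch.nn as nn


class AudioVisualPolicy(nn.Module):
    def __init__(self, action_dim=7, feat_dim=128):
        super().__init__()
        # Vision encoder: small CNN over RGB frames (B x 3 x H x W).
        self.vision = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Audio encoder: CNN over log-mel spectrograms (B x 1 x mels x time).
        self.audio = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Late fusion: concatenate modality features, predict an action.
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, frame, spectrogram):
        fused = torch.cat([self.vision(frame), self.audio(spectrogram)], dim=-1)
        return self.head(fused)


# Example usage with dummy inputs (batch of 1):
# policy = AudioVisualPolicy()
# action = policy(torch.randn(1, 3, 96, 96), torch.randn(1, 1, 64, 100))
```

Because the same audio-generation model supplies the sound channel throughout training, the policy never sees a modality mismatch between simulation and deployment beyond the generative model's own fidelity.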