CRISP: Contact-Guided Real2Sim from Monocular Video with Planar Scene Primitives

📅 2025-12-16
🤖 AI Summary
Existing monocular video-based methods for human motion and scene geometry reconstruction suffer from strong reliance on data priors, absence of physical constraints, and severe geometric noise, leading to frequent motion tracking failures. This paper proposes a contact-aware planar primitive reconstruction paradigm: first, clean and convexity-guaranteed scene geometry is constructed via clustering of depth, surface normals, and optical flow followed by robust plane fitting; second, occluded scene structures are inferred from estimated human poses and completed under contact guidance; third, physics-consistent human-scene interaction is simulated via reinforcement learning (RL). To our knowledge, this is the first work to jointly optimize geometry reconstruction, occlusion reasoning, and physics-based simulation. On EMDB and PROX, motion tracking failure rates drop from 55.2% to 6.9%, and RL simulation throughput increases by 43%. The framework enables end-to-end Real2Sim translation for in-the-wild and Sora-generated videos.
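The first stage of the summarized pipeline, clustering points by geometric cues and then robustly fitting planes, can be illustrated with a toy sketch. The snippet below clusters a point cloud by quantized surface normals and fits a least-squares plane per cluster via SVD; it omits the depth and optical-flow cues the paper uses, and all function names and thresholds here are illustrative assumptions, not CRISP's implementation.

```python
import numpy as np

def fit_plane_svd(points):
    """Least-squares plane through `points`: returns (unit normal n, offset d)
    such that n . x + d = 0 for points x on the plane."""
    centroid = points.mean(axis=0)
    # The smallest right-singular vector of the centered cloud is the normal.
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    return n, -n @ centroid

def planar_primitives(points, normals, angle_bins=6, min_pts=50, inlier_tol=0.02):
    """Group points by coarsely quantized normal direction, fit one plane per
    sufficiently large group, and keep it only if the group is near-planar.
    A toy stand-in for the clustering + robust fitting stage."""
    bins = np.round(normals * angle_bins).astype(int)  # quantize directions
    planes = []
    for key in {tuple(b) for b in bins}:
        mask = (bins == key).all(axis=1)
        if mask.sum() < min_pts:
            continue
        n, d = fit_plane_svd(points[mask])
        dist = np.abs(points[mask] @ n + d)  # point-to-plane distances
        if np.median(dist) < inlier_tol:
            planes.append((n, d))
    return planes
```

On a cloud containing a floor and a wall, this recovers two plane primitives; a real pipeline would add RANSAC-style outlier rejection and merge co-planar clusters.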

๐Ÿ“ Abstract
We introduce CRISP, a method that recovers simulatable human motion and scene geometry from monocular video. Prior work on joint human-scene reconstruction relies on data-driven priors and joint optimization with no physics in the loop, or recovers noisy geometry with artifacts that cause motion tracking policies with scene interactions to fail. In contrast, our key insight is to recover convex, clean, and simulation-ready geometry by fitting planar primitives to a point cloud reconstruction of the scene, via a simple clustering pipeline over depth, normals, and flow. To reconstruct scene geometry that might be occluded during interactions, we make use of human-scene contact modeling (e.g., we use human posture to reconstruct the occluded seat of a chair). Finally, we ensure that human and scene reconstructions are physically-plausible by using them to drive a humanoid controller via reinforcement learning. Our approach reduces motion tracking failure rates from 55.2% to 6.9% on human-centric video benchmarks (EMDB, PROX), while delivering a 43% faster RL simulation throughput. We further validate it on in-the-wild videos including casually-captured videos, Internet videos, and even Sora-generated videos. This demonstrates CRISP's ability to generate physically-valid human motion and interaction environments at scale, greatly advancing real-to-sim applications for robotics and AR/VR.
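The abstract's final stage drives a humanoid controller with RL to make the reconstruction physically plausible. A common ingredient of such motion-imitation setups is an exponential pose-tracking reward (DeepMimic-style); the sketch below is a generic stand-in for that idea, with the gain `k` an illustrative assumption rather than CRISP's actual reward.

```python
import numpy as np

def tracking_reward(sim_joints, ref_joints, k=10.0):
    """Exponential pose-tracking reward: 1.0 for perfect tracking of the
    reference joint positions, decaying toward 0 as the mean squared
    per-joint error grows. `sim_joints`/`ref_joints` are (J, 3) arrays."""
    err = np.mean(np.sum((sim_joints - ref_joints) ** 2, axis=-1))
    return np.exp(-k * err)
```

A full controller would combine several such terms (root pose, joint velocities, contact matching) and feed them to a policy-gradient learner inside the physics simulator.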
Problem

Research questions and friction points this paper is trying to address.

How to recover simulatable human motion and scene geometry from monocular video.
How to reconstruct clean, simulation-ready geometry despite geometric noise and interaction-induced occlusion.
How to ensure reconstructions are physically plausible enough for robotics and AR/VR applications.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fits planar primitives to point clouds via clustering for clean geometry.
Uses human-scene contact modeling to reconstruct occluded scene parts.
Ensures physical plausibility by driving a humanoid controller with RL.
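The contact-modeling idea above (e.g., recovering a chair seat hidden behind the sitting person) can be sketched minimally: detect frames where the pelvis is nearly static, and hypothesize a horizontal support plane just below its height. The function name, the velocity threshold, and the pelvis-to-seat offset are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def infer_seat_plane(pelvis_traj, vel_thresh=0.02, drop=0.05):
    """Hypothesize an occluded horizontal support plane from a pelvis
    trajectory (T, 3): find near-static (sitting) frames and place a
    plane `drop` meters below the median pelvis height there.
    Returns (upward normal n, offset d) with n . x + d = 0, or None."""
    vel = np.linalg.norm(np.diff(pelvis_traj, axis=0), axis=1)
    static = vel < vel_thresh
    if not static.any():
        return None  # no sitting segment detected
    seat_height = np.median(pelvis_traj[1:][static, 2]) - drop
    return np.array([0.0, 0.0, 1.0]), -seat_height
```

In a full system the inferred plane would then be merged with the visible planar primitives before simulation, so the humanoid has geometry to sit on.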