🤖 AI Summary
Human-Scene Interaction (HSI) faces core challenges including low fidelity in long-horizon behavior generation, poor cross-scene generalization, and physically implausible motion. To address these, we propose FantasyHSI, a video-generation-based multi-agent framework built on a dynamic directed graph and comprising navigation, planning, and critic agents. High-level path planning and atomic-action decomposition ensure task-level logical consistency; a dedicated critic agent establishes a closed-loop feedback mechanism that suppresses trajectory drift; and Direct Preference Optimization (DPO) refines the action generator, substantially mitigating foot-sliding and limb-deformation artifacts. On our newly constructed SceneBench benchmark, the method significantly outperforms state-of-the-art approaches on three key metrics: long-horizon task completion rate, cross-scene generalization, and physical realism of the synthesized motion.
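To make the closed-loop structure concrete, below is a minimal sketch (not the authors' code) of the navigate → plan → generate → critique cycle over a dynamic directed graph. All class names, thresholds, and the stand-in "generator" are illustrative assumptions; the real system uses a video generation model and LLM-based agents.

```python
# Hypothetical sketch of the multi-agent loop: a navigator proposes waypoints,
# a planner decomposes each waypoint into atomic actions, a generator synthesizes
# each action, and a critic measures deviation from the plan, triggering
# regeneration when drift is too large. Names and logic are assumptions.

import random
from dataclasses import dataclass, field


@dataclass
class GraphNode:
    """One node of the dynamic directed interaction graph: a scene state plus
    the atomic action taken to leave it."""
    state: str
    action: str | None = None
    children: list["GraphNode"] = field(default_factory=list)


def navigate(goal: str) -> list[str]:
    # Stand-in for the scene-navigator agent: high-level waypoints toward the goal.
    return [f"waypoint_{i}_{goal}" for i in range(3)]


def plan(waypoint: str) -> list[str]:
    # Stand-in for the planning agent: decompose a waypoint into atomic actions.
    return [f"walk_to({waypoint})", f"interact({waypoint})"]


def generate(action: str) -> float:
    # Stand-in for the video/action generator; returns a simulated drift score.
    return random.random()


def critic(drift: float, threshold: float = 0.8) -> bool:
    # Critic agent: accept the generated clip only if drift stays below the bound.
    return drift < threshold


def run_episode(goal: str, max_retries: int = 3) -> GraphNode:
    root = GraphNode(state="start")
    node = root
    for waypoint in navigate(goal):
        for action in plan(waypoint):
            for _attempt in range(max_retries):
                drift = generate(action)
                if critic(drift):
                    child = GraphNode(state=waypoint, action=action)
                    node.children.append(child)
                    node = child
                    break
                # Closed-loop feedback: drift too large, regenerate this action.
    return root


if __name__ == "__main__":
    run_episode("sit_on_sofa")
```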
📝 Abstract
Human-Scene Interaction (HSI) seeks to generate realistic human behaviors within complex environments, yet it faces significant challenges in handling long-horizon, high-level tasks and generalizing to unseen scenes. To address these limitations, we introduce FantasyHSI, a novel HSI framework centered on video generation and multi-agent systems that operates without paired data. We model the complex interaction process as a dynamic directed graph, upon which we build a collaborative multi-agent system. This system comprises a scene navigator agent for environmental perception and high-level path planning, and a planning agent that decomposes long-horizon goals into atomic actions. Critically, we introduce a critic agent that establishes a closed-loop feedback mechanism by evaluating the deviation between generated actions and the planned path. This allows for the dynamic correction of trajectory drifts caused by the stochasticity of the generative model, thereby ensuring long-term logical consistency. To enhance the physical realism of the generated motions, we leverage Direct Preference Optimization (DPO) to train the action generator, significantly reducing artifacts such as limb distortion and foot-sliding. Extensive experiments on our custom SceneBench benchmark demonstrate that FantasyHSI significantly outperforms existing methods in terms of generalization, long-horizon task completion, and physical realism. Project page: https://fantasy-amap.github.io/fantasy-hsi/
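For reference, DPO fine-tunes a generator directly on preference pairs without a separate reward model. The standard DPO objective is reproduced below; how FantasyHSI constructs the pairs (e.g., ranking generated clips by the severity of foot-sliding and limb distortion) is an assumption on our part and is not specified in the abstract.

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) =
-\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\!\left[
\log \sigma\!\left(
\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
- \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
\right)\right]
```

Here $y_w$ and $y_l$ are the preferred and dispreferred generations for condition $x$, $\pi_{\mathrm{ref}}$ is the frozen reference generator, and $\beta$ controls how far the fine-tuned policy $\pi_\theta$ may drift from the reference.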