🤖 AI Summary
This work tackles the limited interaction accuracy and temporal consistency of long-horizon interactive video world models during exploration by proposing a reinforcement learning (RL) post-training framework. The approach introduces a clip-level rollout strategy, complementary reward functions that jointly optimize interaction accuracy and visual quality, and an efficient RL algorithm built on negative-aware fine-tuning. Experiments on the WorldPlay model demonstrate that the proposed method significantly improves the accuracy of interactive responses and the visual fidelity of generated videos, effectively mitigates reward hacking, and enhances temporal coherence in long-duration video generation.
📝 Abstract
This work presents WorldCompass, a novel Reinforcement Learning (RL) post-training framework for long-horizon, interactive video-based world models, enabling them to explore the world more accurately and consistently based on interaction signals. To effectively "steer" the world model's exploration, we introduce three core innovations tailored to the autoregressive video generation paradigm: 1) Clip-Level Rollout Strategy: we generate and evaluate multiple samples for a single target clip, which significantly boosts rollout efficiency and provides fine-grained reward signals. 2) Complementary Reward Functions: we design reward functions for both interaction-following accuracy and visual quality, which provide direct supervision and effectively suppress reward-hacking behaviors. 3) Efficient RL Algorithm: we employ a negative-aware fine-tuning strategy coupled with various efficiency optimizations to efficiently and effectively enhance model capability. Evaluations on the SoTA open-source world model, WorldPlay, demonstrate that WorldCompass significantly improves interaction accuracy and visual fidelity across various scenarios.
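To make the clip-level rollout and complementary reward ideas concrete, here is a minimal Python sketch. It is not the paper's implementation: `world_model.generate_clip`, `action_reward_fn`, and `quality_reward_fn` are hypothetical interfaces, and the linear reward weighting is an illustrative assumption.

```python
def clip_level_rollout(world_model, context, actions, num_samples=8):
    """Sample several candidate clips for ONE target clip, all conditioned on
    the same context and interaction signals, rather than rolling out a
    full-length video per sample (hypothetical interface)."""
    return [world_model.generate_clip(context, actions) for _ in range(num_samples)]


def complementary_reward(clip, actions, action_reward_fn, quality_reward_fn,
                         w_action=0.5, w_quality=0.5):
    """Combine interaction-following accuracy with visual quality so that
    optimizing either term alone cannot hack the overall reward
    (illustrative linear weighting; not the paper's exact formulation)."""
    r_action = action_reward_fn(clip, actions)   # did the clip follow the interaction signal?
    r_quality = quality_reward_fn(clip)          # perceptual / fidelity score
    return w_action * r_action + w_quality * r_quality
```

Scoring many candidates for one clip both amortizes the cost of long rollouts and yields per-clip reward contrasts that a full-video reward could not provide.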
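Similarly, a negative-aware fine-tuning objective might use below-baseline samples to actively push probability mass away from poor clips instead of discarding them. The group-mean baseline and `neg_weight` scaling in the sketch below are assumptions for illustration, not the method's confirmed details.

```python
import torch

def negative_aware_loss(log_probs, rewards, neg_weight=0.5):
    """Reinforce clips with above-baseline reward and actively suppress
    (rather than discard) below-baseline clips, down-weighted by
    neg_weight. Baseline and weighting are illustrative assumptions."""
    rewards = torch.as_tensor(rewards, dtype=torch.float32)
    adv = rewards - rewards.mean()                     # group-relative advantage
    scaled = torch.where(adv >= 0, adv, neg_weight * adv)
    return -(scaled * log_probs).mean()                # ascend weighted log-likelihood
```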