SPIRAL: A Closed-Loop Framework for Self-Improving Action World Models via Reflective Planning Agents

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of weak semantic coherence, incomplete action execution, and temporal drift that arise when long-horizon videos are generated with open-loop models. To overcome these limitations, the authors propose ActWM, a closed-loop, self-improving action world model built on a "plan–execute–reflect" framework. Specifically, a PlanAgent decomposes high-level semantic actions into object-centric sub-actions, while a CriticAgent provides iterative evaluation feedback enriched with long-term memory. The system is further refined through reinforcement learning to enable continuous optimization. This approach significantly improves semantic alignment and temporal consistency, delivering consistent gains across multiple text-to-video backbone architectures. The effectiveness of ActWM is validated on newly curated resources, the ActWM-Dataset and ActWM-Bench.

📝 Abstract
We introduce SPIRAL, a closed-loop framework for self-improving planning and iterative reflective action world modeling that enables controllable long-horizon video generation conditioned on high-level semantic actions. Existing one-shot video generation models operate open-loop, often resulting in incomplete action execution, weak semantic grounding, and temporal drift. SPIRAL formulates ActWM as a closed-loop think-act-reflect process in which generation proceeds step by step under explicit planning and feedback. A PlanAgent decomposes abstract actions into object-centric sub-actions, while a CriticAgent evaluates intermediate results and guides iterative refinement with long-horizon memory. This closed-loop design naturally supports reinforcement-learning-driven optimization, improving semantic alignment and temporal consistency over extended horizons. We further introduce the ActWM-Dataset and ActWM-Bench for training and evaluation. Experiments across multiple TI2V backbones demonstrate consistent gains on ActWM-Bench and mainstream video generation benchmarks, validating SPIRAL's effectiveness.
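The think-act-reflect loop described above can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: all names (`plan`, `generate_clip`, `critique`, `spiral_rollout`), the scoring threshold, and the stub behaviors are invented for exposition; the real system uses learned agents and a video diffusion backbone.

```python
from dataclasses import dataclass, field

@dataclass
class Critique:
    score: float   # hypothetical semantic-alignment score in [0, 1]
    feedback: str  # textual guidance used for refinement

@dataclass
class Memory:
    """Stand-in for long-horizon memory: records accepted sub-actions."""
    entries: list = field(default_factory=list)

def plan(action: str) -> list[str]:
    # PlanAgent stand-in: decompose a high-level action into
    # object-centric sub-actions (here, a fixed toy decomposition).
    return [f"{action}: step {i}" for i in range(1, 4)]

def generate_clip(sub_action: str, feedback: str) -> str:
    # Backbone stand-in: "render" a clip, conditioned on critic feedback.
    return f"clip[{sub_action} | {feedback or 'no feedback'}]"

def critique(clip: str, memory: Memory) -> Critique:
    # CriticAgent stand-in: first attempts (no feedback yet) score low,
    # so one refinement pass is always triggered in this toy setup.
    ok = "no feedback" not in clip
    return Critique(score=1.0 if ok else 0.4,
                    feedback="" if ok else "tighten object grounding")

def spiral_rollout(action: str, threshold: float = 0.8, max_retries: int = 2):
    memory, clips = Memory(), []
    for sub in plan(action):                      # think: plan sub-actions
        feedback = ""
        for _ in range(max_retries + 1):
            clip = generate_clip(sub, feedback)   # act: generate a clip
            result = critique(clip, memory)       # reflect: evaluate it
            if result.score >= threshold:
                break
            feedback = result.feedback            # iterative refinement
        memory.entries.append((sub, result.score))
        clips.append(clip)
    return clips, memory
```

In this sketch each sub-action is regenerated until the critic's score clears a threshold, and accepted results are appended to memory; the paper's reinforcement-learning refinement would sit on top of such a loop, using critic feedback as a training signal.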
Problem

Research questions and friction points this paper is trying to address.

video generation
long-horizon
semantic actions
temporal consistency
action execution
Innovation

Methods, ideas, or system contributions that make the work stand out.

closed-loop planning
self-improving world models
reflective agents
long-horizon video generation
semantic action grounding