Dyna-Think: Synergizing Reasoning, Acting, and World Model Simulation in AI Agents

📅 2025-05-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address open questions about which cognitive behaviors help, and which are missing, in long-horizon AI agent tasks, this paper proposes Dyna-Think, a thinking framework that integrates reasoning, action execution, and internal world-model simulation. The authors introduce a two-stage training paradigm: Dyna-Think Imitation Learning (DIT) and Dyna-Think Dyna Training (DDT). They demonstrate that using critique generation as the objective for world-model training improves policy performance, and observe a positive correlation between an agent's world-modeling ability and its task performance. Experiments on OSWorld show consistent in-domain and out-of-domain improvements, achieving best-of-n performance comparable to DeepSeek-R1 while generating roughly half as many tokens on average.

📝 Abstract
Recent progress in reasoning with large language models (LLMs), such as DeepSeek-R1, demonstrates impressive capabilities in domains like mathematics and coding, by exhibiting complex cognitive behaviors such as verification, goal decomposition, and self-reflection. However, it is unclear which behaviors are effective and which are missing for long-horizon AI agent tasks. In this work, we propose Dyna-Think, a thinking framework that integrates planning with an internal world model together with reasoning and acting to enhance AI agent performance. To enable Dyna-Think, we propose Dyna-Think Imitation Learning (DIT) and Dyna-Think Dyna Training (DDT). To initialize a policy with Dyna-Think, DIT reconstructs the thinking process of R1 to focus on performing world-model simulation relevant to the proposed (and planned) action, and trains the policy using this reconstructed data. To enhance Dyna-Think, DDT uses a two-stage training process to first improve the agent's world-modeling ability via objectives such as state prediction or critique generation, and then improve the agent's actions via policy training. We evaluate our methods on OSWorld, and demonstrate that Dyna-Think improves the agent's in-domain and out-of-domain performance, achieving best-of-n performance similar to R1 while generating 2x fewer tokens on average. Our extensive empirical studies reveal that 1) using critique generation for world-model training is effective in improving policy performance; and 2) agents with better performance tend to have better world-modeling abilities. We believe our results suggest a promising research direction: integrating world-model simulation into AI agents to enhance their reasoning, planning, and acting capabilities.
Problem

Research questions and friction points this paper is trying to address.

Enhancing AI agent performance by integrating reasoning, acting, and world-model simulation
Identifying which cognitive behaviors are effective for long-horizon AI tasks via the Dyna-Think framework
Improving agent efficiency by reducing token generation while maintaining performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates planning via internal world-model simulation with reasoning and acting
Uses Dyna-Think Imitation Learning (DIT) to initialize the policy from reconstructed R1 traces
Employs two-stage Dyna-Think Dyna Training (DDT): world-model objectives, then policy training
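The Dyna-style loop behind these contributions can be sketched in a few lines: the agent proposes candidate actions, simulates each outcome with an internal world model, critiques the predicted states, and only then acts. The toy functions below (`world_model`, `critique`, `dyna_think_step`) are hypothetical stand-ins for illustration, not the paper's actual implementation.

```python
def world_model(state, action):
    """Toy internal simulator: predicts the next state for an action."""
    return state + action

def critique(predicted_state, goal):
    """Toy critic: scores a predicted state by its distance to the goal (lower is better)."""
    return abs(goal - predicted_state)

def dyna_think_step(state, goal, candidate_actions):
    """Pick the candidate whose simulated outcome the critic scores best, then act."""
    best_action = min(
        candidate_actions,
        key=lambda a: critique(world_model(state, a), goal),
    )
    return world_model(state, best_action), best_action

# Roll out the loop on a one-dimensional toy task: reach goal=3 from state=0.
state, goal = 0, 3
trajectory = []
while state != goal:
    state, action = dyna_think_step(state, goal, [-1, +1])
    trajectory.append(action)
# trajectory == [1, 1, 1]
```

In the paper's DDT stage, the world model and the policy are trained separately (e.g., via state prediction or critique generation, then policy optimization); here both are fixed toy functions purely to show the simulate-critique-act control flow.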