🤖 AI Summary
Existing multimodal agents predominantly rely on reactive decision-making, lacking the capacity for forward-looking reasoning and thus struggling with complex, multi-step tasks. This work proposes TraceR1, a novel framework that introduces trajectory-level prospective planning into multimodal agents for the first time. TraceR1 employs a two-stage reinforcement learning approach to optimize long-horizon objectives: it first learns to predict short-horizon trajectories under rewards that enforce global consistency, then refines individual actions using grounded execution feedback from frozen tool-specific agents. Evaluated across seven benchmarks spanning online and offline computer use as well as multimodal tool reasoning, TraceR1 significantly outperforms both reactive and single-stage baselines, demonstrating improved planning stability, execution robustness, and generalization.
📝 Abstract
Recent advances in multimodal agents have improved computer-use interaction and tool use, yet most existing systems remain reactive, optimizing actions in isolation without reasoning about future states or long-term goals. This limits planning coherence and prevents agents from reliably solving high-level, multi-step tasks. We introduce TraceR1, a two-stage reinforcement learning framework that explicitly trains anticipatory reasoning by forecasting short-horizon trajectories before execution. The first stage performs trajectory-level reinforcement learning with rewards that enforce global consistency across predicted action sequences. The second stage applies grounded reinforcement fine-tuning, using execution feedback from frozen tool agents to refine step-level accuracy and executability. TraceR1 is evaluated across seven benchmarks covering online computer use, offline computer use, and multimodal tool-use reasoning, where it achieves substantial improvements in planning stability, execution robustness, and generalization over reactive and single-stage baselines. These results show that anticipatory trajectory reasoning is a key principle for building multimodal agents that can reason, plan, and act effectively in complex real-world environments.
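The two-stage scheme described above can be illustrated with a toy sketch: a policy first predicts a short-horizon action trajectory and is rewarded for global consistency (stage 1), then is fine-tuned with executability feedback from a frozen tool agent (stage 2). All names, rewards, and the environment here are illustrative assumptions for exposition; the abstract does not specify TraceR1's actual implementation.

```python
# Conceptual sketch of a two-stage RL loop in the spirit of TraceR1.
# Everything below (action set, goal plan, reward definitions, update rule)
# is a hypothetical stand-in, not the paper's method.
import random

HORIZON = 3
ACTIONS = ["open_app", "click_target", "type_text", "idle"]
GOAL_PLAN = ["open_app", "click_target", "type_text"]  # assumed goal-consistent plan

def sample_trajectory(policy, rng):
    """Anticipatory step: predict a short-horizon trajectory before execution."""
    return [
        rng.choices(ACTIONS, weights=[policy[t][a] for a in ACTIONS])[0]
        for t in range(HORIZON)
    ]

def consistency_reward(traj):
    """Stage 1: trajectory-level reward enforcing global consistency
    (toy version: agreement with the assumed goal-consistent plan)."""
    return sum(a == g for a, g in zip(traj, GOAL_PLAN)) / HORIZON

def frozen_tool_feedback(traj):
    """Stage 2: grounded feedback from a frozen tool agent
    (toy executability check: 'idle' steps count as non-executable)."""
    return sum(a != "idle" for a in traj) / HORIZON

def reinforce(policy, reward_fn, episodes, lr, rng):
    """Minimal score-based update: upweight sampled actions by trajectory reward."""
    for _ in range(episodes):
        traj = sample_trajectory(policy, rng)
        r = reward_fn(traj)
        for t, a in enumerate(traj):
            policy[t][a] += lr * r

rng = random.Random(0)
policy = [{a: 1.0 for a in ACTIONS} for _ in range(HORIZON)]
reinforce(policy, consistency_reward, episodes=2000, lr=0.1, rng=rng)    # stage 1
reinforce(policy, frozen_tool_feedback, episodes=500, lr=0.05, rng=rng)  # stage 2
best_plan = [max(p, key=p.get) for p in policy]
```

The key design point mirrored here is the ordering: trajectory-level consistency is optimized first, and grounded step-level feedback only refines an already coherent plan, rather than shaping actions in isolation.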