🤖 AI Summary
To address the limited robustness of robot behavior prediction under sparse visual observations, this paper proposes an iterative counterfactual exploration framework grounded in vision-language models (VLMs). The method integrates tree-of-thought reasoning with trajectory-level counterfactual hypothesis generation and, as its key novelty, incorporates a domain-aware counterfactual critic into the VLM's reasoning loop, enabling edge-case-driven self-improvement of predictions. Unlike single-pass forward prediction, the framework performs multi-step hypothesis generation, evaluation, and refinement, substantially improving its ability to model rare and critical behaviors. Evaluated on ground-vehicle simulation and real-world autonomous marine surface vessel tasks, it improves the rare/critical behavior capture rate by over 42% compared to standard VLMs and purely model-based baselines, while also increasing prediction diversity and robustness.
📝 Abstract
Predicting the near-term behavior of a reactive agent is crucial in many robotic scenarios, yet remains challenging when observations of that agent are sparse or intermittent. Vision-Language Models (VLMs) offer a promising avenue by integrating textual domain knowledge with visual cues, but their one-shot predictions often miss important edge cases and unusual maneuvers. Our key insight is that iterative, counterfactual exploration--where a dedicated module probes each proposed behavior hypothesis, explicitly represented as a plausible trajectory, for overlooked possibilities--can significantly enhance VLM-based behavioral forecasting. We present TRACE (Tree-of-thought Reasoning And Counterfactual Exploration), an inference framework that couples tree-of-thought generation with domain-aware feedback to refine behavior hypotheses over multiple rounds. Concretely, a VLM first proposes candidate trajectories for the agent; a counterfactual critic then suggests edge-case variations consistent with partial observations, prompting the VLM to expand or adjust its hypotheses in the next iteration. This creates a self-improving cycle where the VLM progressively internalizes edge cases from previous rounds, systematically uncovering not only typical behaviors but also rare or borderline maneuvers, ultimately yielding more robust trajectory predictions from minimal sensor data. We validate TRACE on both ground-vehicle simulations and real-world marine autonomous surface vehicles. Experimental results show that our method consistently outperforms standard VLM-driven and purely model-based baselines, capturing a broader range of feasible agent behaviors despite sparse sensing. Evaluation videos and code are available at trace-robotics.github.io.
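The propose-critique-refine cycle described above can be sketched in a few lines of Python. This is only an illustrative skeleton, not the authors' implementation: the `propose` and `critique` functions below stand in for the VLM's tree-of-thought step and the counterfactual critic (which in TRACE would be model calls), replaced here with simple hand-written heuristics (constant-velocity extrapolation, and swerve/stop perturbations) so the loop structure is runnable end to end. All names (`Hypothesis`, `trace`, the perturbation labels) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    label: str         # short description of the hypothesized behavior
    trajectory: list   # predicted (x, y) waypoints at fixed time steps

def propose(observations, prior):
    """Stand-in for the VLM's tree-of-thought proposal step: extrapolate a
    nominal constant-velocity trajectory from the last two sparse (t, x, y)
    sightings, keeping hypotheses carried over from earlier rounds."""
    (t0, x0, y0), (t1, x1, y1) = observations[-2:]
    vx, vy = (x1 - x0) / (t1 - t0), (y1 - y0) / (t1 - t0)
    nominal = Hypothesis("straight",
                         [(x1 + vx * k, y1 + vy * k) for k in range(1, 4)])
    seen = {h.label for h in prior}
    return prior + ([nominal] if nominal.label not in seen else [])

def critique(hypotheses):
    """Stand-in for the counterfactual critic: for each hypothesis, emit
    edge-case variations (swerves, sudden stop) not yet in the pool."""
    perturbations = {"swerve_left": (0.0, 1.0),
                     "swerve_right": (0.0, -1.0),
                     "stop": (-1.0, 0.0)}
    seen = {h.label for h in hypotheses}
    edge_cases = []
    for h in hypotheses:
        for name, (dx, dy) in perturbations.items():
            label = f"{h.label}+{name}"
            if label in seen:
                continue
            seen.add(label)
            perturbed = [(x + dx * k, y + dy * k)
                         for k, (x, y) in enumerate(h.trajectory, 1)]
            edge_cases.append(Hypothesis(label, perturbed))
    return edge_cases

def trace(observations, rounds=2):
    """Iterative loop: the proposer expands the pool, the critic injects
    edge cases, and both feed the next round."""
    hypotheses = []
    for _ in range(rounds):
        hypotheses = propose(observations, hypotheses)
        hypotheses.extend(critique(hypotheses))
    return hypotheses

obs = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]   # sparse (t, x, y) sightings
hyps = trace(obs, rounds=2)
```

Starting from a single nominal hypothesis, two rounds of critique grow the pool to include compound edge cases (e.g. a swerve layered on a swerve), mirroring how the real framework progressively uncovers rare or borderline maneuvers rather than committing to one forward prediction.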