🤖 AI Summary
Addressing the dual challenges of temporal action modeling and spatial object-relationship understanding in long-horizon visual imitation, this paper proposes a dual-reflection agent framework that integrates high-level plan generation with low-level executable code generation. The framework introduces the first “plan–code” co-verification mechanism, which jointly enforces semantic and structural consistency to iteratively improve temporal coherence and spatial alignment while enabling error detection and self-correction. It generates both abstract plans and concrete, executable code end-to-end from video demonstrations, leveraging code as a precise, deterministic policy representation to enhance behavioral reliability. To advance the field, the authors introduce LongVILBench, the first long-sequence visual imitation benchmark, comprising 300 complex human demonstrations. Experiments show that existing methods degrade severely on this benchmark, whereas the proposed framework establishes a strong baseline across diverse, challenging tasks.
📝 Abstract
Learning from long-horizon demonstrations with complex action sequences presents significant challenges for visual imitation learning, particularly in understanding the temporal relationships among actions and the spatial relationships between objects. In this paper, we propose a new agent framework that incorporates two dedicated reflection modules to enhance both plan and code generation. The plan generation module produces an initial action sequence, which the plan reflection module then verifies to ensure temporal coherence and spatial alignment with the demonstration video. The code generation module translates the plan into executable code, while the code reflection module verifies and refines the generated code to ensure correctness and consistency with the plan. Together, these two reflection modules enable the agent to detect and correct errors in both plan and code generation, improving performance on tasks with intricate temporal and spatial dependencies. To support systematic evaluation, we introduce LongVILBench, a benchmark comprising 300 human demonstrations with action sequences of up to 18 steps. LongVILBench emphasizes temporal and spatial complexity across multiple task types. Experimental results demonstrate that existing methods perform poorly on this benchmark, whereas our new framework establishes a strong baseline for long-horizon visual imitation learning.
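The generate–reflect loop described in the abstract (plan generation → plan reflection → code generation → code reflection) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: every function name below is a hypothetical stand-in, and the two verifiers are trivial stubs that only hint at the kinds of temporal and structural checks the reflection modules perform.

```python
# Hypothetical sketch of the dual-reflection agent loop.
# All names (generate_plan, verify_plan, generate_code, verify_code)
# are illustrative stand-ins, not the paper's actual API.

def generate_plan(video):
    # Stub planner: pretend the demonstration shows a two-step stacking task.
    return ["pick(red_block)", "place(red_block, blue_block)"]

def verify_plan(plan, video):
    # Plan reflection (stub): check a simple temporal constraint,
    # e.g. a pick action must precede any place action.
    ok = len(plan) > 0 and plan[0].startswith("pick")
    return ok

def generate_code(plan):
    # Code generation (stub): translate each plan step into an executable call.
    return [f"robot.{step}" for step in plan]

def verify_code(code, plan):
    # Code reflection (stub): check structural consistency with the plan,
    # i.e. one executable line per plan step, in the same order.
    return len(code) == len(plan) and all(p in c for p, c in zip(plan, code))

def dual_reflection_agent(video, max_rounds=3):
    for _ in range(max_rounds):
        plan = generate_plan(video)
        if not verify_plan(plan, video):
            continue  # regenerate the plan using reflection feedback
        code = generate_code(plan)
        if verify_code(code, plan):
            return plan, code
    raise RuntimeError("failed to produce a verified plan/code pair")

plan, code = dual_reflection_agent(video="demo.mp4")
print(code)  # → ['robot.pick(red_block)', 'robot.place(red_block, blue_block)']
```

In a real system, the verifiers would query a vision-language model against the demonstration video and feed its critique back into regeneration; the stubs above only capture the control flow of the two nested check-and-retry stages.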