🤖 AI Summary
To address three key challenges in long-term action anticipation from first-person video (coarse hand-object interaction modeling, the semantic disconnect between verbs and nouns, and the absence of explicit cognitive reasoning), this paper proposes INSIGHT, a two-stage framework. Methodologically, INSIGHT (1) explicitly models visual features within hand-object interaction regions; (2) constructs a verb-noun co-occurrence matrix to strengthen their joint semantic representation; and (3) introduces, for the first time, a reinforcement learning-driven cognitive reasoning module that enables end-to-end mapping from perception to intention inference and long-horizon action prediction. Evaluated on Ego4D, EPIC-Kitchens-55, and EGTEA Gaze+, INSIGHT achieves state-of-the-art performance, significantly improving both prediction accuracy and cross-scenario generalization.
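The verb-noun co-occurrence matrix mentioned above can be illustrated with a minimal sketch: count (verb, noun) pairs from training annotations, row-normalize to an empirical P(noun | verb), and use that prior to re-weight noun scores once a verb is predicted. The vocabularies, counts, and the `alpha` mixing weight below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical tiny vocabularies; real ones would come from benchmark
# annotations (e.g. EPIC-Kitchens verb/noun classes).
VERBS = ["cut", "pour", "open"]
NOUNS = ["knife", "water", "door"]

# Observed (verb, noun) action labels from training annotations (made up).
actions = [("cut", "knife"), ("pour", "water"), ("open", "door"),
           ("cut", "knife"), ("pour", "water"), ("open", "water")]

# Count co-occurrences into a |V| x |N| matrix.
C = np.zeros((len(VERBS), len(NOUNS)))
for v, n in actions:
    C[VERBS.index(v), NOUNS.index(n)] += 1

# Row-normalize to an empirical P(noun | verb).
P = C / C.sum(axis=1, keepdims=True)

def reweight_noun_logits(noun_scores, verb_idx, alpha=0.5):
    """Bias noun scores toward nouns that co-occur with the given verb."""
    return (1 - alpha) * noun_scores + alpha * P[verb_idx]

# With a uniform noun score, the prior for "cut" pulls toward "knife".
scores = reweight_noun_logits(np.array([0.2, 0.2, 0.2]), VERBS.index("cut"))
```

In practice such a prior would be mixed with learned logits (or used as a soft attention bias) rather than averaged with raw scores, but the co-occurrence statistic itself is this simple.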
📝 Abstract
Long-term action anticipation from egocentric video is critical for applications such as human-computer interaction and assistive technologies, where anticipating user intent enables proactive, context-aware AI assistance. However, existing approaches suffer from three key limitations: 1) underutilization of fine-grained visual cues from hand-object interactions, 2) neglect of semantic dependencies between verbs and nouns, and 3) lack of explicit cognitive reasoning, which limits generalization and long-horizon forecasting. To overcome these challenges, we propose INSIGHT, a unified two-stage framework for egocentric action anticipation. In the first stage, INSIGHT extracts semantically rich features from hand-object interaction regions and enhances action representations with a verb-noun co-occurrence matrix. In the second stage, it introduces a reinforcement learning-based module that simulates explicit cognitive reasoning through a structured process: visual perception (think) -> intention inference (reason) -> action anticipation (answer). Extensive experiments on the Ego4D, EPIC-Kitchens-55, and EGTEA Gaze+ benchmarks show that INSIGHT achieves state-of-the-art performance, demonstrating its effectiveness and strong generalization capability.
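The think -> reason -> answer process described above resembles the structured-output schemes used when RL fine-tuning reasoning models: the model emits tagged stages, and a format reward checks that all stages are present before any task reward is computed. The sketch below is a hypothetical illustration of that idea; the tag names, answer format, and reward values are assumptions, not the paper's specification.

```python
import re
from dataclasses import dataclass

@dataclass
class ReasoningTrace:
    think: str           # stage 1: visual perception summary
    reason: str          # stage 2: inferred user intention
    answer: list         # stage 3: anticipated future actions

def parse_trace(text: str) -> ReasoningTrace:
    """Parse a <think>/<reason>/<answer> response (tags are illustrative)."""
    def grab(tag: str) -> str:
        m = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.S)
        return m.group(1).strip() if m else ""
    # Assume the answer stage is a comma-separated list of verb-noun actions.
    answer = [a.strip() for a in grab("answer").split(",") if a.strip()]
    return ReasoningTrace(grab("think"), grab("reason"), answer)

def format_reward(trace: ReasoningTrace) -> float:
    """Minimal format reward: 1.0 only if all three stages are present."""
    return 1.0 if trace.think and trace.reason and trace.answer else 0.0

out = parse_trace(
    "<think>hands reach toward cup</think>"
    "<reason>the user intends to drink</reason>"
    "<answer>pour water, take cup</answer>"
)
```

A real RL setup would combine such a format reward with an accuracy reward on the predicted action sequence; this fragment only shows the structured parsing side.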