🤖 AI Summary
To address the challenges of modeling future action sequences and weak cross-modal temporal reasoning in the Ego4D Long-Term Action Anticipation (LTA) task, this paper proposes a three-stage framework: (1) a high-performance vision encoder for frame-level feature extraction; (2) a Transformer module that incorporates verb–noun co-occurrence matrix embeddings for fine-grained joint recognition; and (3) a fine-tuned large language model (LLM), guided by semantic prompt engineering, that maps predicted verb–noun pairs into natural-language action sequences. The work is notable for explicitly injecting co-occurrence statistics—prior linguistic knowledge—into the visual recognition module, and for establishing an end-to-end cross-modal long-horizon generation pipeline. The method achieves first place in the CVPR 2025 Ego4D LTA Challenge, setting a new state-of-the-art, and the code is publicly available.
📝 Abstract
In this report, we present a novel three-stage framework developed for the Ego4D Long-Term Action Anticipation (LTA) task. Inspired by recent advances in foundation models, our method proceeds in three stages: feature extraction, action recognition, and long-term action anticipation. First, visual features are extracted using a high-performance visual encoder. These features are then fed into a Transformer to predict verbs and nouns, with a verb–noun co-occurrence matrix incorporated to improve recognition accuracy. Finally, the predicted verb–noun pairs are formatted as textual prompts and passed to a fine-tuned large language model (LLM) to anticipate future action sequences. Our framework achieves first place in the CVPR 2025 Ego4D LTA Challenge, establishing a new state-of-the-art in long-term action anticipation. Our code will be released at https://github.com/CorrineQiu/Ego4D-LTA-Challenge-2025.
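To make the co-occurrence idea concrete, here is a minimal, hypothetical sketch of how a verb–noun co-occurrence prior could re-score independently predicted verb and noun logits into a joint prediction. The function name, the additive log-prior fusion rule, and the `alpha` weight are all illustrative assumptions, not the paper's actual formulation; the report embeds the co-occurrence matrix inside the Transformer rather than applying it as a post-hoc re-scorer.

```python
import numpy as np

def rescore_with_cooccurrence(verb_logits, noun_logits, cooc, alpha=1.0):
    """Illustrative sketch (not the paper's exact method): combine independent
    verb and noun logits with a log verb-noun co-occurrence prior, then pick
    the highest-scoring joint (verb, noun) pair."""
    # Joint score: score[v, n] = verb_logit[v] + noun_logit[n] + alpha * log P(n | v)
    joint = verb_logits[:, None] + noun_logits[None, :] + alpha * np.log(cooc + 1e-9)
    v, n = np.unravel_index(np.argmax(joint), joint.shape)
    return int(v), int(n)

# Toy example: 3 verbs x 4 nouns.
verb_logits = np.array([2.0, 1.5, 0.1])
noun_logits = np.array([0.5, 2.2, 0.4, 0.3])
# Hypothetical co-occurrence probabilities P(noun | verb); rows sum to 1.
cooc = np.array([
    [0.70, 0.05, 0.15, 0.10],  # verb 0 almost never pairs with noun 1
    [0.05, 0.80, 0.10, 0.05],  # verb 1 strongly pairs with noun 1
    [0.25, 0.25, 0.25, 0.25],  # verb 2 is uninformative
])
# Independently, verb 0 and noun 1 score highest, but that pair is implausible;
# the co-occurrence prior shifts the joint prediction to (verb 1, noun 1).
print(rescore_with_cooccurrence(verb_logits, noun_logits, cooc))  # → (1, 1)
```

The point of the toy example is that an independently plausible verb–noun pair can be linguistically implausible as a combination, which is exactly the signal co-occurrence statistics recover.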