🤖 AI Summary
This work addresses the challenge of short-term object interaction anticipation in first-person videos, which entails predicting the location, category, action, and contact time of the next interacted object. To this end, the authors propose the STAformer family of architectures, which integrates an environment affordance memory with hand-object trajectory hotspot prediction. The approach features frame-guided temporal pooling, dual image-video attention, and multiscale adaptive feature fusion, culminating in an enhanced attention architecture termed STAformer++. Evaluated on the Ego4D benchmark and a newly curated set of EPIC-Kitchens STA labels, the model improves Overall Top-5 mAP by up to +23% and +31%, respectively, over existing methods.
📝 Abstract
Short-Term object-interaction Anticipation (STA) consists of detecting the location of the next-active objects, the noun and verb categories of the interaction, as well as the time to contact, from the observation of egocentric video. This ability is fundamental for wearable assistants to understand users' goals and provide timely assistance, or to enable human-robot interaction. In this work, we present a method to improve the performance of STA predictions. Our contributions are two-fold: 1) We propose STAformer and STAformer++, two novel attention-based architectures integrating frame-guided temporal pooling, dual image-video attention, and multiscale feature fusion to support STA predictions from an image-video input pair; 2) We introduce two novel modules to ground STA predictions in human behavior by modeling affordances. First, we integrate an environment affordance model which acts as a persistent memory of interactions that can take place in a given physical scene. We explore how to integrate environment affordances both via simple late fusion and with an approach which adaptively learns how to best fuse affordances with end-to-end predictions. Second, we predict interaction hotspots from the observation of hand and object trajectories, increasing confidence in STA predictions localized around a hotspot. Our results show significant improvements in Overall Top-5 mAP, with gains of up to $+23\%$ on Ego4D and $+31\%$ on a novel set of curated EPIC-Kitchens STA labels. We release the code, annotations, and pre-extracted affordances on Ego4D and EPIC-Kitchens to encourage future research in this area.
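The simple late-fusion variant mentioned in the abstract can be illustrated with a minimal sketch. Assuming (hypothetically) that the STA model emits per-noun-class scores and the environment affordance memory supplies a per-class prior for the current scene, one plausible fusion reweights the model's scores by the prior and renormalizes; the function name, the fusion rule, and the mixing weight `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fuse_affordances(sta_probs: np.ndarray,
                     affordance_prior: np.ndarray,
                     alpha: float = 0.5) -> np.ndarray:
    """Hypothetical late fusion of STA class scores with an
    environment affordance prior.

    sta_probs:        per-class probabilities from the STA model.
    affordance_prior: per-class likelihood of interaction in this scene.
    alpha:            weight given to the affordance-modulated term.
    """
    # Blend the raw scores with scores reweighted by the scene prior,
    # then renormalize so the result is again a distribution.
    fused = (1.0 - alpha) * sta_probs + alpha * sta_probs * affordance_prior
    return fused / fused.sum()

# Example: a class that is unlikely a priori but strongly afforded by
# the scene can overtake the model's top prediction after fusion.
probs = np.array([0.5, 0.3, 0.2])   # model favors class 0
prior = np.array([0.1, 0.8, 0.1])   # scene affords class 1
fused = fuse_affordances(probs, prior, alpha=0.7)
```

The adaptive variant described in the abstract would instead learn the fusion (e.g., predicting `alpha` or a full mixing function end-to-end) rather than fixing it by hand.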