🤖 AI Summary
This paper introduces the novel task of "temporal interaction localization" (TIL): precisely timestamping hand-object contact and separation events in first-person videos. To overcome the limitations of existing methods, namely their reliance on object masks and action-level annotations, the authors propose EgoLoc, the first zero-shot framework for this task. EgoLoc generates visual prompts via dynamic hand-region sampling, leverages vision-language models for attribute recognition and temporal localization, and incorporates a self-feedback closed-loop optimization mechanism. Critically, it requires no interaction category labels or pixel-level supervision. Evaluated on both public and newly constructed benchmarks, EgoLoc achieves state-of-the-art temporal accuracy and demonstrates superior cross-scene generalization. Its effectiveness is further validated in downstream applications, including immersive mixed-reality interaction and autonomous robotic manipulation.
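To make the "dynamic hand-region sampling" idea concrete, here is a minimal sketch of how candidate frames could be selected from hand dynamics, assuming per-frame 2D hand centers are already available from an off-the-shelf hand detector. The function name, input format, and heuristic below are hypothetical illustrations under those assumptions, not the authors' released code.

```python
import numpy as np

def sample_candidate_frames(hand_centers: np.ndarray, num_candidates: int = 8) -> list:
    """Pick frame indices where hand motion slows down, as likely contact/separation moments.

    hand_centers: (T, 2) array of per-frame 2D hand positions from an
    off-the-shelf hand detector (hypothetical input format).
    """
    # Per-frame hand speed: magnitude of displacement between consecutive frames.
    speeds = np.linalg.norm(np.diff(hand_centers, axis=0), axis=1)
    speeds = np.concatenate([speeds[:1], speeds])  # pad to length T

    # Contact/separation events tend to coincide with dips in hand speed
    # (the hand decelerates to grasp or release), so rank frames by low speed.
    ranked = np.argsort(speeds)

    # Keep a temporally ordered, spread-out subset to use as visual prompts.
    picked = np.sort(ranked[: num_candidates * 2])[::2][:num_candidates]
    return picked.tolist()
```

The selected frames would then be rendered as visual prompts and passed to a vision-language model, as sketched after the abstract below.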
📝 Abstract
Analyzing hand-object interaction in egocentric vision facilitates VR/AR applications and human-robot policy transfer. Existing research has mostly focused on modeling the behavior paradigm of interactive actions (i.e., "how to interact"). However, the more challenging and fine-grained problem of capturing the critical moments of contact and separation between the hand and the target object (i.e., "when to interact") remains underexplored, despite being crucial for immersive interactive experiences in mixed reality and for robotic motion planning. Therefore, we formulate this problem as temporal interaction localization (TIL). Some recent works extract semantic masks as TIL references, but suffer from inaccurate object grounding, especially in cluttered scenes. Although current temporal action localization (TAL) methods perform well in detecting verb-noun action segments, they rely on category annotations during training and exhibit limited precision in localizing hand-object contact/separation moments. To address these issues, we propose a novel zero-shot approach dubbed EgoLoc to localize hand-object contact and separation timestamps in egocentric videos. EgoLoc introduces hand-dynamics-guided sampling to generate high-quality visual prompts. It exploits a vision-language model to identify contact/separation attributes, localize specific timestamps, and provide closed-loop feedback for further refinement. EgoLoc eliminates the need for object masks and verb-noun taxonomies, leading to a generalizable zero-shot implementation. Comprehensive experiments on the public dataset and our novel benchmarks demonstrate that EgoLoc achieves plausible TIL for egocentric videos. It is also shown to effectively facilitate multiple downstream applications in egocentric vision and robotic manipulation tasks. Code and relevant data will be released at https://github.com/IRMVLab/EgoLoc.
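As a rough illustration of the second half of the pipeline, the sketch below shows how candidate frames could be queried and refined with a vision-language model in a closed loop. Here `query_vlm` is a hypothetical wrapper around any VLM API that returns the position of the chosen frame, and the refinement heuristic is one plausible reading of the self-feedback mechanism, not the authors' implementation.

```python
from typing import Callable, List, Sequence

def localize_event(
    frames: Sequence,                       # decoded video frames
    candidates: List[int],                  # indices from hand-dynamics-guided sampling
    query_vlm: Callable[[List, str], int],  # hypothetical: (frames, prompt) -> chosen position
    event: str = "contact",                 # "contact" or "separation"
    max_rounds: int = 3,
) -> int:
    """Iteratively narrow down the timestamp of a hand-object contact/separation event."""
    best = candidates[len(candidates) // 2]
    for _ in range(max_rounds):
        # Attribute recognition + localization: ask the VLM which of the
        # temporally ordered candidate frames shows the event.
        prompt = (
            f"The frames are ordered in time. In which frame does the hand first "
            f"make {event} with the object? Answer with the frame's position (0-based)."
        )
        pos = query_vlm([frames[i] for i in candidates], prompt)
        best = candidates[max(0, min(pos, len(candidates) - 1))]

        # Self-feedback: shrink the temporal window around the chosen frame and
        # densify the candidates for the next round.
        k = candidates.index(best)
        lo = candidates[max(0, k - 1)]
        hi = candidates[min(len(candidates) - 1, k + 1)]
        if hi - lo < len(candidates):
            break  # window is already frame-dense; stop refining
        step = max(1, (hi - lo) // max(1, len(candidates) - 1))
        candidates = list(range(lo, hi + 1, step))
    return best
```

In this sketch, each round tightens the temporal window around the VLM's answer, so localization precision improves from coarse candidate spacing down to individual frames.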