Uni-Hand: Universal Hand Motion Forecasting in Egocentric Views

📅 2025-11-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper introduces the novel task of “temporal interaction localization” for precise timestamping of hand-object contact and separation events in first-person videos. To overcome limitations of existing methods—namely their reliance on object masks and action-level annotations—we propose EgoLoc, the first zero-shot framework for this task. EgoLoc generates visual prompts via dynamic hand-region sampling, leverages vision-language models for attribute recognition and temporal localization, and incorporates a self-feedback closed-loop optimization mechanism. Critically, it requires no interaction category labels or pixel-level supervision. Evaluated on both public and newly constructed benchmarks, EgoLoc achieves state-of-the-art temporal accuracy and demonstrates superior cross-scene generalization. Its effectiveness is validated in downstream applications including mixed-reality immersive interaction and robot autonomous manipulation.

Technology Category

Application Category

📝 Abstract
Analyzing hand-object interaction in egocentric vision facilitates VR/AR applications and human-robot policy transfer. Existing research has mostly focused on modeling the behavior paradigm of interactive actions (i.e., "how to interact"). However, the more challenging and fine-grained problem of capturing the critical moments of contact and separation between the hand and the target object (i.e., "when to interact") is still underexplored, which is crucial for immersive interactive experiences in mixed reality and robotic motion planning. Therefore, we formulate this problem as temporal interaction localization (TIL). Some recent works extract semantic masks as TIL references, but suffer from inaccurate object grounding and cluttered scenarios. Although current temporal action localization (TAL) methods perform well in detecting verb-noun action segments, they rely on category annotations during training and exhibit limited precision in localizing hand-object contact/separation moments. To address these issues, we propose a novel zero-shot approach dubbed EgoLoc to localize hand-object contact and separation timestamps in egocentric videos. EgoLoc introduces hand-dynamics-guided sampling to generate high-quality visual prompts. It exploits the vision-language model to identify contact/separation attributes, localize specific timestamps, and provide closed-loop feedback for further refinement. EgoLoc eliminates the need for object masks and verb-noun taxonomies, leading to generalizable zero-shot implementation. Comprehensive experiments on the public dataset and our novel benchmarks demonstrate that EgoLoc achieves plausible TIL for egocentric videos. It is also validated to effectively facilitate multiple downstream applications in egocentric vision and robotic manipulation tasks. Code and relevant data will be released at https://github.com/IRMVLab/EgoLoc.
Problem

Research questions and friction points this paper is trying to address.

Localizing hand-object contact and separation timestamps in egocentric videos
Addressing inaccurate object grounding in temporal interaction localization
Eliminating dependency on category annotations for hand motion forecasting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Zero-shot approach localizes hand-object contact timestamps
Uses hand-dynamics-guided sampling for visual prompts
Leverages vision-language model without object masks
🔎 Similar Papers
No similar papers found.
J
Junyi Ma
IRMV Lab, the Department of Automation, Shanghai Jiao Tong University
Wentao Bao
Wentao Bao
Research Scientist at Meta
Computer VisionMachine Learning
J
Jingyi Xu
Department of Electronic Engineering, Shanghai Jiao Tong University
G
Guanzhong Sun
School of Information and Control Engineering, China University of Mining and Technology
Y
Yu Zheng
IRMV Lab, the Department of Automation, Shanghai Jiao Tong University
E
Erhang Zhang
IRMV Lab, the Department of Automation, Shanghai Jiao Tong University
Xieyuanli Chen
Xieyuanli Chen
Associate Professor, NUDT, China
RoboticsSLAMLocalizationLiDAR PerceptionRobot Learning
H
Hesheng Wang
IRMV Lab, the Department of Automation, Shanghai Jiao Tong University