DIV-FF: Dynamic Image-Video Feature Fields For Environment Understanding in Egocentric Videos

📅 2025-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of semantic-geometric disentanglement and weak temporal consistency in dynamic interaction scenes within first-person videos, this paper proposes a decoupled dynamic feature field modeling framework. Methodologically, it explicitly decomposes the scene into three components—persistent, dynamic, and actor-based—and jointly leverages image-level and video-level language features to construct temporally coherent implicit dynamic feature fields. It further introduces a multi-granularity spatiotemporal alignment mechanism and an image-video joint language-guided feature fusion strategy. Evaluated on multiple egocentric understanding benchmarks, the method achieves significant improvements over state-of-the-art approaches, particularly in dynamic object segmentation and affordance recognition. These advances enable robust, long-horizon environmental modeling with enhanced temporal stability and semantic fidelity.
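The decomposition described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the three fields below are random linear stand-ins for the learned persistent, dynamic, and actor-based components, and `language_similarity` is a generic cosine-similarity query against a (hypothetical) text embedding, standing in for the language-guided feature fusion.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # feature dimension (illustrative choice)

# Random projections as stand-ins for the three learned feature fields.
W_persistent = rng.standard_normal((3, D))  # input: 3D position x
W_dynamic = rng.standard_normal((4, D))     # input: (x, t)
W_actor = rng.standard_normal((4, D))       # input: (x, t)

def persistent_field(x):
    # Time-independent component of the scene.
    return np.tanh(x @ W_persistent)

def time_field(x, t, W):
    # Time-dependent component: concatenate time t to each position.
    xt = np.concatenate([x, np.full((x.shape[0], 1), t)], axis=1)
    return np.tanh(xt @ W)

def scene_feature(x, t):
    # Decomposed feature field: persistent + dynamic + actor-based parts.
    return (persistent_field(x)
            + time_field(x, t, W_dynamic)
            + time_field(x, t, W_actor))

def language_similarity(feat, text_emb):
    # Cosine similarity between rendered features and a text embedding,
    # as a stand-in for language-guided open-vocabulary queries.
    feat_n = feat / np.linalg.norm(feat, axis=1, keepdims=True)
    text_n = text_emb / np.linalg.norm(text_emb)
    return feat_n @ text_n

x = rng.standard_normal((5, 3))      # 5 query points in space
f = scene_feature(x, t=0.5)          # features at time t
sim = language_similarity(f, rng.standard_normal(D))
print(f.shape, sim.shape)            # (5, 16) (5,)
```

Per-point similarity scores like `sim` are what would be thresholded or argmaxed across text queries to produce the segmentation and affordance maps the summary refers to.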

📝 Abstract
Environment understanding in egocentric videos is an important step for applications like robotics, augmented reality, and assistive technologies. These videos are characterized by dynamic interactions and a strong dependence on the wearer's engagement with the environment. Traditional approaches often focus on isolated clips or fail to integrate rich semantic and geometric information, limiting scene comprehension. We introduce Dynamic Image-Video Feature Fields (DIV-FF), a framework that decomposes the egocentric scene into persistent, dynamic, and actor-based components while integrating both image and video language features. Our model enables detailed segmentation, captures affordances, understands the surroundings, and maintains consistent understanding over time. DIV-FF outperforms state-of-the-art methods, particularly in dynamically evolving scenarios, demonstrating its potential to advance long-term, spatio-temporal scene understanding.
Problem

Research questions and friction points this paper is trying to address.

Enhancing environment understanding in egocentric videos.
Integrating rich semantic and geometric information for scene comprehension.
Maintaining long-term, spatio-temporal consistency in dynamic scenarios.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes egocentric scenes into persistent, dynamic, and actor-based components
Integrates image-level and video-level language features
Enables detailed segmentation and temporally consistent understanding