LOME: Learning Human-Object Manipulation with Action-Conditioned Egocentric World Model

📅 2026-03-28
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work addresses the limited generalization and practical deployability of traditional physics-based animation for fine-grained human–object interaction by proposing an egocentric world model that generates controllable interaction videos with physically plausible outcomes, end to end. The model conditions on an input image, a text prompt, and fine-grained human motion, including body pose and hand gestures, jointly encoding precise action guidance and environmental context to achieve high-fidelity simulation without explicit 3D or 4D reconstruction. Experiments show that the approach significantly outperforms existing image- and video-conditioned action generation methods, as well as Image/Text-to-Video (I/T2V) models, in temporal consistency and action controllability, while also generalizing to unseen scenes and producing realistic physical effects.
📝 Abstract
Learning human-object manipulation presents significant challenges due to the fine-grained and contact-rich nature of the motions involved. Traditional physics-based animation requires extensive modeling and manual setup, and, more importantly, it neither generalizes well across diverse object morphologies nor scales effectively to real-world environments. To address these limitations, we introduce LOME, an egocentric world model that generates realistic human-object interactions as videos conditioned on an input image, a text prompt, and per-frame human actions, including both body poses and hand gestures. LOME injects strong, precise action guidance into object manipulation by jointly estimating spatial human actions and the environmental context during training. After finetuning a pretrained video generative model on videos of diverse egocentric human-object interactions, LOME demonstrates not only high action-following accuracy and strong generalization to unseen scenarios, but also realistic physical consequences of hand-object interactions, e.g., liquid flowing from a bottle into a mug after a "pouring" action is executed. Extensive experiments demonstrate that our video-based framework significantly outperforms state-of-the-art image-based and video-based action-conditioned methods, as well as Image/Text-to-Video (I/T2V) generative models, in both temporal consistency and motion control. LOME paves the way for photorealistic AR/VR experiences and scalable robotic training without being limited to simulated environments or relying on explicit 3D/4D modeling.
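To make the conditioning scheme concrete, the sketch below shows one plausible way per-frame body poses and hand gestures could be encoded into tokens that a video diffusion backbone cross-attends to alongside image and text embeddings. Every module name, dimension, and the fusion scheme here is an assumption for illustration; the paper does not specify its architecture at this level of detail.

```python
# Hypothetical sketch of action-conditioned video generation in the
# spirit of LOME. All names, dimensions, and the fusion scheme are
# illustrative assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn


class ActionEncoder(nn.Module):
    """Encode per-frame body poses and hand gestures into one
    conditioning token per video frame."""

    def __init__(self, pose_dim: int = 63, hand_dim: int = 90, d_model: int = 512):
        super().__init__()
        self.proj = nn.Linear(pose_dim + hand_dim, d_model)
        self.temporal = nn.GRU(d_model, d_model, batch_first=True)

    def forward(self, body_pose: torch.Tensor, hand_pose: torch.Tensor) -> torch.Tensor:
        # body_pose: (B, T, pose_dim); hand_pose: (B, T, hand_dim)
        x = self.proj(torch.cat([body_pose, hand_pose], dim=-1))
        tokens, _ = self.temporal(x)  # (B, T, d_model), temporally smoothed
        return tokens


class ConditionedDenoiser(nn.Module):
    """Toy stand-in for a pretrained video diffusion backbone that
    cross-attends from noisy video latents to the conditioning tokens."""

    def __init__(self, d_model: int = 512):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.ff = nn.Linear(d_model, d_model)

    def forward(self, noisy_latents: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # noisy_latents: (B, T*HW, d_model) flattened spatiotemporal latents
        attended, _ = self.attn(noisy_latents, cond, cond)
        return self.ff(attended + noisy_latents)  # predicted noise


if __name__ == "__main__":
    B, T = 1, 16
    enc = ActionEncoder()
    denoiser = ConditionedDenoiser()

    body = torch.randn(B, T, 63)      # e.g. SMPL-style body pose per frame
    hands = torch.randn(B, T, 90)     # e.g. MANO-style hand pose per frame
    img_txt = torch.randn(B, 2, 512)  # placeholder image + text embeddings

    cond = torch.cat([img_txt, enc(body, hands)], dim=1)  # (B, 2 + T, 512)
    latents = torch.randn(B, T * 64, 512)                 # (B, T*HW, 512)
    noise_pred = denoiser(latents, cond)
    print(noise_pred.shape)  # torch.Size([1, 1024, 512])
```

In this reading, finetuning would update the denoiser (and the new action encoder) on egocentric interaction videos so that generated frames follow the per-frame action stream while the image and text tokens supply scene context.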
Problem

Research questions and friction points this paper is trying to address.

human-object manipulation
egocentric video generation
action-conditioned generation
physical realism
motion control
Innovation

Methods, ideas, or system contributions that make the work stand out.

egocentric world model
action-conditioned video generation
human-object interaction
physical realism
motion control