HVG-3D: Bridging Real and Simulation Domains for 3D-Conditional Hand-Object Interaction Video Synthesis

📅 2026-03-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods for hand-object interaction video synthesis often rely on 2D control signals with limited expressive power, making precise spatiotemporal control challenging. This work proposes HVG-3D, a unified framework that introduces an explicit 3D conditional control mechanism for the first time. By integrating diffusion models with a 3D ControlNet, the approach jointly encodes geometric and motion cues and employs a hybrid training pipeline that combines real-world and synthetic data. Evaluated on the TASTE-Rob dataset, the method significantly improves spatial fidelity, temporal coherence, and controllability of generated videos, effectively enabling synergistic use of real images and 3D simulation-based control signals.
📝 Abstract
Recent methods have made notable progress in the visual quality of hand-object interaction (HOI) video synthesis. However, most approaches rely on 2D control signals that lack spatial expressiveness and limit the utilization of synthetic 3D conditional data. To address these limitations, we propose HVG-3D, a unified framework for 3D-aware HOI video synthesis conditioned on explicit 3D representations. HVG-3D is built on two core components: (i) a 3D-aware diffusion architecture augmented with a 3D ControlNet that encodes geometric and motion cues from 3D inputs, enabling explicit 3D reasoning during video synthesis; and (ii) a hybrid pipeline for constructing input and condition signals, enabling flexible and precise control during both training and inference. At inference time, given a single real image and a 3D control signal from either simulation or real data, HVG-3D generates high-fidelity, temporally consistent videos with precise spatial and temporal control. Experiments on the TASTE-Rob dataset demonstrate that HVG-3D achieves state-of-the-art spatial fidelity, temporal coherence, and controllability, while enabling effective utilization of both real and simulated data.
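The ControlNet-style conditioning the abstract describes follows a well-known general pattern: a trainable copy of backbone encoder blocks processes the condition signal, and its features are injected residually into the frozen backbone through zero-initialized projections. The sketch below illustrates only that generic pattern with NumPy stand-ins; all names, shapes, and the toy block are assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder_block(x, w):
    # Toy stand-in for one diffusion-backbone encoder block.
    return np.tanh(x @ w)

# Hypothetical shapes: T video frames, D feature channels.
T, D = 8, 16
video_latents = rng.normal(size=(T, D))   # noisy video latents
cond_3d = rng.normal(size=(T, D))         # encoded 3D geometry/motion cues

# Frozen backbone weights, plus a trainable copy for the control branch.
w_backbone = rng.normal(size=(D, D)) * 0.1
w_control = w_backbone.copy()             # initialized from the backbone

# "Zero convolution": zero-initialized projection, so the control branch
# contributes nothing at the start of training and grows in gradually.
w_zero = np.zeros((D, D))

h_backbone = encoder_block(video_latents, w_backbone)
h_control = encoder_block(video_latents + cond_3d, w_control)

# Residual injection of control features into the backbone features.
h = h_backbone + h_control @ w_zero

# At initialization the zero projection leaves the backbone output unchanged.
assert np.allclose(h, h_backbone)
```

The zero-initialized projection is the key design choice: it lets the conditioning branch be attached to a pretrained backbone without disturbing its behavior before training begins.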
Problem

Research questions and friction points this paper is trying to address.

hand-object interaction
3D video synthesis
spatial expressiveness
3D control signals
simulation-to-real
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D-conditional synthesis
hand-object interaction
diffusion model
3D ControlNet
domain bridging
Mingjin Chen
Dept. of EEE, The Hong Kong Polytechnic University
Junhao Chen
Tsinghua University
Zhaoxin Fan
Beijing Advanced Innovation Center for Future Blockchain and Privacy Computing, School of Artificial Intelligence, Beihang University
Yujian Lee
Beijing Normal-Hong Kong Baptist University
Zichen Dang
Dept. of EEE, The Hong Kong Polytechnic University
Lili Wang
Professor, School of Computer Science and Engineering, Beihang University
Virtual reality, computer graphics, 3D visualization
Yawen Cui
University of Oulu
Few-Shot Learning, Continual Learning, Multimodal Learning
Lap-Pui Chau
The Hong Kong Polytechnic University
Visual Signal Processing
Yi Wang
The Hong Kong Polytechnic University
Biomaterials