Int3DNet: Scene-Motion Cross Attention Network for 3D Intention Prediction in Mixed Reality

📅 2026-03-08
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Accurate prediction of users' 3D intention regions in mixed reality is hindered by the lack of effective perceptual mechanisms, leading to interaction latency and a fragmented user experience. This work proposes the first cross-attention architecture that jointly leverages scene geometry (represented as point clouds) and sparse head-hand motion cues to enable end-to-end prediction of 3D intention regions without requiring explicit object recognition. By integrating a scene-motion cross-attention mechanism with temporal modeling, the method achieves robust performance in unseen environments. Evaluated on the MoGaze and CIRCLE datasets, it significantly outperforms existing baselines, maintaining stable accuracy at prediction horizons of up to 1500 ms and enabling efficient intention-region-based visual question answering applications.

πŸ“ Abstract
We propose Int3DNet, a scene-aware network that predicts 3D intention areas directly from scene geometry and head-hand motion cues, enabling robust human intention prediction without explicit object-level perception. In Mixed Reality (MR), intention prediction is critical: it enables the system to anticipate user actions and respond proactively, reducing interaction delays and ensuring seamless user experiences. Our method employs cross-attention fusion of sparse motion cues and scene point clouds, offering a novel approach that directly interprets the user's spatial intention within the scene. We evaluated Int3DNet on the MoGaze and CIRCLE datasets, public datasets of full-body human-scene interactions, showing consistent performance across time horizons of up to 1500 ms and outperforming the baselines, even in diverse and unseen scenes. Moreover, we demonstrate the usability of the proposed method through an efficient visual question answering (VQA) application driven by predicted intention areas. Int3DNet provides reliable 3D intention areas derived from head-hand motion and scene geometry, thus enabling seamless interaction between humans and MR systems through proactive processing of intention areas.
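The core mechanism the abstract describes, sparse motion cues attending over scene point-cloud features, can be illustrated with a minimal NumPy sketch of scaled dot-product cross-attention. All names, dimensions, and projection matrices here are illustrative assumptions, not the paper's actual architecture: motion tokens (e.g. per-frame head-hand features) act as queries, and per-point scene features act as keys and values.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scene_motion_cross_attention(motion_tokens, scene_points, Wq, Wk, Wv):
    """Sketch of cross-attention: motion cues query scene geometry.

    motion_tokens: (T, d_motion)  sparse head-hand motion features
    scene_points:  (N, d_scene)   per-point scene features
    Returns (T, d) motion tokens enriched with scene context.
    """
    Q = motion_tokens @ Wq            # (T, d)
    K = scene_points @ Wk             # (N, d)
    V = scene_points @ Wv             # (N, d)
    d = Q.shape[-1]
    attn = softmax(Q @ K.T / np.sqrt(d), axis=-1)  # (T, N) weights over points
    return attn @ V                   # weighted sum of scene features per token

# Toy shapes (hypothetical): 10 motion frames, 256 scene points.
rng = np.random.default_rng(0)
T, N, d_motion, d_scene, d = 10, 256, 6, 3, 32
motion = rng.standard_normal((T, d_motion))
scene = rng.standard_normal((N, d_scene))
Wq = rng.standard_normal((d_motion, d))
Wk = rng.standard_normal((d_scene, d))
Wv = rng.standard_normal((d_scene, d))

fused = scene_motion_cross_attention(motion, scene, Wq, Wk, Wv)
print(fused.shape)  # (10, 32)
```

In the full method, such fused features would feed a temporal model and a head that regresses the 3D intention region; this sketch covers only the fusion step itself.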
Problem

Research questions and friction points this paper is trying to address.

3D intention prediction
Mixed Reality
scene geometry
motion cues
human intention
Innovation

Methods, ideas, or system contributions that make the work stand out.

cross-attention fusion
3D intention prediction
scene-motion integration
mixed reality interaction
geometry-based intention modeling