🤖 AI Summary
This work addresses the challenge of understanding human intent and predicting future behavior in VR/AR systems. We propose a hierarchical, intent-aware dynamic graph convolutional network (GCN) framework that integrates cognition-driven intent modeling into dynamic GCNs. The method jointly learns high-level user motivations and fine-grained actions, such as gaze direction and object interaction, while incorporating historical human pose sequences and scene context to model the spatiotemporal evolution of human–environment interactions. Evaluated on real-world benchmark datasets and in a real-time, in-the-loop VR environment, our approach achieves significant improvements over state-of-the-art methods in prediction accuracy, temporal consistency, and cross-scenario generalization. It establishes a paradigm for building proactive, adaptive VR/AR systems that anticipate user behavior.
📝 Abstract
Virtual and augmented reality systems increasingly demand intelligent adaptation to user behavior for enhanced interaction experiences. Achieving this requires accurately understanding human intentions and predicting future situated behaviors, such as gaze direction and object interactions, which is vital for creating responsive VR/AR environments and applications like personalized assistants. However, accurate behavioral prediction demands modeling the underlying cognitive processes that drive human–environment interactions. In this work, we introduce a hierarchical, intention-aware framework that models human intentions and predicts detailed situated behaviors by leveraging cognitive mechanisms. Given historical human dynamics and observations of scene context, our framework first identifies potential interaction targets and then forecasts fine-grained future behaviors. We propose a dynamic Graph Convolutional Network (GCN) to effectively capture human–environment relationships. Extensive experiments on challenging real-world benchmarks and a live VR environment demonstrate the effectiveness of our approach, which achieves superior performance across all metrics and enables practical applications for proactive VR systems that anticipate user behaviors and adapt virtual environments accordingly.
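To make the "dynamic GCN" idea above concrete, here is a minimal, illustrative sketch in plain Python. The key point is that the graph's adjacency matrix is rebuilt at every timestep from the current positions of human joints and scene objects, and a standard graph-convolution step is then applied. The distance-threshold connectivity rule, function names, and normalization choice below are our own assumptions for illustration, not the paper's actual formulation.

```python
# Sketch of one "dynamic GCN" step: adjacency recomputed per frame from
# node positions, then H' = ReLU(A_hat @ H @ W). All design choices here
# (radius-based edges, row normalization) are hypothetical assumptions.
import math

def dynamic_adjacency(positions, radius=1.0):
    """Connect nodes whose Euclidean distance is below `radius`
    (self-loops included), then row-normalize the adjacency matrix."""
    n = len(positions)
    adj = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j or math.dist(positions[i], positions[j]) < radius:
                adj[i][j] = 1.0
    for i in range(n):  # each node averages over its neighborhood
        s = sum(adj[i])
        adj[i] = [a / s for a in adj[i]]
    return adj

def gcn_layer(features, adj, weight):
    """One graph convolution with plain-Python matrix products."""
    n, f_in = len(features), len(features[0])
    f_out = len(weight[0])
    # aggregate neighbor features: A_hat @ H
    agg = [[sum(adj[i][k] * features[k][c] for k in range(n))
            for c in range(f_in)] for i in range(n)]
    # linear transform + ReLU: agg @ W
    return [[max(0.0, sum(agg[i][c] * weight[c][o] for c in range(f_in)))
             for o in range(f_out)] for i in range(n)]
```

Over a pose sequence, `dynamic_adjacency` would be called once per frame, so edges between the user and nearby scene objects appear and disappear as the interaction evolves; this is what distinguishes a dynamic GCN from one with a fixed skeleton graph.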