🤖 AI Summary
To address the challenges of open-vocabulary recognition, heterogeneous category distributions, context-sensitive interactability, and spatial-semantic misalignment in detecting interactable GUI elements (IGEs) within stereoscopic 3D VR GUIs, this paper proposes Orienter, the first zero-shot, context-aware IGE detection framework for VR apps. Orienter mimics human behavior through a three-stage feedback loop: semantic context comprehension → IGE candidate proposal → interactability classification. It combines CLIP-driven zero-shot prompt learning, joint vision-language modeling, a 3D spatially aware proposal and classification pipeline, and an iterative, reflection-based verification mechanism inspired by human cognition. Evaluated on a VR GUI benchmark, Orienter achieves a 23.6% improvement in mean Average Precision (mAP) and 91.4% accuracy in interactability classification, significantly outperforming state-of-the-art methods while generalizing well to unseen categories and dynamic scenes.
📝 Abstract
In recent years, spatial computing Virtual Reality (VR) has emerged as a transformative technology, offering users immersive and interactive experiences across diverse virtual environments. Users interact with VR apps through interactable GUI elements (IGEs) on the stereoscopic three-dimensional (3D) graphical user interface (GUI). Accurate recognition of these IGEs is essential, as it underpins many software engineering tasks, including automated testing and effective GUI search. Recent IGE detection approaches for 2D mobile apps typically train a supervised object detection model on a large-scale, manually labeled GUI dataset, usually with a predefined set of clickable GUI element categories such as buttons and spinners. Such approaches are difficult to apply to IGE detection in VR apps due to a multitude of challenges, including the complexities posed by open-vocabulary and heterogeneous IGE categories, the intricacies of context-sensitive interactability, and the necessity of precise spatial perception and visual-semantic alignment for accurate detection results. It is therefore necessary to pursue IGE detection research tailored to VR apps. In this paper, we propose the first zero-shot cOntext-sensitive inteRactable GUI ElemeNT dEtection framework for virtual Reality apps, named Orienter. Imitating human behavior, Orienter first observes and understands the semantic context of each VR app scene before performing detection. The detection process is iterated within a feedback-directed validation and reflection loop. Specifically, Orienter comprises three components: (1) semantic context comprehension, (2) reflection-directed IGE candidate detection, and (3) context-sensitive interactability classification. Extensive experiments demonstrate that Orienter is more effective than state-of-the-art GUI element detection approaches.
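The three-component, feedback-directed loop described in the abstract can be illustrated with a minimal sketch. This is a hypothetical Python outline, not the paper's implementation: all function names, data shapes, and thresholds are illustrative stand-ins (e.g., `classify_interactability` stands in for a CLIP-style zero-shot prompt match, and the scene is a plain dictionary rather than a rendered 3D GUI).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    label: str    # open-vocabulary category name
    box: tuple    # (x, y, z, w, h, d) region in the 3D scene (illustrative)
    score: float  # detector confidence

def comprehend_context(scene) -> str:
    """Component 1 (hypothetical): summarize the scene's semantic context,
    standing in for a vision-language model's description."""
    return scene["description"]

def propose_candidates(scene, context: str) -> List[Candidate]:
    """Component 2 (hypothetical): propose IGE candidates conditioned on
    the comprehended context."""
    return [Candidate(e["label"], e["box"], e["score"]) for e in scene["elements"]]

def classify_interactability(cand: Candidate, context: str) -> bool:
    """Component 3 (hypothetical): context-sensitive interactability check,
    standing in for a CLIP-style zero-shot prompt comparison."""
    return cand.score > 0.5 and cand.label in context

def detect_iges(scene, max_rounds: int = 3) -> List[Candidate]:
    """Feedback-directed loop: detect, validate, and reflect until a
    validated candidate set is found or the round budget is exhausted."""
    context = comprehend_context(scene)
    accepted: List[Candidate] = []
    for _ in range(max_rounds):
        candidates = propose_candidates(scene, context)
        accepted = [c for c in candidates if classify_interactability(c, context)]
        if accepted:  # reflection: stop once validated IGEs are found
            break
        context += " (re-examined)"  # otherwise refine the context and retry
    return accepted

# Toy scene: only the button is both confident and supported by the context.
scene = {
    "description": "settings menu with a clickable button and a decorative backdrop",
    "elements": [
        {"label": "button",   "box": (0, 0, 0, 1, 1, 1), "score": 0.9},
        {"label": "ornament", "box": (1, 0, 0, 2, 1, 1), "score": 0.8},
    ],
}
print([c.label for c in detect_iges(scene)])  # → ['button']
```

The loop structure mirrors the abstract's "validation and reflection" idea: when no candidate survives interactability classification, the context is revised and detection is re-run, rather than returning an unvalidated result.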