🤖 AI Summary
Existing extended reality (XR) systems often rely on cloud-based AI or gaze-based interaction, which introduce privacy risks, high latency, and ambiguity in target selection. This work proposes an on-device, click-based interaction framework in which users precisely select real-world objects with a controller, triggering local multimodal reasoning by an on-device vision-language model that answers natural-language queries through text and speech. By integrating controller-based pointing with on-device vision-language models, the approach improves interaction precision, privacy preservation, and system transparency. The system is implemented with ONNX and integrated into the Magic Leap SDK (C API); user studies show that it maintains acceptable latency while achieving high usability, trust, and user satisfaction, underscoring its potential for trustworthy XR interaction.
📝 Abstract
We present ClickAIXR, a novel on-device framework for multimodal vision-language interaction with objects in extended reality (XR). Unlike prior systems that rely on cloud-based AI (e.g., ChatGPT) or gaze-based selection (e.g., GazePointAR), ClickAIXR integrates an on-device vision-language model (VLM) with a controller-based object selection paradigm, enabling users to precisely click on real-world objects in XR. Once selected, the object image is processed locally by the VLM to answer natural language questions through both text and speech. This object-centered interaction reduces ambiguity inherent in gaze- or voice-only interfaces and improves transparency by performing all inference on-device, addressing concerns around privacy and latency. We implemented ClickAIXR in the Magic Leap SDK (C API) with ONNX-based local VLM inference. We conducted a user study comparing ClickAIXR with Gemini 2.5 Flash and ChatGPT 5, evaluating usability, trust, and user satisfaction. Results show that latency is moderate and user experience is acceptable. Our findings demonstrate the potential of click-based object selection combined with on-device AI to advance trustworthy, privacy-preserving XR interactions. The source code and supplementary materials are available at: nanovis.org/ClickAIXR.html
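The interaction loop the abstract describes (controller click → crop the selected object → local VLM inference → text/speech answer) can be sketched in a few lines. This is an illustrative Python stub, not the actual Magic Leap C API implementation: the names `ClickEvent`, `crop_at_click`, and `local_vlm_answer` are hypothetical, and the VLM call is replaced by a placeholder standing in for an on-device ONNX session.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ClickEvent:
    """2D image coordinates of a controller click (hypothetical type)."""
    x: int
    y: int

def crop_at_click(frame: List[List[int]], click: ClickEvent,
                  half: int = 2) -> List[List[int]]:
    """Cut a square patch around the click so that only the selected
    object is passed to the local model (illustrative; a real system
    would crop the camera frame with an image library)."""
    rows = range(max(0, click.y - half), min(len(frame), click.y + half + 1))
    return [frame[r][max(0, click.x - half):click.x + half + 1] for r in rows]

def local_vlm_answer(patch: List[List[int]], question: str) -> str:
    """Stand-in for on-device VLM inference (an ONNX session in the paper);
    here it just reports the patch size to keep the sketch self-contained."""
    h, w = len(patch), len(patch[0]) if patch else 0
    return f"answer for a {h}x{w} patch to: {question}"

def handle_click(frame, click, question):
    patch = crop_at_click(frame, click)
    answer = local_vlm_answer(patch, question)
    # In the full system the answer is both displayed as text and spoken via TTS;
    # all inference stays on-device, so no image data leaves the headset.
    return answer

frame = [[0] * 10 for _ in range(10)]  # dummy 10x10 grayscale frame
print(handle_click(frame, ClickEvent(5, 5), "What is this object?"))
# → answer for a 5x5 patch to: What is this object?
```

The key design point this mirrors is that the cropped patch, not the full scene, is what the model reasons over, which is what reduces the target-selection ambiguity of gaze- or voice-only interfaces.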