ClickAIXR: On-Device Multimodal Vision-Language Interaction with Real-World Objects in Extended Reality

📅 2026-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing extended reality (XR) systems often rely on cloud-based AI or gaze-based interaction, suffering from privacy risks, high latency, and ambiguity in target selection. This work proposes an on-device, click-based interaction framework in which users precisely select real-world objects with a controller, triggering local multimodal reasoning by an on-device vision-language model that answers natural-language queries through text and speech. By pioneering the integration of controller-based pointing with on-device vision-language models, the approach improves interaction precision, privacy preservation, and system transparency. The system is implemented with ONNX-based inference and integrated into the Magic Leap SDK (C API); user studies show that it maintains acceptable latency while achieving high usability, trust, and user satisfaction, underscoring its potential for trustworthy XR interactions.
📝 Abstract
We present ClickAIXR, a novel on-device framework for multimodal vision-language interaction with objects in extended reality (XR). Unlike prior systems that rely on cloud-based AI (e.g., ChatGPT) or gaze-based selection (e.g., GazePointAR), ClickAIXR integrates an on-device vision-language model (VLM) with a controller-based object selection paradigm, enabling users to precisely click on real-world objects in XR. Once selected, the object image is processed locally by the VLM to answer natural language questions through both text and speech. This object-centered interaction reduces ambiguity inherent in gaze- or voice-only interfaces and improves transparency by performing all inference on-device, addressing concerns around privacy and latency. We implemented ClickAIXR in the Magic Leap SDK (C API) with ONNX-based local VLM inference. We conducted a user study comparing ClickAIXR with Gemini 2.5 Flash and ChatGPT 5, evaluating usability, trust, and user satisfaction. Results show that latency is moderate and user experience is acceptable. Our findings demonstrate the potential of click-based object selection combined with on-device AI to advance trustworthy, privacy-preserving XR interactions. The source code and supplementary materials are available at: nanovis.org/ClickAIXR.html
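The abstract describes a click-to-answer pipeline: a controller click selects a real-world object, the cropped object image is processed locally by a VLM, and the answer is returned as text and speech. As a rough illustration of that control flow only — not the paper's actual implementation; the names `ClickEvent`, `crop_object`, and `LocalVLM` are hypothetical, and the ONNX/Magic Leap C API specifics are omitted — the structure might be sketched as:

```python
from dataclasses import dataclass

@dataclass
class ClickEvent:
    """A controller click in image coordinates (hypothetical event type)."""
    x: int
    y: int

def crop_object(frame, click, radius=1):
    """Crop a square patch around the click; stands in for real object selection."""
    rows = range(max(0, click.y - radius), min(len(frame), click.y + radius + 1))
    return [frame[r][max(0, click.x - radius): click.x + radius + 1] for r in rows]

class LocalVLM:
    """Placeholder for an on-device vision-language model (e.g., ONNX inference)."""
    def answer(self, patch, question):
        # A real VLM would run multimodal inference here; this returns a stub answer.
        return f"[on-device answer about a {len(patch)}x{len(patch[0])} patch] {question}"

def handle_click(frame, click, question, vlm):
    patch = crop_object(frame, click)   # object-centered selection via controller click
    text = vlm.answer(patch, question)  # local inference, no cloud round-trip
    return text                         # would also be rendered as speech via TTS

# Usage: a 5x5 placeholder "frame", click near the center
frame = [[f"p{r}{c}" for c in range(5)] for r in range(5)]
reply = handle_click(frame, ClickEvent(x=2, y=2), "What is this object?", LocalVLM())
```

The key design point the sketch mirrors is that the image patch never leaves the device: selection, inference, and response generation all happen locally, which is the privacy and latency argument the paper makes.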
Problem

Research questions and friction points this paper is trying to address.

extended reality
vision-language interaction
on-device AI
object selection
privacy
Innovation

Methods, ideas, or system contributions that make the work stand out.

on-device AI
vision-language model
extended reality
object selection
privacy-preserving interaction
Dawar Khan
King Abdullah University of Science and Technology (KAUST), Saudi Arabia
Alexandre Kouyoumdjian
King Abdullah University of Science and Technology (KAUST), Saudi Arabia
Xinyu Liu
King Abdullah University of Science and Technology (KAUST), Saudi Arabia
Omar Mena
King Abdullah University of Science and Technology (KAUST), Saudi Arabia
Dominik Engel
King Abdullah University of Science and Technology (KAUST), Saudi Arabia
deep learning, rendering, computer graphics, computer vision
Ivan Viola
King Abdullah University of Science and Technology (KAUST), Saudi Arabia
computer graphics, visualization, illustrative visualization, molecular visualization