AI Summary
This study investigates how AI-driven visual interventions (categorized as Inform, Nudge, Recommend, and Instruct) affect user decision-making in XR environments, implemented on a Meta Quest Pro using 360° supermarket video. It specifically examines the interaction between intervention intensity and perceived user autonomy. A mixed-methods approach combines quantitative behavioral analysis with semi-structured interviews to assess shifts in preference, decision efficiency, and trust. The study proposes a novel XR-AI collaborative decision-making framework grounded in three principles: autonomy preservation, transparency enhancement, and contextual adaptability. Results indicate that moderate-intensity interventions, particularly Nudge and Recommend, best balance guidance efficacy with user agency. Users significantly prefer systems that transparently disclose AI intent and support counterfactual adjustments. These findings provide empirical evidence and actionable design guidelines for developing trustworthy, human-centered AI visualizations in XR.
Abstract
The integration of extended reality (XR) with artificial intelligence (AI) introduces a new paradigm for user interaction, enabling AI to perceive user intent, stimulate the senses, and influence decision-making. We explored the impact of four AI-driven visualisation techniques ('Inform', 'Nudge', 'Recommend', and 'Instruct') on user decision-making in XR using the Meta Quest Pro. To test these techniques, we used a pre-recorded 360-degree video of a supermarket, overlaying each technique through a virtual interface. We aimed to investigate how these visualisation techniques, which afford different levels of user autonomy, impact preferences and decision-making. An exploratory study with semi-structured interviews provided feedback and design recommendations. Our findings emphasise the importance of maintaining user autonomy, enhancing AI transparency to build trust, and considering context in visualisation design.