🤖 AI Summary
This study addresses input ambiguity in augmented reality (AR) caused by distant targets or cluttered scenes by proposing uncertainty-aware feedforward visualizations that disambiguate multiple candidate targets through distinct visual identities (e.g., color) or modulated visual salience (e.g., opacity). Based on a systematic review of 30 years of relevant literature, the work constructs a pointer space of 25 distinct pointer designs and introduces a context-aware pointer design framework tailored for AR. Two online user studies (n=60 and n=40) evaluated user preference, confidence, mental ease, target visibility, and identifiability across varying object distances and scene sparsities. From these results, the authors derive concrete recommendations for choosing pointers to match AR context, with the aim of supporting more confident and less ambiguous target selection.
📝 Abstract
Target disambiguation is crucial in resolving input ambiguity in augmented reality (AR), especially for queries over distant objects or cluttered scenes on the go. Yet, visual feedforward techniques that support this process remain underexplored. We present Uncertain Pointer, a systematic exploration of feedforward visualizations that annotate multiple candidate targets before user confirmation, either by adding distinct visual identities (e.g., colors) to support disambiguation or by modulating visual intensity (e.g., opacity) to convey system uncertainty. First, we construct a pointer space of 25 pointers by analyzing existing placement strategies and visual signifiers used in target visualizations across 30 years of relevant literature. We then evaluate them through two online experiments (n = 60 and n = 40), measuring user preference, confidence, mental ease, target visibility, and identifiability across varying object distances and scene sparsities. Finally, from the results, we derive design recommendations for choosing among Uncertain Pointers based on AR context and disambiguation technique.
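To make the two feedforward strategies concrete, here is a minimal sketch of how they might be implemented. This is not the paper's code; the `Candidate` structure, color palette, and function names are illustrative assumptions. Strategy 1 assigns each candidate a distinct identity color for disambiguation; strategy 2 maps the selector's confidence scores to pointer opacity so visual salience conveys system uncertainty.

```python
from dataclasses import dataclass

# Hypothetical sketch of the two feedforward strategies described above;
# names and structure are illustrative, not the paper's implementation.

@dataclass
class Candidate:
    label: str         # the AR object this pointer annotates
    confidence: float  # system's estimate that this is the intended target

# Fixed, visually distinct hues for the "visual identity" strategy.
IDENTITY_COLORS = ["#e41a1c", "#377eb8", "#4daf4a", "#984ea3", "#ff7f00"]

def identity_pointers(candidates):
    """Strategy 1: give each candidate a distinct color so the user
    can disambiguate by referring to a unique visual identity."""
    return [
        {"label": c.label,
         "color": IDENTITY_COLORS[i % len(IDENTITY_COLORS)],
         "opacity": 1.0}
        for i, c in enumerate(candidates)
    ]

def uncertainty_pointers(candidates, floor=0.15):
    """Strategy 2: modulate opacity by normalized confidence so salience
    conveys system uncertainty (a floor keeps weak candidates visible)."""
    total = sum(c.confidence for c in candidates) or 1.0
    return [
        {"label": c.label,
         "color": "#ffffff",
         "opacity": floor + (1.0 - floor) * (c.confidence / total)}
        for c in candidates
    ]

if __name__ == "__main__":
    cands = [Candidate("mug", 0.6), Candidate("bottle", 0.3), Candidate("cup", 0.1)]
    print(identity_pointers(cands))     # distinct colors, uniform salience
    print(uncertainty_pointers(cands))  # uniform color, graded salience
```

The key design contrast, on this reading, is that identity-based pointers keep all candidates equally salient but distinguishable, while uncertainty-based pointers communicate the system's ranking at the cost of de-emphasizing low-confidence targets.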