Investigating the Effect of Encumbrance on Gaze- and Touch-based Target Acquisition on Handheld Mobile Devices

📅 2025-12-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates, for the first time, how physical load (such as carrying objects or walking) affects multimodal target acquisition performance on handheld mobile devices, with a particular focus on the stability and usability of gaze input under such constraints. Method: a controlled experiment with 24 participants compared gaze input (with and without visual feedback), gaze-touch fusion, and single- and dual-hand touch across encumbered and unencumbered conditions, measuring efficiency, accuracy, and user preference. Contribution/Results: gaze input, especially with visual feedback, maintained consistent performance under load and significantly outperformed touch in efficiency and user preference when participants were encumbered, while touch achieved higher overall accuracy but was more susceptible to load-induced degradation. The findings show that situational physical constraints critically influence modality selection, establishing real-time load awareness as a novel, empirically grounded basis for dynamic input-modality switching. This work provides both theoretical foundations and practical design principles for robust, context-aware interaction in mobile and wearable computing environments.

📝 Abstract
The potential of using gaze as an input modality in the mobile context is growing. While users often encumber themselves by carrying objects and using mobile devices while walking, the impact of encumbrance on gaze input performance remains unexplored. To investigate this, we conducted a user study (N=24) to evaluate the effect of encumbrance on the performance of 1) Gaze using Dwell time (with/without visual feedback), 2) GazeTouch (with/without visual feedback), and 3) One- or two-hand touch input. While Touch generally performed better, Gaze, especially with feedback, showed a consistent performance regardless of whether participants were encumbered or unencumbered. Participants' preferences for input modalities varied with encumbrance: they preferred Gaze when encumbered, and touch when unencumbered. Our findings enhance understanding of the effect of encumbrance on gaze input and contribute towards selecting appropriate input modalities in future mobile user interfaces to account for situational impairments.
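To make the dwell-time condition concrete, the sketch below shows one common way dwell-based gaze selection with visual feedback is implemented: gaze must stay on a target for a fixed dwell period, and the accumulated dwell time drives a progress indicator. It is an illustrative sketch only; the class and parameter names (e.g., `DwellSelector`, the 800 ms threshold) are assumptions and are not taken from the paper.

```kotlin
// Illustrative sketch of dwell-time gaze selection with visual feedback.
// Names and the 800 ms threshold are assumptions, not values from the paper.

data class GazePoint(val x: Float, val y: Float, val timestampMs: Long)
data class Target(val cx: Float, val cy: Float, val radius: Float)

class DwellSelector(private val dwellThresholdMs: Long = 800L) {
    private var dwellStartMs: Long? = null

    /**
     * Feed one gaze sample. Returns dwell progress in [0, 1], which can drive
     * visual feedback (e.g., a filling ring), and calls onSelect once the
     * dwell threshold is reached while gaze stays on the target.
     */
    fun update(gaze: GazePoint, target: Target, onSelect: () -> Unit): Float {
        val dx = gaze.x - target.cx
        val dy = gaze.y - target.cy
        val insideTarget = dx * dx + dy * dy <= target.radius * target.radius

        if (!insideTarget) {          // gaze left the target: reset the dwell timer
            dwellStartMs = null
            return 0f
        }
        val start = dwellStartMs ?: gaze.timestampMs.also { dwellStartMs = it }
        val elapsed = gaze.timestampMs - start
        if (elapsed >= dwellThresholdMs) {
            dwellStartMs = null       // selection complete; reset for the next target
            onSelect()
            return 1f
        }
        return elapsed.toFloat() / dwellThresholdMs
    }
}
```

In the paper's "with visual feedback" condition, the returned progress value would be rendered on screen; in the "without feedback" condition it would simply be ignored.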
Problem

Research questions and friction points this paper is trying to address.

Investigates encumbrance impact on gaze and touch input performance
Compares gaze, gaze-touch, and touch methods on mobile devices
Explores user preference shifts with encumbrance for input selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gaze input with visual feedback maintains consistent performance under encumbrance
GazeTouch combines gaze and touch for adaptable mobile interaction
User preference shifts to gaze when encumbered, touch when unencumbered
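The finding that preference flips with encumbrance suggests a simple runtime policy for choosing an input modality. The sketch below is one illustrative interpretation of such encumbrance-aware switching, not a system described in the paper; how the encumbrance state is detected (e.g., from IMU or grip sensing) is assumed.

```kotlin
// Illustrative encumbrance-aware modality selection policy, based on the
// paper's finding that users prefer gaze when encumbered and touch otherwise.
// Detection of EncumbranceState (e.g., via IMU or grip sensing) is assumed.

enum class InputModality { GAZE_DWELL, GAZE_TOUCH, TOUCH }
enum class EncumbranceState { UNENCUMBERED, ENCUMBERED }

fun selectModality(state: EncumbranceState, gazeTrackerAvailable: Boolean): InputModality =
    when {
        // Without a gaze tracker, touch is the only option.
        !gazeTrackerAvailable -> InputModality.TOUCH
        // Under load, gaze (ideally with visual feedback) kept consistent
        // performance and was preferred, so favour it.
        state == EncumbranceState.ENCUMBERED -> InputModality.GAZE_DWELL
        // Unencumbered users performed better with and preferred touch.
        else -> InputModality.TOUCH
    }
```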