A Hands-free Spatial Selection and Interaction Technique using Gaze and Blink Input with Blink Prediction for Extended Reality

📅 2025-01-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address high false-trigger rates and difficulties with continuous interaction during gaze-based UI navigation in hand-constrained XR environments, this paper proposes a hands-free spatial interaction technique. The method employs gaze for target localization and intentional blinks as the primary selection trigger, complemented by head motion for continuous operations such as scrolling and dragging. To enhance robustness, the authors integrate an LSTM-based temporal model into a blink-intention classifier that distinguishes intentional blinks from spontaneous ones. Evaluation on realistic UI tasks shows that the proposed approach achieves selection speed comparable to Gaze+Pinch, reduces false-trigger rates by 62%, and significantly improves subjective user comfort and perceived usability.
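The summary above pairs an LSTM over a short temporal window of eye signals with a binary blink-intention classifier. As a hedged illustration only (this is not the authors' implementation; the single eye-aperture feature, window length, hidden size, and untrained random weights are all assumptions for the sketch), a minimal pure-Python LSTM forward pass scoring one blink window might look like:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyLSTM:
    """Minimal single-layer LSTM with a sigmoid output head, pure Python.
    Illustrative only: weights are random, not trained on real blink data."""

    def __init__(self, hidden=4, seed=0):
        rng = random.Random(seed)
        self.hidden = hidden
        # One scalar input feature per step (eye aperture); 4 gates per unit.
        self.w = [[rng.uniform(-0.5, 0.5) for _ in range(1 + hidden)]
                  for _ in range(4 * hidden)]
        self.b = [0.0] * (4 * hidden)
        self.head = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]

    def forward(self, window):
        h = [0.0] * self.hidden
        c = [0.0] * self.hidden
        for x in window:
            inp = [x] + h  # input feature concatenated with previous hidden state
            gates = [sum(wr[k] * inp[k] for k in range(len(inp))) + self.b[j]
                     for j, wr in enumerate(self.w)]
            for u in range(self.hidden):
                i = sigmoid(gates[u])                      # input gate
                f = sigmoid(gates[self.hidden + u])        # forget gate
                g = math.tanh(gates[2 * self.hidden + u])  # candidate cell
                o = sigmoid(gates[3 * self.hidden + u])    # output gate
                c[u] = f * c[u] + i * g
                h[u] = o * math.tanh(c[u])
        logit = sum(self.head[u] * h[u] for u in range(self.hidden))
        return sigmoid(logit)  # score in (0, 1); a trained model would threshold it

# Hypothetical eye-aperture window (1.0 = fully open, 0.0 = closed).
blink_window = [1.0, 0.9, 0.5, 0.1, 0.0, 0.0, 0.0, 0.1, 0.5, 0.9, 1.0]
score = TinyLSTM().forward(blink_window)
```

In a trained system, `score` would be compared against a decision threshold so that only high-confidence intentional blinks trigger a selection, which is the mechanism the paper credits for the reduced false-trigger rate.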

📝 Abstract
Gaze-based interaction techniques have attracted significant interest in the field of spatial interaction. Many of these methods require additional input modalities, such as hand gestures (e.g., gaze coupled with pinch). These gestures can be uncomfortable and difficult to perform in public or confined spaces, and they pose challenges for users who are unable to execute pinch gestures. To address these issues, we propose a novel, hands-free Gaze+Blink interaction technique that leverages the user's gaze and intentional eye blinks. Users perform selections by executing intentional blinks, and carry out continuous interactions, such as scrolling or drag-and-drop, through eye blinks coupled with head movements. So far, this concept has not been explored for hands-free spatial interaction techniques. We evaluated the performance and user experience (UX) of our Gaze+Blink method in two user studies, comparing it with Gaze+Pinch in a realistic user interface setup featuring common menu interaction tasks. Study 1 demonstrated that while Gaze+Blink achieved comparable selection speeds, it was prone to accidental selections resulting from unintentional blinks. In Study 2 we explored an enhanced technique employing a deep-learning algorithm to filter out unintentional blinks.
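The abstract describes continuous operations (scrolling, drag-and-drop) driven by head movement while a blink "holds" the interaction. A minimal sketch of one plausible mapping, assuming a gain/deadzone transfer function that is an illustrative choice and not taken from the paper:

```python
def scroll_velocity(head_pitch_deg, gain=20.0, deadzone_deg=2.0):
    """Map head pitch relative to the pose at blink onset to a scroll
    velocity (units/s). Gain and deadzone values are illustrative
    assumptions; a deadzone suppresses jitter from small head tremors."""
    if abs(head_pitch_deg) < deadzone_deg:
        return 0.0
    sign = 1.0 if head_pitch_deg > 0 else -1.0
    return sign * (abs(head_pitch_deg) - deadzone_deg) * gain

# Tilting the head 5 degrees past the onset pose scrolls at (5-2)*20 = 60 units/s.
assert scroll_velocity(5.0) == 60.0
assert scroll_velocity(-5.0) == -60.0
assert scroll_velocity(1.0) == 0.0  # inside the deadzone: no scroll
```

The same relative-pose idea extends to drag-and-drop, where head yaw and pitch displace the dragged object instead of driving a scroll rate.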
Problem

Research questions and friction points this paper is trying to address.

Eye-Gesture Interaction
Continuous Operation
Augmented Reality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gaze-and-Blink Interaction
Hands-Free Spatial Manipulation
Predictive Blink Intent Algorithm