🤖 AI Summary
To address a fundamental limitation of eyes-free text entry scenarios (such as SmartTVs and VR), where conventional dictionary-dependent methods cannot handle out-of-vocabulary (OOV) words, this paper proposes DuSK: a dictionary-free, dual-handed, stroke-based, single-keyboard text entry technique. Its core contribution is an unambiguous stroke-encoding scheme that maps strokes drawn on a touchscreen directly to characters without lexical constraints. Through ergonomic analysis and iterative interaction design, DuSK supports deterministic, eyes-free decoding. Evaluation shows an initial input speed of 10 WPM, rising to 13 WPM after brief training: significantly faster than the cursor-based entry common in commercial SmartTVs (8 WPM) and comparable to state-of-the-art dictionary-dependent methods, while fully supporting arbitrary-vocabulary entry. DuSK removes the longstanding dependence of eyes-free text input on predefined lexicons, balancing speed, openness, and practical usability.
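To make the "unambiguous encoding" idea concrete, here is a minimal sketch of how a dictionary-free stroke code can decode deterministically: give each character a unique, prefix-free sequence of directional strokes, so a stroke stream has exactly one parse. The stroke sequences below are invented for illustration and are not DuSK's actual encoding.

```python
# Hypothetical prefix-free stroke code (L/R/U/D = directional strokes).
# Because no codeword is a prefix of another, greedy decoding is
# unambiguous and needs no dictionary. These mappings are illustrative
# only; they are NOT DuSK's real encoding.
STROKES = {
    "a": ("L",),
    "e": ("R",),
    "t": ("U", "D"),
    "h": ("D", "U"),
    " ": ("D", "D"),
}

# Invert the table for decoding.
DECODE = {seq: ch for ch, seq in STROKES.items()}

def encode(text):
    """Flatten text into a stream of strokes."""
    out = []
    for ch in text:
        out.extend(STROKES[ch])
    return out

def decode(strokes):
    """Greedily match strokes; unambiguous because the code is prefix-free."""
    text, buf = [], []
    for s in strokes:
        buf.append(s)
        if tuple(buf) in DECODE:
            text.append(DECODE[tuple(buf)])
            buf = []
    return "".join(text)
```

For example, `decode(encode("the"))` recovers `"the"` exactly, with no lexicon consulted; this is the property that lets OOV words be entered as freely as dictionary words.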
📝 Abstract
Given the ubiquity of SmartTVs and head-mounted-display-based virtual environments, recent research has explored techniques to support eyes-free text entry using touchscreen devices. However, proposed techniques leverage lexicons, limiting the user's ability to enter out-of-vocabulary words. In this paper, we investigate how to enter text while relying on unambiguous input to support out-of-vocabulary words. Through an iterative design approach, and after a careful investigation of actions that can be accurately and rapidly performed eyes-free, we devise DuSK, a {Du}al-handed, {S}troke-based, 1{K}eyboarding technique. In a controlled experiment, we show initial speeds of 10 WPM steadily increasing to 13 WPM with training. DuSK outperforms the common cursor-based text entry technique widely deployed in commercial SmartTVs (8 WPM) and is comparable to other eyes-free lexicon-based techniques, but with the added benefit of supporting out-of-vocabulary word input.