🤖 AI Summary
Current audio screen readers offer low information bandwidth and poor interaction efficiency for blind and low-vision (BLV) users on touchscreens because the auditory channel is inherently sequential. To address this, we propose a spatial haptic-augmented screen reading method that encodes an interface's spatial structure into a bimodal haptic–auditory channel via adaptive spatiotemporal haptic feedback and customizable tactile overlays. Combined with spatial touch recognition, user behavior awareness, and closed-loop guidance algorithms, the system enables efficient, touchpoint-driven navigation. In a user study with 12 BLV participants, our approach significantly improved task efficiency in data visualization browsing and banking application use, achieving an average speedup of 37%. The results validate the effectiveness of spatially constrained navigation and haptic guidance, and inform a set of reusable multimodal interaction design principles.
📝 Abstract
Screen readers are audio-based software that Blind and Low Vision (BLV) people use to interact with computing devices such as tablets and smartphones. Although this technology has significantly improved the accessibility of touchscreen devices, the sequential nature of audio limits the bandwidth of information users can receive and process. We introduce TapNav, an adaptive spatiotactile screen reader prototype developed for interacting with touchscreen interfaces spatially. TapNav's screen reader provides adaptive auditory feedback that, in combination with a tactile overlay, conveys the spatial structure and location of interface elements on-screen. We evaluated TapNav with 12 BLV users who used it to explore a data visualization and to interact with a bank transactions application. Our qualitative findings show that touch points and spatially constrained navigation helped users anticipate outcomes, explore faster, and offload cognitive load to touch. We provide design guidelines for creating tactile overlays for adaptive spatiotactile screen readers and discuss their generalizability beyond our exploratory data analysis and everyday application navigation scenarios.