Characterizing Visual Intents for People with Low Vision through Eye Tracking

📅 2025-01-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of modeling visual behavior and inferring intent during image browsing by users with low vision. Employing eye tracking combined with retrospective think-aloud protocols, we conducted a comparative study across distinct types of visual impairment. We propose, for the first time, a five-category visual intent taxonomy specifically designed for low-vision users. Through qualitative coding and quantitative analysis, we uncover the interplay among visual acuity, image context, and oculomotor features, including fixation distribution and scanpath patterns. Results demonstrate that visual impairment type significantly modulates oculomotor correlates across intent categories; moreover, the taxonomy exhibits both interpretability and cross-group stability. This work provides foundational theoretical insights and empirical evidence to guide the development of intent-aware visual assistance technologies.

📝 Abstract
Accessing visual information is crucial yet challenging for people with low vision due to their visual conditions (e.g., low visual acuity, limited visual field). However, unlike blind people, low vision people have and prefer using their functional vision in daily tasks. Gaze patterns thus become an important indicator to uncover their visual challenges and intents, inspiring more adaptive visual support. We seek to deeply understand low vision users' gaze behaviors in different image viewing tasks, characterizing typical visual intents and the unique gaze patterns exhibited by people with different low vision conditions. We conducted a retrospective think-aloud study using eye tracking with 14 low vision participants and nine sighted controls. Participants completed various image viewing tasks and watched the playback of their gaze trajectories to reflect on their visual experiences. Based on the study, we derived a visual intent taxonomy with five intents characterized by participants' gaze behaviors and demonstrated how low vision conditions affect gaze patterns across visual intents. Our findings underscore the importance of combining visual ability information, image context, and eye tracking data in visual intent recognition, setting up a foundation for intent-aware assistive technologies for low vision.
Problem

Research questions and friction points this paper is trying to address.

Visual Impairment
Visual Behavior
Assistive Technology
Innovation

Methods, ideas, or system contributions that make the work stand out.

Eye Tracking Technology
Visual Impairment
Personalized Visual Assistance
Ru Wang
University of Wisconsin-Madison, Madison, WI, USA
Ruijia Chen
University of Wisconsin-Madison, Madison, WI, USA
Human Computer Interaction · Augmented Reality · Virtual Reality · Accessibility
Anqiao Erica Cai
University of Illinois, Urbana-Champaign, Champaign, IL, USA
Zhiyuan Li
University of Wisconsin-Madison, Madison, WI, USA
Sanbrita Mondal
University of Wisconsin-Madison, Madison, WI, USA
Yuhang Zhao
University of Wisconsin-Madison, Madison, WI, USA