🤖 AI Summary
This study investigates, for the first time, the capability of vision-language models (VLMs) to simulate the visual perception of individuals with low vision. Method: Leveraging a dataset of open-ended and multiple-choice image perception responses from 40 low vision participants, we propose a personalized prompting strategy that combines each participant's vision information with example image responses to generate subject-specific perceptual simulations. Using GPT-4o as the base VLM, we evaluate the agreement between simulated and original participant responses under both open-ended and multiple-choice protocols. Contribution/Results: Baseline agreement with minimal (image-only) prompts is 0.59, and providing only vision information or only example responses yields no improvement; combining vision information with a single example response significantly raises agreement to 0.70 (p < 0.0001), while additional examples provide no further gain (p > 0.05). This work establishes the first VLM-based paradigm for low vision perceptual modeling and demonstrates that lightweight, personalized prompting substantially enhances simulation fidelity.
📝 Abstract
Advances in vision-language models (VLMs) have enabled the simulation of general human behavior through their reasoning and problem-solving capabilities. However, prior research has not investigated such simulation capabilities in the accessibility domain. In this paper, we evaluate the extent to which VLMs can simulate the visual perception of low vision individuals when interpreting images. We first compile a benchmark dataset through a survey study with 40 low vision participants, collecting their brief and detailed vision information as well as open-ended and multiple-choice image perception and recognition responses to up to 25 images. Using these responses, we construct prompts for a VLM (GPT-4o) to create simulated agents of each participant, varying the included vision information and example image responses. We evaluate the agreement between VLM-generated responses and participants' original answers. Our results indicate that VLMs tend to infer beyond the specified vision ability when given minimal prompts, resulting in low agreement (0.59). The agreement between the agents' and participants' responses remains low when only the vision information (0.59) or only example image responses (0.59) are provided, whereas a combination of both significantly increases the agreement (0.70, p < 0.0001). Notably, a single example combining both open-ended and multiple-choice responses offers significant improvements over either alone (p < 0.0001), while additional examples provide minimal benefit (p > 0.05).
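The paper does not include code here, but the prompting setup it describes (a participant's vision information plus one example image response given to GPT-4o, which then answers a new image question as that participant) can be sketched roughly as below. This is a minimal illustration using the OpenAI Python SDK; the function name, data fields, and prompt wording are assumptions, not the authors' implementation.

```python
# Illustrative sketch of a "simulated participant" prompt for GPT-4o.
# The participant's self-reported vision description and one prior
# (question, image, answer) exemplar are placed in the prompt; the model
# is then asked to answer a new image question in that participant's voice.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def simulate_response(vision_description: str,
                      example: dict,
                      new_image_url: str,
                      question: str) -> str:
    system_prompt = (
        "You are simulating a specific low vision participant in an image "
        "perception study. Answer exactly as this participant would, given "
        "their self-reported vision: " + vision_description
    )
    messages = [
        {"role": "system", "content": system_prompt},
        # Exemplar turn: how the participant actually answered a previous image.
        {"role": "user", "content": [
            {"type": "text", "text": example["question"]},
            {"type": "image_url", "image_url": {"url": example["image_url"]}},
        ]},
        {"role": "assistant", "content": example["participant_answer"]},
        # New image to simulate a response for.
        {"role": "user", "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": new_image_url}},
        ]},
    ]
    completion = client.chat.completions.create(model="gpt-4o", messages=messages)
    return completion.choices[0].message.content
```

In the study's terms, agreement would then be computed by comparing the returned answer against the participant's original open-ended or multiple-choice response; the exact prompt contents and number of exemplars are the variables the paper ablates.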