🤖 AI Summary
Image quality issues such as blur and misframing significantly impair the accuracy of vision-language models (VLMs) when they describe everyday products for blind and low-vision (BLV) users, yet prior evaluations have not been grounded in BLV people's actual information needs.
Method: Grounded in a survey with 86 BLV participants, we systematically evaluate how common image quality issues affect VLM-generated captions, combining quantitative recognition accuracy with an assessment of whether captions meet users' information needs across diverse degradation conditions.
Contribution/Results: The best model recognizes products in images with no quality issues with 98% accuracy, but accuracy drops to 75% overall when blur or misframing is present and worsens considerably as issues compound. The analysis exposes accessibility gaps in current VLMs, argues for model evaluations that center disabled people's experiences throughout the process, and offers concrete recommendations for HCI and ML researchers to make VLMs more reliable for BLV people.
📝 Abstract
Vision-Language Models (VLMs) are increasingly used by blind and low-vision (BLV) people to identify and understand products in their everyday lives, such as food, personal products, and household goods. Despite their prevalence, we lack an empirical understanding of how common image quality issues, like blur and misframing of items, affect the accuracy of VLM-generated captions and whether resulting captions meet BLV people's information needs. Grounded in a survey with 86 BLV people, we systematically evaluate how image quality issues affect captions generated by VLMs. We show that the best model recognizes products in images with no quality issues with 98% accuracy, but drops to 75% accuracy overall when quality issues are present, worsening considerably as issues compound. We discuss the need for model evaluations that center on disabled people's experiences throughout the process and offer concrete recommendations for HCI and ML researchers to make VLMs more reliable for BLV people.
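The abstract's core evaluation loop (degrade an image, caption it with a VLM, check whether the product is recognized, aggregate accuracy) can be illustrated with a minimal sketch. This is not the authors' implementation: `caption_with_vlm` is a hypothetical stand-in for whichever captioning model is used, and the degradation functions and ground-truth labels are purely illustrative.

```python
# Minimal sketch of a degrade-then-caption evaluation, assuming a PIL-based
# pipeline and a user-supplied captioning function. Illustrative only.
from dataclasses import dataclass
from typing import Callable, Iterable, Optional
from PIL import Image, ImageFilter


@dataclass
class Sample:
    image_path: str
    product_name: str  # ground-truth label, e.g. "canned tomato soup"


def degrade_blur(img: Image.Image, radius: float = 4.0) -> Image.Image:
    """Simulate camera blur, one of the quality issues studied in the paper."""
    return img.filter(ImageFilter.GaussianBlur(radius))


def degrade_misframe(img: Image.Image, keep: float = 0.5) -> Image.Image:
    """Crudely simulate misframing by cropping away part of the product."""
    w, h = img.size
    return img.crop((0, 0, int(w * keep), int(h * keep)))


def recognition_accuracy(
    samples: Iterable[Sample],
    caption_with_vlm: Callable[[Image.Image], str],
    degrade: Optional[Callable[[Image.Image], Image.Image]] = None,
) -> float:
    """Fraction of samples whose ground-truth product name appears in the caption."""
    hits, total = 0, 0
    for s in samples:
        img = Image.open(s.image_path).convert("RGB")
        if degrade is not None:
            img = degrade(img)
        caption = caption_with_vlm(img).lower()
        hits += int(s.product_name.lower() in caption)
        total += 1
    return hits / max(total, 1)
```

Substring matching here is only a crude proxy for the human-informed recognition judgments the study relies on; a faithful evaluation would also account for synonyms, partial descriptions, and whether the caption answers BLV users' actual questions about the product.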