🤖 AI Summary
Blind and low-vision (BLV) professionals face systemic exclusion from visually oriented occupational tasks due to inaccessible visual interfaces and entrenched societal bias.
Method: This workshop paper offers design suggestions for generative AI–based visual description that is contextualized and personalized for professional tasks—content creation, critique, and information consumption—adapting to task-specific contexts and individual user preferences.
Contribution/Results: It articulates a design direction for AI-powered visual description oriented toward professional inclusion, and concludes by discussing how such designs could improve BLV users’ autonomy, workplace inclusion, and skill development over time, bridging gaps between assistive technology and domain-specific professional practice.
📝 Abstract
Many blind and low-vision (BLV) people are excluded from professional roles that may involve visual tasks due to access barriers and persistent stigma. Advances in generative AI systems can support BLV people by providing contextual and personalized visual descriptions for creation, critique, and consumption. In this workshop paper, we offer design suggestions for how visual descriptions can be better contextualized across multiple professional tasks. We conclude by discussing how these designs can improve autonomy, inclusion, and skill development over time.