🤖 AI Summary
Existing assistive technologies often fail to address the diverse needs of people with visual impairments, and do-it-yourself (DIY) assistive tools are frequently out of reach for non-expert users because they demand specialized technical skills. This work positions large language models (LLMs) as co-constructive collaborators for visually impaired people in the physical creation of DIY assistive devices, paired with accessible human-computer interaction design so that users without programming backgrounds can take part. The study uncovers key design opportunities, such as spatial and visual support and strategies for handling AI errors, and identifies core challenges and promising directions in AI-assisted DIY making. These findings provide empirical grounding and actionable design guidance for building more inclusive and accessible AI-driven assistive technologies.
📝 Abstract
Existing assistive technologies (AT) often adopt a one-size-fits-all approach, overlooking the diverse needs of people with visual impairments (PVI). Do-it-yourself AT (DIY-AT) toolkits offer one path toward customization, but most remain limited, targeting co-design with engineers or requiring programming expertise. Non-professionals with disabilities, including PVI, also face barriers such as inaccessible tools, lack of confidence, and insufficient technical knowledge. These gaps highlight the need for prototyping technologies that enable PVI to directly make their own AT. Building on emerging evidence that large language models (LLMs) can serve not only as visual aids but also as co-design partners, we present an exploratory study of how LLM-based AI can support PVI in the tangible DIY-AT co-making process. Our findings surface key challenges and design opportunities: the need for greater spatial and visual support, strategies for mitigating novel AI errors, and implications for designing more accessible AI-assisted prototypes.