🤖 AI Summary
This study addresses critical shortcomings in both conventional product manuals and AI-generated instructions, such as those from ChatGPT and Be-My-AI, in supporting blind users during DIY tasks requiring spatial reasoning. Through in-depth interviews and usability testing, it systematically investigates how blind individuals interact with these resources during assembly, operation, and troubleshooting of physical products. The findings reveal that current AI-generated guidance not only fails to compensate for the limitations of traditional documentation but often introduces new barriers through incomplete, incoherent, or misleading information. These insights underscore the need for structured instruction generation tailored to the cognitive and perceptual needs of blind users, offering empirical foundations and design directions for more accessible AI systems.
📝 Abstract
AI tools like ChatGPT and Be-My-AI are increasingly used by blind individuals. Although prior work has explored some Do-It-Yourself (DIY) uses of these tools by blind individuals, little is known about how they combine these tools with available product-manual resources to assemble, operate, and troubleshoot physical or tangible products: tasks requiring spatial reasoning, structural understanding, and precise execution. We address this knowledge gap via an interview study and a usability study with blind participants, investigating how they leverage AI tools and product manuals for DIY tasks with physical products. Findings show that manuals are essential resources, yet product-manual instructions are often inadequate for blind users. AI tools presently do not remedy this insufficiency; in fact, we observed that they often exacerbate it with incomplete, incoherent, or misleading guidance. Lastly, we suggest improvements to AI tools for generating instructions tailored to blind users' DIY tasks involving tangible products.