🤖 AI Summary
To address the information-access barriers that blind and low-vision (BLV) people face when exploring and analyzing data, this work proposes a multimodal interactive system that integrates a refreshable tactile display (RTD) with a conversational agent. The envisaged system couples dynamic tactile graphics with conversational natural language generation (NLG), combining tactile and spoken feedback to proactively assist with analysis of government, health, and personal data. Technically, it brings together tactile rendering, text-to-speech synthesis, dialogue management, natural language understanding (NLU), and NLG modules. Beyond addressing significant equity gaps, the project is expected to lower data-access barriers for BLV users, improve their analytical efficiency and independence, and yield innovations in accessible human–computer interaction, particularly in multimodal interfaces and intelligent assistive technologies.
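The summary above describes a per-turn pipeline of NLU, dialogue management, and dual-channel output (tactile rendering plus synthesized speech). As a rough illustration of how such modules might be coordinated, here is a minimal Python sketch; all class and method names (`NLU`, `DialogueManager`, `TactileDisplay`, `TTS`, `handle_turn`) are hypothetical assumptions, since the paper does not specify an implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch of the multimodal loop described above. All names
# are illustrative assumptions, not APIs from the paper.

@dataclass
class Intent:
    name: str   # e.g. "describe_trend"
    slots: dict # e.g. {"column": "unemployment rate"}

class NLU:
    def parse(self, utterance: str) -> Intent:
        # A real system would use a trained intent classifier / slot filler.
        return Intent(name="describe_trend", slots={"column": utterance})

class DialogueManager:
    def next_action(self, intent: Intent) -> dict:
        # Decide what to render tactilely and what to say about it.
        return {
            "tactile": {"chart": "line", "data": intent.slots},
            "speech": f"Here is the trend for {intent.slots.get('column')}.",
        }

class TactileDisplay:
    def render(self, spec: dict) -> None:
        # Would drive the refreshable pin array; here we just log the spec.
        print(f"[RTD] rendering {spec['chart']} chart: {spec['data']}")

class TTS:
    def speak(self, text: str) -> None:
        # Would call a speech synthesizer; here we just log the utterance.
        print(f"[TTS] {text}")

def handle_turn(utterance: str) -> None:
    """One dialogue turn: NLU -> dialogue management -> tactile + speech."""
    intent = NLU().parse(utterance)
    action = DialogueManager().next_action(intent)
    TactileDisplay().render(action["tactile"])  # tactile channel
    TTS().speak(action["speech"])               # spoken channel

handle_turn("unemployment rate")
```

The point the sketch tries to capture is that each turn fans out to two synchronized output channels, tactile and spoken; that coupling is what would distinguish the envisaged system from a speech-only screen reader.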
📝 Abstract
Our work aims to develop new assistive technologies that enable blind or low-vision (BLV) people to explore and analyze data readily. At present, barriers prevent BLV people from exploring and analyzing data, restricting access to government, health, and personal data, and limiting employment opportunities. This work explores the co-design and development of an innovative system to support data access, with a focus on refreshable tactile displays (RTDs) and conversational agents. The envisaged system will use a combination of tactile graphics and speech to communicate with BLV users and will proactively assist with data analysis tasks. As well as addressing significant equity gaps, we expect our work to produce innovations in assistive technology, multimodal interfaces, dialogue systems, and natural language understanding and generation.