🤖 AI Summary
Geographic visualizations remain largely inaccessible to screen-reader users, impeding their access to spatial information. This work presents GeoVisA11y, the first natural-language question-answering system for geovisualizations tailored to blind and visually impaired users, leveraging large language models to integrate geospatial, visual, and contextual semantic understanding. The system supports diverse tasks, including map reading, analysis, interpretation, and navigation. User studies demonstrate that it effectively bridges the accessibility gap and reveal distinct interaction patterns between screen-reader and sighted users. The project also openly releases the system implementation, interaction behavior logs, and a geospatial query dataset, providing a foundation for future research on accessible geographic information interaction.
📝 Abstract
Geovisualizations are powerful tools for communicating spatial information, but they remain inaccessible to screen-reader users. To address this limitation, we present GeoVisA11y, an LLM-based question-answering system that makes geovisualizations accessible through natural language interaction. The system supports map reading, analysis, interpretation, and navigation by handling analytical, geospatial, visual, and contextual queries. Through user studies with 12 screen-reader users and sighted participants, we demonstrate that GeoVisA11y effectively bridges accessibility gaps while revealing distinct interaction patterns between user groups. We contribute: (1) an open-source, accessible geovisualization system, (2) empirical findings on query and navigation differences, and (3) a dataset of geospatial queries to inform future research on accessible data visualization.
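
The abstract does not detail the system's internals, but its taxonomy of analytical, geospatial, visual, and contextual queries suggests a classify-then-route architecture. The sketch below illustrates one plausible way to implement such routing; it is an assumption, not GeoVisA11y's actual design, and `llm_complete`, `classify_query`, and the handler functions are hypothetical names (the LLM call is stubbed with a keyword heuristic so the example runs offline).

```python
"""Minimal sketch of LLM-based query routing for an accessible
geovisualization QA system. All names here are illustrative."""

from typing import Callable

QUERY_TYPES = ("analytical", "geospatial", "visual", "contextual")


def llm_complete(prompt: str) -> str:
    """Stand-in for a real LLM chat-completion call.

    A trivial keyword heuristic keeps the sketch self-contained; a real
    system would send `prompt` to an LLM API instead.
    """
    text = prompt.lower()
    if any(w in text for w in ("average", "highest", "compare", "trend")):
        return "analytical"
    if any(w in text for w in ("where", "north", "border", "nearby")):
        return "geospatial"
    if any(w in text for w in ("color", "legend", "shade", "symbol")):
        return "visual"
    return "contextual"


def classify_query(question: str) -> str:
    """Ask the (stubbed) LLM to label a question with one query type."""
    prompt = (
        "Classify the user's question about a geovisualization as one of: "
        f"{', '.join(QUERY_TYPES)}.\n"
        f"Question: {question}\n"
        "Answer with a single label."
    )
    label = llm_complete(prompt).strip().lower()
    return label if label in QUERY_TYPES else "contextual"


def answer(question: str) -> str:
    """Dispatch to a type-specific handler that would, in a full system,
    combine map data, visual encodings, and context into a spoken answer."""
    handlers: dict[str, Callable[[str], str]] = {
        "analytical": lambda q: f"[analysis over map data for: {q}]",
        "geospatial": lambda q: f"[spatial lookup for: {q}]",
        "visual": lambda q: f"[visual-encoding description for: {q}]",
        "contextual": lambda q: f"[background context for: {q}]",
    }
    return handlers[classify_query(question)](question)


if __name__ == "__main__":
    print(answer("Which state has the highest unemployment rate?"))
    print(answer("What does the darkest shade on the legend mean?"))
```

Separating classification from answer generation, as sketched here, would let each query type use its own data sources (e.g., underlying geodata for geospatial lookups versus chart encodings for visual questions), though the paper's released implementation should be consulted for the system's actual approach.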