AI Summary
Current web-based 3D surface and point-cloud visualization tools rely heavily on visual interaction, rendering them inaccessible to blind and low-vision (BLV) users in browser environments. To address this, we propose DIXTRAL, the first browser-native, synchronous multimodal 3D data visualization system designed specifically for BLV users. It integrates data sonification, dynamic textual descriptions, and optional visual feedback, and supports both keyboard and game-controller input. Its interaction logic was co-designed with BLV stakeholders and refined through iterative user studies. Experimental evaluation demonstrates that DIXTRAL significantly improves BLV users' ability to recognize structural patterns in 3D scalar fields, maintain spatial orientation, and conduct efficient exploratory analysis. This work contributes a reusable architectural paradigm and empirically grounded design guidelines for inclusive scientific visualization.
Abstract
Blind and low-vision (BLV) users remain largely excluded from three-dimensional (3D) surface and point data visualizations due to their reliance on visual interaction. Existing approaches inadequately support non-visual access, especially in browser-based environments. This study introduces DIXTRAL, a hosted web-native system co-designed with BLV researchers to address these gaps through multimodal interaction. The study was conducted with two blind researchers and one sighted researcher over sustained design sessions. Data were gathered through iterative testing of the prototype, collecting feedback on spatial navigation, sonification, and usability. Co-design observations demonstrate that synchronized auditory, visual, and textual feedback, combined with keyboard and gamepad navigation, enhances both structure discovery and orientation. DIXTRAL aims to improve access to 3D continuous scalar fields for BLV users and to inform best practices for creating inclusive 3D visualizations.
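To make the sonification idea concrete: the abstract does not specify DIXTRAL's exact mapping, but a common browser-native approach is to map each sampled scalar value to a pitch within a fixed frequency band, which a synthesizer (e.g., a Web Audio API oscillator) can then play as the user navigates. The function below is an illustrative sketch under that assumption; the name, frequency band, and exponential mapping are choices made here for demonstration, not details from the paper.

```javascript
// Hypothetical helper: map a scalar sample to an audible pitch.
// Exponential interpolation between fLow and fHigh gives perceptually
// even pitch steps (equal ratios sound like equal intervals).
function scalarToFrequency(value, min, max, fLow = 220, fHigh = 880) {
  // Normalize the scalar into [0, 1], guarding against a degenerate range.
  const t = max > min ? (value - min) / (max - min) : 0;
  return fLow * Math.pow(fHigh / fLow, t);
}

console.log(scalarToFrequency(0, 0, 1));   // lowest pitch: 220 Hz
console.log(scalarToFrequency(1, 0, 1));   // highest pitch: 880 Hz
console.log(scalarToFrequency(0.5, 0, 1)); // geometric midpoint: 440 Hz
```

In a real system the returned frequency would drive something like `OscillatorNode.frequency` while keyboard or gamepad events move the query point across the scalar field, keeping the audio synchronized with the textual and visual channels the abstract describes.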