Deaf and Hard of Hearing Access to Intelligent Personal Assistants: Comparison of Voice-Based Options with an LLM-Powered Touch Interface

📅 2026-01-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the persistent challenge of intelligent personal assistants (IPAs) failing to recognize the diverse speech patterns of deaf and hard-of-hearing (DHH) individuals, which significantly limits their voice-based accessibility. To overcome this barrier, the work proposes a novel context-aware touch interface grounded in large language models (LLMs), which generates task-oriented prompts by leveraging user history and environmental context as an alternative to speech recognition. Implemented on the Amazon Echo Show platform, the system integrates automatic speech recognition (ASR), a Wizard-of-Oz paradigm, and LLM-driven prompting. A mixed-methods evaluation comparing native speech, human-transcribed input, and LLM-augmented touch interaction reveals that the touch-based approach matches voice input in usability, while user preferences exhibit notable diversity. These findings underscore the critical need for native speech support for DHH users and open new avenues for accessible human-computer interaction.

📝 Abstract
We investigate intelligent personal assistant (IPA) accessibility for deaf and hard of hearing (DHH) people who use their voice in everyday communication. The inability of IPAs to understand diverse accents, including deaf speech, renders them largely inaccessible to non-signing, speaking DHH individuals. Using an Echo Show, we conduct a mixed-methods study comparing the usability of natural language input via spoken English (both with Alexa's automatic speech recognition and in a Wizard-of-Oz setting in which a trained facilitator re-speaks commands) against that of a large language model (LLM)-assisted touch interface. The touch method was navigated through an LLM-powered "task prompter," which integrated the user's history and smart environment to suggest contextually appropriate commands. Quantitative results showed no significant differences between either spoken-English condition and LLM-assisted touch. Qualitative results showed variability in opinions on the usability of each method. Ultimately, robust native recognition of deaf-accented speech by IPAs will be necessary.
Problem

Research questions and friction points this paper is trying to address.

deaf speech
intelligent personal assistants
speech recognition
accessibility
accent variability
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-powered touch interface
deaf speech recognition
intelligent personal assistants accessibility
task prompter
mixed-methods evaluation
Authors

Paige S Devries, Gallaudet University, USA
Michaela Okosi, Gallaudet University, USA
Ming Li, Gallaudet University, USA
Nora Dunphy, University of California, Berkeley, USA
Gidey W. Gezae, Pennsylvania State University, USA
Dante Conway, Gallaudet University, USA
Abraham Glasser, Assistant Professor, Gallaudet University
Raja S. Kushalnagar, Gallaudet University, USA
Christian Vogler, Associate Professor and Director, Technology Access Program, Gallaudet University
Sign Language Recognition · Face Tracking · Gesture Recognition · Telecommunications Access