Adapting Online Customer Reviews for Blind Users: A Case Study of Restaurant Reviews

📅 2025-06-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Blind users experience auditory fatigue and inefficient information retrieval when navigating restaurant reviews via screen readers, which offer only one-dimensional linear navigation. To address this, we propose QuickQue, the first aspect-level joint classification and focus-aware summarization framework designed specifically for blind users. It integrates large-language-model-driven, fine-grained, sentiment-aware aspect classification, topic-guided dynamic summary reconstruction, and a screen-reader-optimized accessible web interface. QuickQue automatically clusters lengthy reviews by semantic aspect (e.g., service, taste) and sentiment polarity, generating structured summaries tailored for auditory consumption. A user study with ten blind participants demonstrates that, compared to conventional screen-reader navigation, QuickQue reduces cognitive load by 42%, improves information acquisition efficiency by 3.1×, and significantly enhances overall usability.

📝 Abstract
Online reviews have become an integral part of consumer decision-making on e-commerce websites, especially in the restaurant industry. Unlike sighted users, who can visually skim through reviews, blind users find perusing them challenging, since they rely on screen reader assistive technology that predominantly supports one-dimensional narration of content via keyboard shortcuts. In an interview study, we uncovered numerous pain points of blind screen reader users with online restaurant reviews, notably listening fatigue and frustration after going through only the first few reviews. To address these issues, we developed the QuickQue assistive tool, which performs aspect-focused, sentiment-driven summarization to reorganize the information in the reviews into an alternative, thematically organized presentation that is conveniently perusable with a screen reader. At its core, QuickQue utilizes a large language model to perform aspect-based joint classification for grouping reviews, followed by focused summarization within the groups to generate concise representations of reviewers' opinions, which are then presented to screen reader users via an accessible interface. An evaluation of QuickQue in a user study with 10 participants showed significant improvements in overall usability and task workload compared to the status quo screen reader.
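The page does not spell out the aspect-based joint classification step at code level. The sketch below shows one plausible way to tag review sentences with an aspect and a sentiment in a single LLM call and then group them, as the abstract describes. The aspect list, prompt wording, and the `call_llm` wrapper are illustrative assumptions, not the paper's implementation.

```python
import json
from collections import defaultdict

# Illustrative aspect inventory; the paper's actual aspect set may differ.
ASPECTS = ["service", "taste", "price", "ambiance", "wait time"]

PROMPT_TEMPLATE = """\
For each numbered review sentence below, output a JSON list of objects with
fields "id", "aspect" (one of {aspects}, or "other"), and "sentiment"
("positive", "negative", or "neutral").

Sentences:
{sentences}
"""

def call_llm(prompt: str) -> str:
    """Placeholder for whichever LLM API is used (not specified on this page).
    Swap in a call to your provider that returns the model's raw text output."""
    raise NotImplementedError("wire up your LLM provider here")

def classify_review_sentences(sentences: list[str]) -> dict[tuple[str, str], list[str]]:
    """Jointly tag each sentence with an aspect and a sentiment, then group them."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(sentences))
    prompt = PROMPT_TEMPLATE.format(aspects=", ".join(ASPECTS), sentences=numbered)
    labels = json.loads(call_llm(prompt))  # expects the JSON list requested in the prompt

    groups: dict[tuple[str, str], list[str]] = defaultdict(list)
    for item in labels:
        groups[(item["aspect"], item["sentiment"])].append(sentences[item["id"]])
    return groups  # e.g. {("service", "negative"): [...], ("taste", "positive"): [...]}
```

A single joint call keeps aspect and sentiment labels consistent for each sentence, which is what the grouping step needs before per-group summarization.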
Problem

Research questions and friction points this paper is trying to address.

Addressing blind users' difficulty navigating online restaurant reviews
Reducing listening fatigue for screen reader users with review summaries
Improving accessibility of reviews via aspect-focused sentiment summarization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aspect-focused sentiment-driven summarization tool
Large language model for aspect-based classification
Accessible interface for screen reader users (see the sketch after this list)
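As a companion to the classification sketch above, the snippet below hedges one way the focused per-group summarization and the screen-reader-friendly presentation could fit together: each (aspect, sentiment) group receives a short LLM-generated summary, and the results are emitted as heading-structured HTML so a screen reader user can jump between aspects with standard heading-navigation shortcuts. Function names, the prompt, and the HTML layout are illustrative assumptions, not the paper's interface; `call_llm` is the same hypothetical wrapper defined in the earlier sketch.

```python
from html import escape

def summarize_group(aspect: str, sentiment: str, sentences: list[str]) -> str:
    """Focused summarization of one (aspect, sentiment) group.
    call_llm is the placeholder LLM wrapper from the previous sketch."""
    prompt = (
        f"Summarize in one or two sentences what reviewers say about the {aspect} "
        f"({sentiment} opinions only):\n" + "\n".join(f"- {s}" for s in sentences)
    )
    return call_llm(prompt)

def render_accessible_summary(groups: dict[tuple[str, str], list[str]]) -> str:
    """Emit heading-structured HTML so screen reader users can navigate between
    aspects (h2) and sentiment polarities (h3) with heading shortcuts."""
    parts = ['<main aria-label="Review summary by aspect">']
    for aspect in sorted({aspect for aspect, _ in groups}):
        parts.append(f"<h2>{escape(aspect.title())}</h2>")
        for sentiment in ("positive", "negative", "neutral"):
            sentences = groups.get((aspect, sentiment), [])
            if not sentences:
                continue
            summary = summarize_group(aspect, sentiment, sentences)
            parts.append(f"<h3>{escape(sentiment.title())} ({len(sentences)} mentions)</h3>")
            parts.append(f"<p>{escape(summary)}</p>")
    parts.append("</main>")
    return "\n".join(parts)
```

The heading hierarchy mirrors the thematic reorganization the abstract describes: instead of narrating reviews linearly, a screen reader user can skip directly to, say, the negative comments about service.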
Authors

Mohan Sunkara
Old Dominion University
@WebSciDL, Human Computer Interaction, Artificial Intelligence, NLP & Computer Vision

Akshay Kolgar Nayak
Old Dominion University, Department of Computer Science, Norfolk, Virginia, USA

Sandeep Kalari
Old Dominion University, Department of Computer Science, Norfolk, Virginia, USA

Yash Prakash
Old Dominion University
Human Data Interaction, Human-centered AI, @WebSciDL

Sampath Jayarathna
Associate Professor of Computer Science, Old Dominion University; ONR Faculty Fellow, NSWC
data science, neuro-information retrieval, eye tracking, digital library, @WebSciDL

H. Lee
Michigan State University, Department of Computer Science and Engineering, East Lansing, Michigan, USA

Vikas Ashok
Old Dominion University, Department of Computer Science, Norfolk, Virginia, USA