Say It My Way: Exploring Control in Conversational Visual Question Answering with Blind Users

📅 2026-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the rigid interaction paradigms and lack of user-customizable control in existing visual question answering (VQA) systems for blind individuals, which hinder real-world usability. Through a systematic analysis of 418 authentic conversational VQA interactions involving 11 blind participants, complemented by participant reflections, qualitative interviews, and log analysis, the work uncovers user-invented prompting strategies that improve interaction efficiency. Findings show that dialogues averaged three turns (with up to 21) and that users' input text averaged only one-tenth the length of system responses. The research identifies critical system shortcomings, including a lack of verbosity control, inaccurate estimation of distance in space and time, poor understanding of image composition, and inadequate camera guidance. Based on these insights, the paper proposes interaction design directions at both the query and system levels and releases the first public dataset capturing such user behaviors.

📝 Abstract
Prompting and steering techniques are well established in general-purpose generative AI, yet assistive visual question answering (VQA) tools for blind users still follow rigid interaction patterns with limited opportunities for customization. User control can be helpful when system responses are misaligned with users' goals and contexts, a gap that becomes especially consequential for blind users who may rely on these systems for access. We invited 11 blind users to customize their interactions with a real-world conversational VQA system. Drawing on 418 interactions, reflections, and post-study interviews, we analyze the prompting-based techniques participants adopted, including those introduced in the study and those developed independently in real-world settings. VQA interactions were often lengthy: participants averaged 3 turns, sometimes up to 21, with input text typically one-tenth the length of the responses they heard. Built on state-of-the-art LLMs, the system lacked verbosity controls, was limited in estimating distance in space and time, relied on inaccessible image framing, and offered little to no camera guidance. We discuss how customization techniques such as prompt engineering can help participants work around these limitations. Alongside a new publicly available dataset, we offer insights for interaction design at both the query and system levels.
Problem

Research questions and friction points this paper is trying to address.

conversational visual question answering
blind users
user control
customization
assistive AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

conversational VQA
prompt engineering
user control
accessibility
blind users