AI Summary
This work addresses the critical bottleneck in Spoken Visual Question Answering (SVQA), namely the lack of authentic multimodal (text + image + speech) datasets, by proposing the first end-to-end SVQA framework. Methodologically: (1) it introduces zero-shot text-to-speech (TTS) synthesis (using VITS/Coqui TTS) to generate a scalable, high-fidelity speech modality for training; (2) it designs a tri-stream encoder architecture integrating textual, visual, and acoustic representations, enabling effective cross-modal alignment and fusion. Contributions include: the first formal task definition of SVQA; elimination of reliance on costly human speech annotation; a model trained solely on synthetic speech that achieves 98.3% of the performance upper bound attained under full text supervision; and minimal sensitivity to TTS model choice (±0.4% variance), demonstrating strong robustness. This work establishes a new paradigm for low-resource multimodal question answering.
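The tri-stream design described above can be sketched as follows. Everything here is an illustrative assumption, not the paper's actual architecture: the embedding dimensions, the random-projection stand-ins for pretrained encoders, and additive fusion in a shared space are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embedding sizes for the three modality encoders
# (text, image, speech) and the shared fusion space.
D_TEXT, D_IMG, D_SPEECH, D_FUSED = 768, 512, 256, 512

# Stand-ins for learned projection heads: each maps a modality
# embedding into the shared fusion space.
W_text = rng.standard_normal((D_TEXT, D_FUSED)) / np.sqrt(D_TEXT)
W_img = rng.standard_normal((D_IMG, D_FUSED)) / np.sqrt(D_IMG)
W_speech = rng.standard_normal((D_SPEECH, D_FUSED)) / np.sqrt(D_SPEECH)

def fuse(text_emb, img_emb, speech_emb):
    """Project each modality into the shared space, fuse by
    summation, and L2-normalize the fused representation."""
    h = text_emb @ W_text + img_emb @ W_img + speech_emb @ W_speech
    return h / np.linalg.norm(h)

fused = fuse(rng.standard_normal(D_TEXT),
             rng.standard_normal(D_IMG),
             rng.standard_normal(D_SPEECH))
print(fused.shape)  # (512,)
```

The key point the sketch illustrates is that once all three modalities land in one shared space, a downstream answer head can treat spoken and textual questions interchangeably.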
Abstract
Question answering (QA) systems are designed to answer natural language questions. Visual QA (VQA) and Spoken QA (SQA) systems extend textual QA systems to accept visual and spoken input, respectively. This work aims to create a system that enables user interaction through both speech and images. That is achieved by fusing the text, speech, and image modalities to tackle the task of spoken VQA (SVQA). The resulting multi-modal model takes textual, visual, and spoken inputs and can answer spoken questions about images. Training and evaluating SVQA models requires a dataset covering all three modalities, but no such dataset currently exists. We address this problem by synthesizing speech for existing VQA datasets using two zero-shot TTS models. Our initial findings indicate that a model trained only on synthesized speech nearly reaches the performance of the upper-bounding model trained on textual QA pairs. In addition, we show that the choice of TTS model has only a minor impact on accuracy.
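The dataset-synthesis step can be sketched as the loop below. The `SVQASample` record, `synthesize_question` helper, and file layout are hypothetical stand-ins: a real pipeline would replace the stub with a zero-shot TTS call (e.g. Coqui TTS writing a wav per question).

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class SVQASample:
    image_path: str
    question_text: str
    question_audio: str  # path to the synthesized speech file
    answer: str

def synthesize_question(text: str, out_path: str) -> str:
    """Placeholder for a zero-shot TTS call; here it just writes a
    dummy file so the pipeline is runnable end to end."""
    Path(out_path).write_bytes(b"RIFF")  # stand-in for real audio
    return out_path

def build_svqa_dataset(vqa_triples, audio_dir="audio"):
    """Turn (image, question, answer) VQA triples into SVQA samples
    by synthesizing speech for each question."""
    Path(audio_dir).mkdir(exist_ok=True)
    dataset = []
    for i, (img, question, answer) in enumerate(vqa_triples):
        wav = synthesize_question(question, f"{audio_dir}/q{i}.wav")
        dataset.append(SVQASample(img, question, wav, answer))
    return dataset

samples = build_svqa_dataset([("img0.jpg", "What color is the car?", "red")])
print(samples[0].question_audio)  # audio/q0.wav
```

Because the speech is generated from the existing textual questions, the original text answers remain valid labels, which is what makes the text-supervised model a natural upper bound.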