AI Summary
This study addresses the pervasive hallucination issues in current generative vision-language models (VLMs) when performing page-level semantic understanding of comics, a problem that critically hinders effective access for blind and visually impaired users. The work presents a taxonomy of hallucinations specific to comic understanding and a preliminary benchmark of VLM performance on comic interpretation tasks. Through human-in-the-loop analysis, it reveals significant deficiencies in existing models' semantic coherence and contextual reasoning, and highlights the inadequacy of relying solely on semantic similarity metrics for evaluation. Building on these findings, the authors offer guidance on future research, emphasizing hallucination mitigation and improved data curation as steps toward reliable, interpretable VLMs that support accessible comic comprehension.
Abstract
A system that enables blind or visually impaired users to access comics and manga would open a new medium of storytelling to this community. However, no such system currently exists. Generative vision-language models (VLMs) have shown promise in describing images and understanding comics, but most research on comic understanding is limited to panel-level analysis. To fully support blind and visually impaired users, greater attention must be paid to page-level understanding and interpretation. In this work, we present a preliminary benchmark of VLM performance on comic interpretation tasks. We identify and categorize the hallucinations that emerge during this process, organizing them into generalized object-hallucination taxonomies. We conclude with guidance on future research, emphasizing hallucination mitigation and improved data curation for comic interpretation.