Caption: Generating Informative Content Labels for Image Buttons Using Next-Screen Context

📅 2025-08-12
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address poor screen reader accessibility in mobile applications caused by missing or ambiguous image-button labels, this paper proposes an LLM-driven labeling method that incorporates cross-screen context. Unlike conventional approaches that rely solely on the visual or textual features of the current interface, the method simulates the user's navigation to the destination screen, extracts semantic associations between the origin and destination interfaces, and uses multi-stage prompting to jointly model both screens. Preliminary results show that the generated labels are more accurate and descriptive than both human annotations and a single-screen LLM baseline. The method offers an automated, interpretable, and deployable path to accessibility optimization, enabling developers to remediate UI elements precisely and supporting on-demand screen reader repairs.
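The navigation simulation described above can be sketched as follows. This is a hypothetical illustration only: the paper does not specify its UI instrumentation, so the driver class, its methods (`tap`, `snapshot`, `back`), and the `btn_compose` element ID are all invented stand-ins.

```python
class FakeDriver:
    """Stand-in UI driver: tapping the unlabeled compose button opens 'New Message'."""

    def __init__(self):
        self.screen = {"title": "Inbox", "texts": ["Messages", "Search"]}

    def tap(self, element_id):
        # Simulate the interaction under test.
        if element_id == "btn_compose":
            self.screen = {"title": "New Message", "texts": ["To:", "Subject:", "Send"]}

    def snapshot(self):
        # Capture the currently visible screen content.
        return dict(self.screen)

    def back(self):
        # Restore the app to the origin screen.
        self.screen = {"title": "Inbox", "texts": ["Messages", "Search"]}


def collect_next_screen_context(driver, element_id):
    """Record the origin screen, simulate the tap, capture the destination
    screen, then navigate back so the app state is left unchanged."""
    origin = driver.snapshot()
    driver.tap(element_id)
    destination = driver.snapshot()
    driver.back()
    return origin, destination


origin, dest = collect_next_screen_context(FakeDriver(), "btn_compose")
print(origin["title"], "->", dest["title"])
```

The destination snapshot supplies the "next-screen context" that a single-screen captioner never sees; pairing it with the origin snapshot is what lets the label reflect the button's actual effect.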

๐Ÿ“ Abstract
We present Caption, an LLM-powered content label generation tool for visual interactive elements on mobile devices. Content labels are essential for screen readers to provide announcements for image-based elements, but are often missing or uninformative due to developer neglect. Automated captioning systems attempt to address this, but are limited to on-screen context, often resulting in inaccurate or unspecific labels. To generate more accurate and descriptive labels, Caption collects next-screen context on interactive elements by navigating to the destination screen that appears after an interaction and incorporating information from both the origin and destination screens. Preliminary results show Caption generates more accurate labels than both human annotators and an LLM baseline. We expect Caption to empower developers by providing actionable accessibility suggestions and directly support on-demand repairs by screen reader users.
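A minimal sketch of how origin- and destination-screen information might be combined into one labeling prompt. The paper's actual prompt wording and multi-stage pipeline are not published, so the `Screen` structure, prompt template, and example screens below are assumptions for illustration.

```python
from dataclasses import dataclass


@dataclass
class Screen:
    """Simplified UI snapshot: screen title plus its visible text items."""
    title: str
    texts: list


def build_label_prompt(element_desc, origin, destination):
    """Assemble a prompt that pairs the origin screen with the next-screen
    context captured after simulating a tap on the unlabeled element."""
    origin_ctx = f"Origin screen '{origin.title}': " + "; ".join(origin.texts)
    dest_ctx = f"Destination screen '{destination.title}': " + "; ".join(destination.texts)
    return (
        "You are labeling an unlabeled image button for a screen reader.\n"
        f"{origin_ctx}\n"
        f"After tapping the button, the app shows: {dest_ctx}\n"
        f"Element: {element_desc}\n"
        "Respond with a concise, action-oriented content label."
    )


origin = Screen("Inbox", ["Messages", "Search", "unlabeled image button"])
dest = Screen("New Message", ["To:", "Subject:", "Send"])
prompt = build_label_prompt("image button, top-right corner", origin, dest)
print(prompt)
```

Here the destination screen ("New Message") disambiguates the button's purpose (composing a message), which the origin screen alone could not; an LLM given this prompt can produce a label like "Compose new message" rather than a generic guess.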
Problem

Research questions and friction points this paper addresses.

Image buttons often ship with missing or uninformative content labels, so screen readers cannot announce them
Automated captioners limited to on-screen context produce inaccurate or unspecific labels
Screen reader users cannot reliably tell what an unlabeled interactive element does
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-powered tool (Caption) that generates content labels for image buttons
Collects next-screen context by navigating to the destination screen an interaction triggers
Jointly models origin- and destination-screen information when generating labels
🔎 Similar Papers
No similar papers found.