Benchmarking PDF Accessibility Evaluation: A Dataset and Framework for Assessing Automated and LLM-Based Approaches for Accessibility Testing

📅 2025-09-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Academic PDF accessibility evaluation for visually impaired users lacks standardized benchmarks and systematic assessment methodologies. Method: We introduce the first expert-annotated benchmark dataset for academic PDF accessibility, covering seven critical criteria (including alternative text quality and logical reading order), and propose a fine-grained four-category labeling framework. We also establish a standardized PDF accessibility evaluation framework that integrates rule-based tools, large language models, and human annotation in multi-dimensional comparative experiments. Contribution/Results: GPT-4-Turbo achieves the highest overall accuracy (0.85), significantly outperforming traditional tools at judging semantic plausibility; however, its performance remains weak on the "Not Present" and "Cannot Tell" categories. The study advocates a hybrid evaluation paradigm that combines rules, LLMs, and human expertise to advance automated PDF accessibility assessment, providing both a new benchmark and a methodological foundation for future research.

📝 Abstract
PDFs remain the dominant format for scholarly communication, despite significant accessibility challenges for blind and low-vision users. While various tools attempt to evaluate PDF accessibility, there is no standardized methodology to evaluate how different accessibility assessment approaches perform. Our work addresses this critical gap by introducing a novel benchmark dataset of scholarly PDFs with expert-validated accessibility annotations across seven criteria (alternative text quality, logical reading order, semantic tagging, table structure, functional hyperlinks, color contrast, and font readability), and a four-category evaluation framework with standardized labels (Passed, Failed, Not Present, Cannot Tell) to systematically assess accessibility evaluation approaches. Using our evaluation framework, we explore whether large language models (LLMs) are capable of supporting automated accessibility evaluation. We benchmark five LLMs, which demonstrate varying capabilities in correctly assessing different accessibility criteria, with GPT-4-Turbo achieving the highest overall accuracy (0.85). However, all models struggled in correctly categorizing documents with Not Present and Cannot Tell accessibility labels, particularly for alt text quality assessment. Our qualitative comparison with standard automated checkers reveals complementary strengths: rule-based tools excel at technical verification, while LLMs better evaluate semantic appropriateness and contextual relevance. Based on our findings, we propose a hybrid approach that would combine automated checkers, LLM evaluation, and human assessment as a future strategy for PDF accessibility evaluation.
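The abstract describes a four-category label set (Passed, Failed, Not Present, Cannot Tell) applied across seven accessibility criteria, with evaluation approaches scored by accuracy against expert annotations. The snippet below is a minimal sketch of how such scoring could work; it is not the authors' released code, and the data layout, field names, and the per-criterion breakdown are assumptions for illustration.

```python
from collections import Counter

# Label set and criteria as named in the abstract.
LABELS = {"Passed", "Failed", "Not Present", "Cannot Tell"}
CRITERIA = [
    "alt_text_quality", "logical_reading_order", "semantic_tagging",
    "table_structure", "functional_hyperlinks", "color_contrast",
    "font_readability",
]

def accuracy_against_experts(gold, predicted):
    """Score predicted labels against expert annotations.

    gold / predicted: dicts mapping (doc_id, criterion) -> label,
    where label is one of the four categories above.
    Returns (per-criterion accuracy, overall accuracy).
    """
    correct, total = Counter(), Counter()
    for key, gold_label in gold.items():
        _, criterion = key
        assert gold_label in LABELS
        total[criterion] += 1
        if predicted.get(key) == gold_label:
            correct[criterion] += 1
    per_criterion = {c: correct[c] / total[c] for c in total}
    overall = sum(correct.values()) / sum(total.values())
    return per_criterion, overall
```

A per-criterion breakdown of this kind would surface the pattern the paper reports: strong overall accuracy for GPT-4-Turbo, but weaker performance on the Not Present and Cannot Tell categories, especially for alt text quality.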
Problem

Research questions and friction points this paper is trying to address.

Lack of standardized methodology for evaluating PDF accessibility assessment approaches
Need for expert-validated benchmark dataset to test accessibility evaluation methods
Exploring LLM capabilities versus traditional tools for automated PDF accessibility testing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Created benchmark dataset with expert-validated PDF accessibility annotations
Developed four-category evaluation framework for systematic accessibility assessment
Proposed hybrid approach combining automated checkers, LLMs, and human evaluation (a rough routing sketch follows this list)
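The paper proposes the hybrid strategy but, at least in this summary, does not spell out how the three signals would be combined. The function below is a hypothetical routing policy, assuming one rule-based label and one LLM label per criterion; the specific decision rules are an assumption, not the authors' method.

```python
def hybrid_verdict(rule_label, llm_label):
    """Combine a rule-based checker label with an LLM label for one criterion.

    Labels are the four categories from the framework. This routing logic is
    illustrative only; the paper advocates the hybrid paradigm but does not
    prescribe this exact policy.
    """
    # Both automated signals agree: accept the shared label.
    if rule_label == llm_label:
        return rule_label, "auto"
    # Rule-based tools excel at technical verification, so a hard technical
    # failure they flag is kept.
    if rule_label == "Failed":
        return "Failed", "auto"
    # Conflicting or ambiguous cases are deferred to a human expert.
    return "Cannot Tell", "needs_human_review"
```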
Anukriti Kumar
University of Washington, Seattle, WA, USA
Tanushree Padath
University of Washington, Seattle, WA, USA
Lucy Lu Wang
University of Washington; Allen Institute for AI (Ai2)
health informatics · natural language processing · science communication · open access