🤖 AI Summary
Web accessibility issues are often identified late in development, leading to high remediation costs. Method: This work proposes a "shift-left" approach, designing and implementing a VS Code extension that integrates large language models (LLMs) to perform real-time static analysis of HTML, CSS, and ARIA code as it is written, detecting accessibility violations and generating actionable, production-ready fixes. Contribution/Results: The work is the first to embed LLMs natively into the IDE workflow for accessibility assurance, combining domain-specific prompt engineering with rule-based parsing to improve semantic understanding. Evaluation shows that the plugin generates accurate, executable repair code; however, detection accuracy on structurally complex pages remains an area for improvement. By moving accessibility validation from the testing phase into the coding phase, this approach significantly reduces the risk of accessibility defects persisting into production.
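The kind of rule-based parsing the summary describes can be sketched as follows. This is an illustrative example only, not the plugin's actual implementation or API: it uses Python's standard-library `html.parser` to flag `<img>` elements that lack `alt` text, one of the most common accessibility violations (WCAG 1.1.1). In the plugin, checks like this would be paired with LLM calls that propose a concrete fix.

```python
# Hedged sketch: a minimal rule-based accessibility check of the kind a
# "shift-left" plugin could run in the editor. Class and function names
# here are illustrative, not taken from the paper's extension.
from html.parser import HTMLParser


class MissingAltChecker(HTMLParser):
    """Records the position of every <img> tag without an alt attribute."""

    def __init__(self):
        super().__init__()
        self.violations = []  # list of (line, column) tuples

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs; convert for lookup.
        if tag == "img" and "alt" not in dict(attrs):
            self.violations.append(self.getpos())


def find_missing_alt(html: str):
    """Return (line, column) positions of <img> tags missing alt text."""
    checker = MissingAltChecker()
    checker.feed(html)
    checker.close()
    return checker.violations


# Example: the second <img> has no alt text and would be flagged,
# at which point an LLM could be prompted to generate a suitable alt value.
snippet = '<img src="a.png" alt="logo"><img src="b.png">'
print(find_missing_alt(snippet))
```

A real extension would surface each reported position as an in-editor diagnostic rather than printing it, but the detection logic is the same shape: deterministic rules localize the violation, and the LLM supplies the repair.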
📄 Abstract
Achieving web accessibility is essential to building inclusive digital experiences. However, accessibility issues are often identified only after a website has been fully developed, making them costly to address. This paper introduces a Visual Studio Code plugin that calls a Large Language Model (LLM) to help developers identify and resolve accessibility issues within the IDE, reducing the number of accessibility defects that reach the production environment. Our evaluation shows promising results: when errors are correctly detected, the plugin effectively generates working fixes for accessibility issues. However, detecting errors with a generic prompt, designed for broad applicability across varied code structures, remains challenging and limited in accuracy.