Evaluating Repository-level Software Documentation via Question Answering and Feature-Driven Development

📅 2026-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing approaches to software documentation evaluation lack comprehensive metrics for repository-level documentation and often rely on subjective or unreliable assessment mechanisms. This work proposes SWD-Bench, a novel benchmark that introduces an indirect evaluation paradigm inspired by documentation-driven development. It simulates realistic development scenarios through three function-driven question-answering tasks: Functionality Detection, Functionality Localization, and Functionality Completion. The benchmark encompasses 4,170 repository-level samples and integrates large language models, pull request mining, and context-augmentation techniques to construct a robust evaluation framework. Experimental results reveal critical limitations in current documentation generation methods, demonstrate the complementary relationship between code and documentation, and show that high-quality documentation improves the issue-solving rate of SWE-Agent by 20.00%, underscoring its practical value in real-world software development.
📝 Abstract
Software documentation is crucial for repository comprehension. While Large Language Models (LLMs) advance documentation generation from code snippets to entire repositories, existing benchmarks have two key limitations: (1) they lack a holistic, repository-level assessment, and (2) they rely on unreliable evaluation strategies, such as LLM-as-a-judge, which suffers from vague criteria and limited repository-level knowledge. To address these issues, we introduce SWD-Bench, a novel benchmark for evaluating repository-level software documentation. Inspired by documentation-driven development, our strategy evaluates documentation quality by assessing an LLM's ability to understand and implement functionalities using the documentation, rather than by directly scoring it. This is measured through function-driven Question Answering (QA) tasks. SWD-Bench comprises three interconnected QA tasks: (1) Functionality Detection, to determine if a functionality is described; (2) Functionality Localization, to evaluate the accuracy of locating related files; and (3) Functionality Completion, to measure the comprehensiveness of implementation details. We construct the benchmark, containing 4,170 entries, by mining high-quality Pull Requests and enriching them with repository-level context. Experiments reveal limitations in current documentation generation methods and show that source code provides complementary value. Notably, documentation from the best-performing method improves the issue-solving rate of SWE-Agent by 20.00%, which demonstrates the practical value of high-quality documentation in supporting documentation-driven development.
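To make the three-task structure concrete, here is a minimal sketch of how such entries and metrics might be organized. The entry schema, field names, and scoring functions below are hypothetical illustrations (binary accuracy for detection, set-based F1 for localization) assumed for this sketch; SWD-Bench's actual data format and metrics are defined in the paper.

```python
from dataclasses import dataclass
from typing import Set

# Hypothetical entry schema: the real SWD-Bench format is not
# specified in this summary, so all field names here are assumptions.
@dataclass
class SWDBenchSample:
    repo: str           # repository the sample was mined from
    task: str           # "detection" | "localization" | "completion"
    question: str       # functionality description derived from a Pull Request
    documentation: str  # repository-level documentation under evaluation

def score_detection(predicted: bool, gold: bool) -> float:
    """Functionality Detection reduces to binary accuracy per sample
    (assumed metric for this sketch)."""
    return 1.0 if predicted == gold else 0.0

def score_localization(predicted_files: Set[str], gold_files: Set[str]) -> float:
    """Functionality Localization scored as set F1 over related files
    (assumed metric for this sketch)."""
    if not predicted_files or not gold_files:
        return 0.0
    tp = len(predicted_files & gold_files)
    if tp == 0:
        return 0.0
    precision = tp / len(predicted_files)
    recall = tp / len(gold_files)
    return 2 * precision * recall / (precision + recall)
```

Functionality Completion, the third task, would instead compare a model-generated implementation against the code merged in the mined Pull Request, which requires execution- or similarity-based scoring rather than a simple set metric.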
Problem

Research questions and friction points this paper is trying to address.

repository-level documentation
software documentation evaluation
LLM-as-a-judge
documentation benchmark
code repository comprehension
Innovation

Methods, ideas, or system contributions that make the work stand out.

repository-level documentation
feature-driven QA
SWD-Bench
documentation-driven development
LLM evaluation
Xinchen Wang
Harbin Institute of Technology
AI4SE, Code Intelligence
Ruida Hu
Harbin Institute of Technology, Shenzhen
software engineering, LLM agent
Cuiyun Gao
Harbin Institute of Technology, Shenzhen, China
Pengfei Gao
Independent Researcher, China
Chao Peng
Independent Researcher, China