🤖 AI Summary
This work addresses a limitation of current large language models (LLMs) in software engineering question answering: they predominantly operate within single-file contexts and struggle with cross-file, system-level program understanding. To bridge this gap, we introduce and release StackRepoQA, the first repository-level QA benchmark, comprising 1,318 developer questions from 134 open-source Java projects. Through a systematic evaluation of prominent LLMs under various settings, including direct prompting, agent frameworks, and retrieval-augmented generation (RAG) augmented with file retrieval and structural dependency graphs, we find that model accuracy remains generally low. Performance gains largely stem from memorization and reproduction of Stack Overflow answers rather than genuine reasoning capabilities. This study highlights the shortcomings of existing models in repository-scale comprehension and motivates further research into disentangling memorization from true reasoning in code-related tasks.
📝 Abstract
Large Language Models (LLMs) have shown impressive capabilities across software engineering tasks, including question answering (QA). However, most studies and benchmarks focus on isolated functions or single-file snippets, overlooking the challenges of real-world program comprehension, which often spans multiple files and system-level dependencies. In this work, we introduce StackRepoQA, the first multi-project, repository-level question answering dataset, constructed from 1,318 real developer questions and accepted answers across 134 open-source Java projects. Using this dataset, we systematically evaluate two widely used LLMs (Claude 3.5 Sonnet and GPT-4o) under both direct prompting and agentic configurations. We compare baseline performance with retrieval-augmented generation methods that leverage file-level retrieval and graph-based representations of structural dependencies. Our results show that LLMs achieve moderate accuracy at baseline, with performance improving when structural signals are incorporated. Nonetheless, overall accuracy remains limited for repository-scale comprehension. The analysis reveals that high scores often result from verbatim reproduction of Stack Overflow answers rather than genuine reasoning. To our knowledge, this is the first empirical study to provide such evidence in repository-level QA. We release StackRepoQA to encourage further research into benchmarks, evaluation protocols, and augmentation strategies that disentangle memorization from reasoning, advancing LLMs as reliable tools for repository-scale program comprehension.