🤖 AI Summary
Existing Dev Knowledge QA benchmarks suffer from narrow coverage (they overemphasize code understanding) and from data that does not reflect real user queries. This paper introduces SimpleDevQA, the first lightweight, multilingual (English, Chinese, Russian) benchmark tailored to authentic development scenarios, with 2,740 QA pairs focusing exclusively on non-code, concise, verifiable knowledge questions. Methodologically, we first uncover via analysis of WildChat dialogues that Dev Knowledge QA constitutes 39.6% of real-world developer interactions; we then propose a three-stage dialogue distillation pipeline (cleaning → simplification → verification) to construct high-quality, realistic QA instances, and integrate Retrieval-Augmented Generation (RAG) for knowledge injection. Experiments show: (1) code-specialized LLMs significantly outperform same-scale general-purpose LLMs; (2) RAG boosts average accuracy by 11.3%; (3) models exhibit systematic overconfidence, yet confidence scores correlate positively with accuracy; (4) stronger code generation capability strongly predicts stronger Dev Knowledge QA performance.
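The summary names a three-stage dialogue distillation pipeline (cleaning → simplification → verification) but does not specify it at the code level. Below is a minimal sketch of how such a pipeline might be wired up; every function name, prompt, and the `llm` hook are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical LLM hook: takes a prompt, returns the model's text reply.
LLM = Callable[[str], str]

@dataclass
class QAPair:
    question: str
    answer: str    # short, uniquely verifiable answer
    language: str  # e.g. "en", "zh", "ru"

def clean_dialogue(dialogue: str, llm: LLM) -> Optional[str]:
    """Stage 1 (cleaning): keep only dialogues that contain a
    development knowledge-seeking question."""
    verdict = llm(
        "Does this dialogue contain a development knowledge-seeking "
        f"question (yes/no)?\n\n{dialogue}"
    )
    return dialogue if verdict.strip().lower().startswith("yes") else None

def simplify_to_qa(dialogue: str, language: str, llm: LLM) -> QAPair:
    """Stage 2 (simplification): distill the dialogue into a single
    self-contained question with a short answer."""
    question = llm(f"Rewrite as one self-contained question:\n\n{dialogue}")
    answer = llm(f"Give the single short factual answer:\n\n{question}")
    return QAPair(question.strip(), answer.strip(), language)

def verify_qa(pair: QAPair, llm: LLM) -> bool:
    """Stage 3 (verification): keep only pairs whose answer is unique,
    short, and verifiable."""
    verdict = llm(
        "Is this answer unique, short, and verifiable (yes/no)?\n\n"
        f"Q: {pair.question}\nA: {pair.answer}"
    )
    return verdict.strip().lower().startswith("yes")

def distill(dialogues: list[tuple[str, str]], llm: LLM) -> list[QAPair]:
    """Run cleaning -> simplification -> verification over raw
    (dialogue_text, language) inputs."""
    pairs = []
    for text, lang in dialogues:
        cleaned = clean_dialogue(text, llm)
        if cleaned is None:
            continue
        pair = simplify_to_qa(cleaned, lang, llm)
        if verify_qa(pair, llm):
            pairs.append(pair)
    return pairs
```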
📝 Abstract
The Development Knowledge Question Answering (Dev Knowledge QA) task aims to provide natural language answers to knowledge-seeking questions during software development. To investigate its importance and the extent to which it has been explored, we analyze real user-LLM dialogues from WildChat and find that: (1) the Dev Knowledge QA task accounts for 39.6% of interactions (the highest among all tasks), revealing broad knowledge needs beyond code generation (32.3%); (2) only 27.5% of real Dev Knowledge QA dialogues focus on code understanding, while the rest seek broader development knowledge; (3) only 17.1% of real-world Dev Knowledge QA dialogues are directly usable for constructing a benchmark. Existing benchmarks have two primary limitations for evaluating the Dev Knowledge QA capability of LLMs. First, they cover a limited scope of development knowledge, mainly focusing on code understanding while neglecting the broader knowledge needed during development. Second, some are not built from real user queries. To bridge this gap, we design a three-phase pipeline that transforms real-world dialogues into simple development knowledge-seeking QA pairs. Through this pipeline, we introduce SimpleDevQA, a multilingual benchmark derived from real user dialogues. It contains 2,740 QA pairs in three languages (English, Chinese, and Russian) and focuses on questions with unique, short, and verifiable answers, enabling accurate and simple evaluation. Experiments show that: (1) code LLMs generally outperform general LLMs of similar scale; (2) knowledge injection via a Retrieval-Augmented Generation (RAG) strategy boosts LLM accuracy by 11.3% on average; (3) LLMs show systematic overconfidence in Dev Knowledge QA, yet their answering accuracy correlates positively with their stated confidence; and (4) LLMs with stronger code generation performance generally also perform better on Dev Knowledge QA.
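The abstract reports that RAG-based knowledge injection raises accuracy by 11.3% on average but does not describe the setup. The sketch below shows one plausible retrieval-augmented answering loop for short, verifiable Dev Knowledge QA answers; the `retrieve` and `llm` hooks, the prompt, and `top_k` are assumptions for illustration, not the paper's configuration.

```python
from typing import Callable

# Hypothetical hooks: a retriever over a development-knowledge corpus
# (returning top-k passages for a query) and an LLM answering a prompt.
Retriever = Callable[[str, int], list[str]]
LLM = Callable[[str], str]

def answer_with_rag(question: str, retrieve: Retriever, llm: LLM,
                    top_k: int = 3) -> str:
    """Retrieval-augmented answering: prepend the top-k retrieved
    passages to the question before querying the model."""
    passages = retrieve(question, top_k)
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Answer with a single short, verifiable answer, using the "
        "context if helpful.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm(prompt).strip()
```

Because SimpleDevQA answers are unique and short, accuracy under this setup can be scored by simple normalized string matching against the reference answer, which is what makes the benchmark's evaluation "accurate and simple."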