KOCO-BENCH: Can Large Language Models Leverage Domain Knowledge in Software Development?

📅 2026-01-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of large language models (LLMs) in domain-specific software development, where they struggle to effectively leverage external domain knowledge and lack dedicated evaluation benchmarks. To bridge this gap, we introduce the first benchmark that explicitly requires the use of structured external knowledge corpora, spanning six emerging domains, eleven frameworks, and twenty-five real-world projects. The benchmark features multi-granularity tasks—from function- to project-level code generation and knowledge comprehension—paired with rigorous test suites and multiple-choice question answering, enabling comprehensive evaluation of domain adaptation techniques such as supervised fine-tuning (SFT), retrieval-augmented generation (RAG), and kNN-based language models. Experimental results reveal significant performance deficiencies across current state-of-the-art models, with the best performer (Claude Code) achieving only 34.2% accuracy, highlighting substantial gaps in existing approaches for acquiring and applying domain-specific knowledge.

📝 Abstract
Large language models (LLMs) excel at general programming but struggle with domain-specific software development, necessitating domain specialization methods for LLMs to learn and utilize domain knowledge and data. However, existing domain-specific code benchmarks cannot evaluate the effectiveness of domain specialization methods: they focus on assessing what knowledge LLMs already possess rather than how LLMs acquire and apply new knowledge, and they lack explicit knowledge corpora for developing such methods. To this end, we present KOCO-BENCH, a novel benchmark designed for evaluating domain specialization methods in real-world software development. KOCO-BENCH contains 6 emerging domains with 11 software frameworks and 25 projects, featuring curated knowledge corpora alongside multi-granularity evaluation tasks, including domain code generation (from function-level to project-level, with rigorous test suites) and domain knowledge understanding (via multiple-choice Q&A). Unlike previous benchmarks that only provide test sets for direct evaluation, KOCO-BENCH requires acquiring and applying diverse domain knowledge (APIs, rules, constraints, etc.) from the knowledge corpora to solve the evaluation tasks. Our evaluations reveal that KOCO-BENCH poses significant challenges to state-of-the-art LLMs. Even with domain specialization methods (e.g., SFT, RAG, kNN-LM) applied, improvements remain marginal. The best-performing coding agent, Claude Code, achieves only 34.2% accuracy, highlighting the urgent need for more effective domain specialization methods. We release KOCO-BENCH, the evaluation code, and baselines to advance further research at https://github.com/jiangxxxue/KOCO-bench.
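The abstract names retrieval-augmented generation (RAG) as one domain specialization baseline: relevant snippets (APIs, rules, constraints) are retrieved from the knowledge corpus and prepended to the LLM's prompt. A minimal sketch of that retrieval step is below; the corpus entries, function names, and word-overlap scoring are illustrative assumptions, not taken from the KOCO-BENCH implementation.

```python
# Illustrative RAG retrieval step: rank knowledge-corpus snippets by relevance
# to a task, then prepend the top hits to the code-generation prompt.
# All names and the scoring function are hypothetical, not from KOCO-bench.

def score(query_tokens, doc_tokens):
    """Word-overlap relevance score (a stand-in for an embedding retriever)."""
    return len(set(query_tokens) & set(doc_tokens))

def retrieve(query, corpus, k=2):
    """Return the top-k corpus entries by overlap with the query."""
    q = query.lower().split()
    ranked = sorted(corpus, key=lambda d: score(q, d.lower().split()), reverse=True)
    return ranked[:k]

def build_prompt(task, corpus, k=2):
    """Prepend retrieved domain knowledge to the code-generation task."""
    context = "\n".join(f"- {snippet}" for snippet in retrieve(task, corpus, k))
    return f"Domain knowledge:\n{context}\n\nTask:\n{task}"

corpus = [
    "API: Device.connect(timeout) raises DeviceBusyError when the bus is locked.",
    "Rule: all sensor reads must go through SensorHub, never the raw driver.",
    "Constraint: firmware images must be signed before flash_write is called.",
]
prompt = build_prompt("Write a function that reads a sensor value", corpus, k=1)
print(prompt)
```

Only the "Rule" entry shares tokens with the task here, so it is the single snippet injected into the prompt; a real pipeline would swap the overlap score for dense embeddings and retrieve from the benchmark's curated corpora.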
Problem

Research questions and friction points this paper is trying to address.

domain-specific software development
large language models
knowledge corpora
domain specialization
code benchmark
Innovation

Methods, ideas, or system contributions that make the work stand out.

domain specialization
knowledge corpora
code generation
LLM evaluation benchmark
software development
Xue Jiang
Peking University
Program Generation, LLM4SE
Jiaru Qian
School of Computer Science, Peking University
Xianjie Shi
School of Computer Science, Peking University
Chenjie Li
School of Computer Science, Peking University
Hao Zhu
Peking University
AI4SE
Ziyu Wang
School of Computer Science, Peking University
Jielun Zhang
School of Computer Science, Peking University
Zheyu Zhao
School of Computer Science, Peking University
Kechi Zhang
Peking University
AI4SE
Jia Li
School of Computer Science, Wuhan University
Wenpin Jiao
School of Computer Science, Peking University
Zhi Jin
Sun Yat-Sen University, Associate Professor
Ge Li
Full Professor of Computer Science, Peking University
Program Analysis, Program Generation, Deep Learning
Yihong Dong
Peking University
Code Generation, Large Language Models