CS-Bench: A Comprehensive Benchmark for Large Language Models towards Computer Science Mastery

πŸ“… 2024-06-12
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 8
✨ Influential: 0
πŸ€– AI Summary
Existing LLM evaluations in computer science (CS) overemphasize sub-skills like mathematics and programming, lacking comprehensive, multilingual, and systematic assessment across the full CS domain. Method: We introduce CS-Benchβ€”the first large-scale, multilingual (English, Chinese, French, German) benchmark covering 26 CS subfields with ~10K high-quality, expert-annotated samples. We design a novel multidimensional evaluation framework enabling unified assessment across subfields, task formats (e.g., multiple-choice, open-ended), and languages, grounded in dual-dimension question design (knowledge recall and reasoning) and extensive zero-/few-shot evaluation. Contribution/Results: We systematically evaluate 30+ mainstream LLMs, revealing strong correlations between CS competence and mathematical/programming proficiency, nonlinear scaling with model size, and two critical bottlenecks: domain-specific knowledge gaps and weak reasoning capabilities. We further demonstrate significant cross-task transfer advantages of math/code-specialized models on general CS tasks.

πŸ“ Abstract
Large language models (LLMs) have demonstrated significant potential in advancing various fields of research and society. However, the current community of LLMs overly focuses on benchmarks for analyzing specific foundational skills (e.g. mathematics and code generation), neglecting an all-round evaluation of the computer science field. To bridge this gap, we introduce CS-Bench, the first multilingual (English, Chinese, French, German) benchmark dedicated to evaluating the performance of LLMs in computer science. CS-Bench comprises approximately 10K meticulously curated test samples, covering 26 subfields across 4 key areas of computer science, encompassing various task forms and divisions of knowledge and reasoning. Utilizing CS-Bench, we conduct a comprehensive evaluation of over 30 mainstream LLMs, revealing the relationship between CS performance and model scales. We also quantitatively analyze the reasons for failures in existing LLMs and highlight directions for improvements, including knowledge supplementation and CS-specific reasoning. Further cross-capability experiments show a high correlation between LLMs' capabilities in computer science and their abilities in mathematics and coding. Moreover, expert LLMs specialized in mathematics and coding also demonstrate strong performances in several CS subfields. Looking ahead, we envision CS-Bench serving as a cornerstone for LLM applications in the CS field and paving new avenues in assessing LLMs' diverse reasoning capabilities. The CS-Bench data and evaluation code are available at https://github.com/csbench/csbench.
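The abstract describes a unified assessment across 26 subfields and multiple task formats. As an illustration only (not the authors' released evaluation code; the field names and multiple-choice format are assumptions), per-subfield accuracy for such a benchmark might be aggregated like this:

```python
from collections import defaultdict

def per_subfield_accuracy(samples, predictions):
    """Aggregate multiple-choice accuracy by CS subfield.

    `samples` is a list of dicts with hypothetical keys
    'subfield' and 'answer'; `predictions` is a parallel
    list of predicted option letters.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for sample, pred in zip(samples, predictions):
        total[sample["subfield"]] += 1
        if pred == sample["answer"]:
            correct[sample["subfield"]] += 1
    # Accuracy per subfield, in [0, 1]
    return {sf: correct[sf] / total[sf] for sf in total}

# Toy example with made-up subfield names and answers
samples = [
    {"subfield": "Operating Systems", "answer": "B"},
    {"subfield": "Operating Systems", "answer": "C"},
    {"subfield": "Computer Networks", "answer": "A"},
]
print(per_subfield_accuracy(samples, ["B", "A", "A"]))
# β†’ {'Operating Systems': 0.5, 'Computer Networks': 1.0}
```

Reporting scores broken down this way is what allows the paper's kind of analysis, e.g. locating domain-specific knowledge gaps in particular subfields rather than in one aggregate number.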
Problem

Research questions and friction points this paper is trying to address.

Evaluate LLMs' performance in computer science comprehensively.
Identify gaps in LLMs' knowledge and reasoning in CS.
Assess correlation between CS, math, and coding capabilities.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual benchmark for LLMs in computer science
10K test samples across 26 CS subfields
Evaluation of 30+ LLMs, linking CS performance to model scale
Xiaoshuai Song
Beijing University of Posts and Telecommunications
Muxi Diao
Beijing University of Posts and Telecommunications, Beijing, China
Guanting Dong
Renmin University of China
LLM Reasoning & Alignment · Deep Search Agent · Agentic RL
Zhengyang Wang
Beijing University of Posts and Telecommunications, Beijing, China
Yujia Fu
Beijing University of Posts and Telecommunications
LLMs · NLP
Runqi Qiao
Beijing University of Posts and Telecommunications, Beijing, China
Zhexu Wang
Beijing University of Posts and Telecommunications, Beijing, China
Dayuan Fu
MS Student, Beijing University of Posts and Telecommunications
LLM Agents · post-training · Natural Language Processing
Huangxuan Wu
Beijing University of Posts and Telecommunications, Beijing, China
Bin Liang
Beijing University of Posts and Telecommunications, Beijing, China
Weihao Zeng
Hong Kong University of Science and Technology
LLM Reasoning · Alignment
Yejie Wang
Beijing University of Posts and Telecommunications
Natural Language Processing
Zhuoma GongQue
Beijing University of Posts and Telecommunications, Beijing, China
Jianing Yu
Beijing University of Posts and Telecommunications, Beijing, China
Qiuna Tan
Beijing University of Posts and Telecommunications, Beijing, China
Weiran Xu
Associate professor of natural language processing, Beijing University of Posts and Telecommunications
natural language processing