🤖 AI Summary
This study addresses the lack of systematic, curriculum- and certification-aligned evaluation of large language models (LLMs) in computer science education. The authors construct a multilingual benchmark of 1,068 questions drawn from six professional certification exams, uniquely integrating Bloom's cognitive taxonomy, multilingual contexts, and accreditation criteria. They conduct fine-grained, multidimensional assessments of four prominent models (GPT-5, DeepSeek-R1, Qwen-Plus, and Llama-3.3-70B-Instruct) and introduce novel analyses of input-mask robustness and confidence–accuracy alignment. Results reveal a significant performance drop on higher-order cognitive tasks. GPT-5 excels on English-language tasks, Qwen-Plus performs best in Chinese contexts, and DeepSeek-R1 exhibits the most balanced cross-lingual capabilities, offering empirical evidence and practical guidance for deploying LLMs in education.
📝 Abstract
Large language models (LLMs) are increasingly applied in computer science education for tasks such as tutoring, content generation, and code assessment. However, systematic evaluations aligned with formal curricula and certification standards remain limited. This study benchmarked four recent models: GPT-5, DeepSeek-R1, Qwen-Plus, and Llama-3.3-70B-Instruct, using a dataset of 1,068 questions derived from six certification exams covering networking, office applications, and Java programming.
We evaluated performance along five dimensions: language (Chinese vs. English), cognitive level under Bloom's Taxonomy, domain knowledge, confidence–accuracy alignment, and robustness to input masking. Results showed that GPT-5 performed best on English-language certifications, while Qwen-Plus performed better in Chinese contexts. DeepSeek-R1 achieved the most balanced cross-lingual performance, whereas Llama-3.3 showed clear limitations in higher-order reasoning and robustness. All models' accuracy declined on tasks at higher cognitive levels.
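The abstract does not specify how confidence–accuracy alignment is measured; a common choice for this kind of analysis is expected calibration error (ECE), which bins a model's self-reported confidences and compares each bin's average confidence to its actual accuracy. A minimal sketch, assuming equal-width confidence bins (the function name and binning scheme are illustrative, not taken from the paper):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE over (confidence, correctness) pairs, with equal-width bins on [0, 1].

    confidences: list of floats in [0, 1], the model's stated confidence per question.
    correct:     list of bools, whether the model answered each question correctly.
    """
    assert len(confidences) == len(correct)
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        # Clamp confidence 1.0 into the last bin.
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))

    n = len(confidences)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(1 for _, ok in bucket if ok) / len(bucket)
        # Weight each bin's |accuracy - confidence| gap by its share of samples.
        ece += (len(bucket) / n) * abs(accuracy - avg_conf)
    return ece
```

A lower ECE means the model's stated confidence tracks its actual accuracy more closely; a model that claims 90% confidence but answers only 60% of those questions correctly would show a large gap.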
These findings provide empirical support for the integration of LLMs into computer science education and offer practical implications for curriculum design and assessment.