🤖 AI Summary
This study presents the first systematic evaluation of ChatGPT (GPT-4–class models) in Spanish-language programming examinations, assessing its dual capability as both a problem solver and an automated grader. Using 32 authentic undergraduate computer science exam questions, we employ zero-shot and few-shot prompting strategies, validated against human-annotated ground truth and evaluated via multi-dimensional consistency analysis (Cohen’s κ). Results indicate that ChatGPT achieves 68% accuracy on simple programming tasks but exhibits significant limitations on complex logic problems and peer-code assessment (κ < 0.3), rendering it unsuitable as a standalone replacement for human evaluation. Key contributions include: (1) the first dual-task evaluation framework tailored to Spanish-language programming education; and (2) the open release of the first multidimensional Spanish programming exam corpus—comprising 32 real exam questions and 128 structured prompts—along with corresponding prompt templates, establishing critical infrastructure for non-English AI-assisted programming education research.
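For readers unfamiliar with the agreement metric cited above, Cohen's κ corrects raw rater agreement for the agreement expected by chance: κ = (p_o − p_e) / (1 − p_e). The sketch below computes it from scratch; the grade labels are hypothetical and only illustrate why κ < 0.3 signals weak model–human agreement.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items where both raters gave the same label.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical pass/fail grades (e.g. human grader vs. ChatGPT on ten answers):
human = ["pass", "pass", "fail", "pass", "fail", "pass", "fail", "pass", "pass", "fail"]
model = ["pass", "fail", "fail", "pass", "pass", "pass", "fail", "fail", "pass", "pass"]
print(round(cohen_kappa(human, model), 3))  # 60% raw agreement yields κ ≈ 0.167
```

Note that 60% raw agreement collapses to κ ≈ 0.17 once chance is discounted, which is why the study reports κ rather than plain accuracy for the grading task.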
📝 Abstract
Evaluating the capabilities of Large Language Models (LLMs) to assist teachers and students in educational tasks is receiving increasing attention. In this paper, we assess ChatGPT's capacity to solve and grade real programming exams, written in Spanish, from an accredited BSc degree in Computer Science. Our findings suggest that this AI model is only effective at solving simple coding tasks. Its proficiency in tackling complex problems or evaluating solutions authored by others is far from effective. As part of this research, we also release a new corpus of programming tasks, together with the corresponding prompts for solving the problems or grading the solutions. This resource can be further exploited by other research teams.