Navigating Pitfalls: Evaluating LLMs in Machine Learning Programming Education

📅 2025-05-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study systematically evaluates the ability of large language models (LLMs) to identify common machine learning (ML) programming pitfalls, such as data leakage and improper model selection, and to generate actionable, pedagogically sound feedback for ML education. Method: a multi-model comparative study of one proprietary and three open-weight LLMs on a curated portfolio of ML code samples, evaluated against annotated ground truth for error detection and a rubric-based assessment of feedback quality. Contribution/Results: the study reveals a critical limitation: LLMs are markedly less accurate at detecting early-stage pipeline errors (e.g., train-test contamination leading to information leakage) than later-stage ones, and show only limited success with model-selection pitfalls. At the same time, the performance gap between open and closed models is far smaller than the difference in model sizes would suggest, with smaller models achieving near-parity with larger ones. These results support the feasibility of lightweight, privacy-preserving, potentially customised LLMs for ML education, delineate current capability boundaries of LLMs in ML pedagogy, and offer evidence-based guidance for developing low-cost, high-fidelity educational tools.
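To make the early-pipeline failure mode concrete, here is a minimal hypothetical sketch in Python (assumed scikit-learn usage, not taken from the paper's code portfolio) of the train-test contamination pitfall the models most often miss, followed by a leak-free alternative.

# Pitfall: the scaler is fitted on the full dataset, so test-set statistics
# leak into the features used for training (train-test contamination).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_leaky = StandardScaler().fit_transform(X)  # sees the whole dataset before splitting
X_tr, X_te, y_tr, y_te = train_test_split(X_leaky, y, random_state=0)

# Leak-free alternative: split first, then fit the scaler on the training data only.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)
clf = LogisticRegression(max_iter=1000).fit(scaler.transform(X_tr), y_tr)
print(clf.score(scaler.transform(X_te), y_te))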

📝 Abstract
The rapid advancement of Large Language Models (LLMs) has opened new avenues in education. This study examines the use of LLMs to support learning in machine learning education; in particular, it focuses on their ability to identify common errors of practice (pitfalls) in machine learning code and to provide feedback that can guide learning. Using a portfolio of code samples, we consider four different LLMs: one closed model and three open models. Whilst the most basic pitfalls are readily identified by all models, many common pitfalls are not. The models particularly struggle to identify pitfalls in the early stages of the ML pipeline, especially those which can lead to information leaks, a major source of failure within applied ML projects. They also exhibit limited success at identifying pitfalls around model selection, a concept that students often struggle with when first transitioning from theory to practice. This calls into question the use of current LLMs to support machine learning education, and also raises important questions about their use by novice practitioners. Nevertheless, when LLMs successfully identify pitfalls in code, they do provide feedback that includes advice on how to proceed, emphasising their potential role in guiding learners. We also compare the capabilities of closed and open LLMs, and find that the gap is relatively small given the large difference in model sizes. This presents an opportunity to deploy, and potentially customise, smaller and more efficient LLMs within education, avoiding the cost and data-sharing risks associated with commercial models.
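As an illustration of the model-selection pitfall the abstract refers to, the hypothetical Python sketch below (assumed scikit-learn usage, not drawn from the paper) contrasts choosing a hyperparameter by repeatedly scoring on the held-out test set, which biases the final estimate, with cross-validated selection on the training data.

from sklearn.datasets import load_wine
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Pitfall: the test set is used to pick C, so the reported test score is
# no longer an unbiased estimate of generalisation performance.
best_c, best_score = None, -1.0
for c in (0.1, 1.0, 10.0, 100.0):
    score = SVC(C=c).fit(X_tr, y_tr).score(X_te, y_te)
    if score > best_score:
        best_c, best_score = c, score

# Sounder practice: select C by cross-validation on the training data and
# touch the test set only once, for the final report.
search = GridSearchCV(SVC(), {"C": [0.1, 1.0, 10.0, 100.0]}, cv=5).fit(X_tr, y_tr)
print(search.best_params_, search.score(X_te, y_te))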
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' ability to identify common machine learning code pitfalls
Assessing LLMs' feedback quality for guiding ML programming education
Comparing closed vs open LLM performance in error detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates LLMs for error detection in ML code
Compares the performance of closed and open LLMs
Explores LLMs' potential for guiding ML learners
Smitha S Kumar
School of Mathematical and Computer Sciences, Heriot-Watt University, United Arab Emirates.
M. Lones
School of Mathematical and Computer Sciences, Heriot-Watt University, United Kingdom.
Manuel Maarek
School of Mathematical and Computer Sciences, Heriot-Watt University, United Kingdom.
Hind Zantout
Heriot-Watt University Dubai
Education · Machine Learning in Healthcare · Cybersecurity · Semantic Technologies · Gender Studies