VArsity: Can Large Language Models Keep Power Engineering Students in Phase?

📅 2025-07-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates the pedagogical impact of large language models (LLMs) in power systems analysis and control education, focusing on students' evolving ability to detect and correct LLM-generated errors. Using canonical tasks, including power factor correction, the authors comparatively evaluate GPT-4 and ChatGPT o1 across course assignments and structured error-analysis exercises. Results show that while o1 produces more fluent outputs, its errors are markedly more subtle and harder for students to identify than those of GPT-4, substantially reducing detection rates. This reveals a critical challenge: iterative LLM improvements may inadvertently undermine the development of engineering critical thinking by masking inaccuracies. The study incorporates *error stealthiness*, a previously unaddressed LLM attribute, into educational assessment frameworks, and advocates shifting AI-augmented pedagogy from passive reliance on model outputs toward deliberate training in *human-AI collaborative verification*. These findings provide empirical evidence and methodological guidance for rethinking engineering education in the AI era.


📝 Abstract
This paper provides an educational case study regarding our experience in deploying ChatGPT Large Language Models (LLMs) in the Spring 2025 and Fall 2023 offerings of ECE 4320: Power System Analysis and Control at Georgia Tech. As part of course assessments, students were tasked with identifying, explaining, and correcting errors in the ChatGPT outputs corresponding to power factor correction problems. While most students successfully identified the errors in the outputs from the GPT-4 version of ChatGPT used in Fall 2023, students found the errors from the ChatGPT o1 version much more difficult to identify in Spring 2025. As shown in this case study, the role of LLMs in pedagogy, assessment, and learning in power engineering classrooms is an important topic deserving further investigation.
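The paper does not reproduce the specific assignment problems, but a canonical power factor correction calculation of the kind students would verify might look like the following sketch. The function names, the 100 kW / 0.70 lagging example load, and the 480 V / 60 Hz figures are illustrative assumptions, not values from the paper.

```python
import math

def correction_kvar(p_kw, pf_initial, pf_target):
    """Reactive power (kVAR) a shunt capacitor bank must supply to raise
    a load's power factor from pf_initial to pf_target (both lagging)."""
    theta1 = math.acos(pf_initial)  # load angle before correction
    theta2 = math.acos(pf_target)   # load angle after correction
    # Qc = P * (tan(theta1) - tan(theta2))
    return p_kw * (math.tan(theta1) - math.tan(theta2))

def capacitance_farads(q_var, v_rms, freq_hz=60.0):
    """Capacitance that supplies q_var of reactive power at the given
    RMS line voltage and system frequency: C = Qc / (2*pi*f*V^2)."""
    return q_var / (2.0 * math.pi * freq_hz * v_rms ** 2)

# Example: correct a 100 kW load from 0.70 to 0.95 power factor
qc_kvar = correction_kvar(100.0, 0.70, 0.95)   # about 69.1 kVAR
c = capacitance_farads(qc_kvar * 1e3, 480.0)   # roughly 0.8 mF at 480 V, 60 Hz
```

An LLM error in this setting could be as subtle as using degrees instead of radians or dropping the squared voltage term, which is exactly the kind of slip the course assessments asked students to identify.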
Problem

Research questions and friction points this paper is trying to address.

Assessing LLMs' accuracy in power factor correction problems
Evaluating student ability to identify LLM-generated errors
Exploring LLMs' role in power engineering education
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using ChatGPT for power engineering education
Students correct errors in LLM outputs
Comparing GPT-4 and ChatGPT o1 performance
Samuel Talkington
School of Electrical and Computer Engineering, Georgia Institute of Technology
Daniel K. Molzahn
Associate Professor at the Georgia Institute of Technology; Computational Engineer at Argonne National Laboratory
Electric Power Systems