Comparing Few-Shot Prompting of GPT-4 LLMs with BERT Classifiers for Open-Response Assessment in Tutor Equity Training

📅 2025-01-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses three practical challenges in equity training for tutors: the difficulty of evaluating open-ended responses in an ill-defined domain, the high cost of human annotation, and the choice between fine-tuning and prompting for automated assessment. It systematically compares a fine-tuned BERT classifier against few-shot prompting of GPT-4 variants (GPT-4o and GPT-4-Turbo) on four prediction tasks that involve generating and explaining open-ended responses. Using 243 manually annotated responses from advocacy-focused tutor training lessons, completed by higher-education students learning to tutor middle schoolers, the experiments show that lightweight BERT fine-tuning outperforms LLM few-shot prompting while being more resource-efficient. The findings suggest that contemporary GPT models may not adequately capture nuanced response patterns in assessment tasks requiring fine-grained semantic discrimination and value-sensitive reasoning, and they offer practitioners guidance for choosing cost-effective assessment methods in value-sensitive educational settings.
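To make the comparison concrete, here is a minimal sketch of an offline BERT fine-tuning pipeline of the kind the study evaluates, built on Hugging Face `transformers`. The label set, example responses, and hyperparameters are illustrative assumptions, not the paper's reported configuration:

```python
# Sketch: fine-tuning BERT as a binary classifier of tutor responses.
# Labels, examples, and hyperparameters are hypothetical placeholders.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # e.g., desired vs. undesired response

# Stand-in for the 243 human-annotated open responses.
data = Dataset.from_dict({
    "text": ["I would check in privately and advocate for the student.",
             "I would just move on with the lesson."],
    "label": [1, 0],
})

def tokenize(batch):
    # Pad/truncate so the default collator can batch fixed-length tensors.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-equity-clf",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=data,
)
trainer.train()  # runs locally once the base model is cached
```

Because the fine-tuned weights live locally, inference afterwards incurs no per-call API cost, which is the resource-efficiency argument the summary makes.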

📝 Abstract
Assessing learners in ill-defined domains, such as scenario-based human tutoring training, is an area of limited research. Equity training requires a nuanced understanding of context, but do contemporary large language models (LLMs) have a knowledge base that can navigate these nuances? Legacy transformer models like BERT, in contrast, have less real-world knowledge but can be more easily fine-tuned than commercial LLMs. Here, we study whether fine-tuning BERT on human annotations outperforms state-of-the-art LLMs (GPT-4o and GPT-4-Turbo) with few-shot prompting and instruction. We evaluate performance on four prediction tasks involving generating and explaining open-ended responses in advocacy-focused training lessons taken by higher-education students learning to become middle school tutors. Leveraging a dataset of 243 human-annotated open responses from these lessons, we find that BERT demonstrates superior performance using an offline fine-tuning approach, which is also more resource-efficient than prompting commercial GPT models. We conclude that contemporary GPT models may not adequately capture nuanced response patterns, especially in complex tasks requiring explanation. This work advances the understanding of AI-driven learner evaluation under the lens of fine-tuning versus few-shot prompting on the nuanced task of equity training, contributing to more effective training solutions and assisting practitioners in choosing adequate assessment methods.
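For contrast, the few-shot prompting condition can be sketched with the OpenAI Python client as below. The instruction text, label names, and in-context examples are hypothetical; the paper's actual prompts are not reproduced here:

```python
# Sketch: few-shot classification of a tutor response with a GPT-4 variant.
# Prompt wording, labels, and examples are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FEW_SHOT_PROMPT = """You grade open-ended tutor responses from an \
equity-focused training lesson. Label each response DESIRED or UNDESIRED \
and briefly explain why.

Response: "I would privately check in with the student and advocate for them."
Label: DESIRED

Response: "I would ignore it and keep teaching."
Label: UNDESIRED

Response: "{response}"
Label:"""

def classify(response_text: str, model: str = "gpt-4o") -> str:
    completion = client.chat.completions.create(
        model=model,    # or "gpt-4-turbo", the other variant compared
        temperature=0,  # deterministic grading
        messages=[{"role": "user",
                   "content": FEW_SHOT_PROMPT.format(response=response_text)}],
    )
    return completion.choices[0].message.content.strip()

print(classify("I would ask the student how they are feeling."))
```

Each prediction here is a metered API call, whereas the fine-tuned BERT sketch above pays its compute cost once at training time.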
Problem

Research questions and friction points this paper addresses.

GPT-4
BERT Classifier
Equity Training Evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned BERT
Open-ended Question Evaluation
Cost-effective AI Solution