🤖 AI Summary
This work addresses the challenge of subjective bias in human assessment of soft skills—such as empathy, ethical judgment, and communication—during Multiple Mini-Interviews (MMIs), and the inability of existing large language model (LLM) approaches to capture the implicit signals in candidates' narrative responses. The authors propose a structured multi-agent prompting framework that decouples scoring into two stages: transcript refinement and criterion-specific evaluation. By combining three-shot in-context learning with a large instruction-tuned model, the method scores responses automatically without any additional training. On an MMI dataset it achieves an average Quadratic Weighted Kappa (QWK) of 0.62, substantially outperforming a specialized fine-tuned baseline (0.32), and on the ASAP benchmark it rivals domain-specific state-of-the-art models, demonstrating both reliability and generalization on complex, subjectively judged tasks.
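Quadratic Weighted Kappa, the agreement metric reported above, penalizes rater disagreements by the squared distance between scores, so near-misses cost less than large discrepancies. A minimal sketch of the standard computation (not code from the paper):

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, min_rating=None, max_rating=None):
    """Agreement between two integer rating vectors, with disagreements
    weighted by the squared distance between the scores."""
    rater_a = np.asarray(rater_a, dtype=int)
    rater_b = np.asarray(rater_b, dtype=int)
    if min_rating is None:
        min_rating = int(min(rater_a.min(), rater_b.min()))
    if max_rating is None:
        max_rating = int(max(rater_a.max(), rater_b.max()))
    n = max_rating - min_rating + 1
    # Observed agreement matrix (confusion matrix of the two raters)
    O = np.zeros((n, n))
    for a, b in zip(rater_a, rater_b):
        O[a - min_rating, b - min_rating] += 1
    # Expected matrix under independence (outer product of the marginals)
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    # Quadratic weights: squared distance between rating indices, normalized
    idx = np.arange(n)
    W = (idx[:, None] - idx[None, :]) ** 2 / (n - 1) ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()

# Perfect agreement yields 1.0; chance-level agreement yields ~0.0
print(quadratic_weighted_kappa([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0
```

An average QWK of 0.62 therefore indicates substantial, though imperfect, agreement between the model's scores and the human reference scores.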
📝 Abstract
Assessing soft skills such as empathy, ethical judgment, and communication is essential in competitive selection processes, yet human scoring is often inconsistent and biased. While Large Language Models (LLMs) have improved Automated Essay Scoring (AES), we show that state-of-the-art rationale-based fine-tuning methods struggle with the abstract, context-dependent nature of Multiple Mini-Interviews (MMIs), missing the implicit signals embedded in candidate narratives. We introduce a multi-agent prompting framework that decomposes the evaluation process into transcript refinement and criterion-specific scoring. Using 3-shot in-context learning with a large instruction-tuned model, our approach outperforms specialised fine-tuned baselines (average QWK 0.62 vs. 0.32) and achieves reliability comparable to human experts. We further demonstrate the generalisability of our framework on the ASAP benchmark, where it rivals domain-specific state-of-the-art models without additional training. These findings suggest that for complex, subjective reasoning tasks, structured prompt engineering may offer a scalable alternative to data-intensive fine-tuning, reshaping how LLMs can be applied to automated assessment.
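The two-stage pipeline described above can be sketched as follows. This is an illustrative reconstruction only: `call_llm` is a placeholder for any chat-completion API, and the prompt wording, criterion names, and exemplar format are invented for demonstration, not the paper's actual prompts.

```python
# Sketch of a two-stage multi-agent prompting pipeline: a refiner agent cleans
# the transcript, then one scorer agent per criterion rates it with 3-shot
# in-context examples. All prompts here are hypothetical.

def call_llm(prompt: str) -> str:
    """Stand-in for an instruction-tuned LLM call (e.g. a chat-completions API)."""
    raise NotImplementedError("plug in a real model endpoint here")

def refine_transcript(raw_transcript: str, llm=call_llm) -> str:
    """Stage 1: clean ASR noise and disfluencies while preserving meaning."""
    prompt = (
        "Clean up this interview transcript. Fix transcription errors and "
        "remove filler words, but preserve the candidate's meaning:\n\n"
        + raw_transcript
    )
    return llm(prompt)

def score_criterion(transcript: str, criterion: str, exemplars: list, llm=call_llm) -> str:
    """Stage 2: score one criterion, conditioning on three scored exemplars."""
    shots = "\n\n".join(
        f"Transcript: {t}\nScore ({criterion}): {s}" for t, s in exemplars[:3]
    )
    prompt = (
        f"You assess MMI responses on the criterion '{criterion}'.\n\n"
        f"{shots}\n\nTranscript: {transcript}\nScore ({criterion}):"
    )
    return llm(prompt)

def evaluate(raw_transcript: str, criteria: list, exemplars_by_criterion: dict, llm=call_llm) -> dict:
    """Run stage 1 once, then stage 2 once per criterion."""
    refined = refine_transcript(raw_transcript, llm)
    return {c: score_criterion(refined, c, exemplars_by_criterion[c], llm) for c in criteria}
```

Decoupling refinement from scoring lets each prompt carry a single, narrow instruction, which is the structural idea the abstract credits for the gain over end-to-end fine-tuned scoring.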