🤖 AI Summary
This study addresses the reliability challenge posed by hallucinations in large language models (LLMs) for high-stakes medical education, specifically multiple-choice question answering in dental prosthodontics. We propose the first correctness prediction framework integrating metadata (e.g., response length, token probability distribution, self-check confidence) with multidimensional hallucination signals, evaluated across GPT-4o and OSS-120B models under three prompting strategies. Our key finding is that while prompting strategies do not significantly alter overall accuracy, they markedly modulate the predictive utility of metadata for error detection; hallucination signals serve as strong error indicators, whereas metadata alone yields unreliable predictions. Using logistic regression with domain-informed feature engineering, our framework achieves a precision of 83.12% and improves accuracy by up to 7.14% over the "all-correct" baseline. This work establishes an interpretable, lightweight paradigm for trustworthy LLM evaluation in safety-critical domains.
📝 Abstract
Large language models (LLMs) are increasingly adopted in high-stakes domains such as healthcare and medical education, where the risk of generating factually incorrect (i.e., hallucinated) information is a major concern. While significant efforts have been made to detect and mitigate such hallucinations, predicting whether an LLM's response is correct remains a critical yet underexplored problem. This study investigates the feasibility of predicting correctness by analyzing a general-purpose model (GPT-4o) and a reasoning-centric model (OSS-120B) on a multiple-choice prosthodontics exam. We utilize metadata and hallucination signals across three distinct prompting strategies to build a correctness predictor for each (model, prompting) pair. Our findings demonstrate that this metadata-based approach can improve accuracy by up to 7.14% over a baseline that assumes all answers are correct, while achieving a precision of 83.12%. We further show that while actual hallucination is a strong indicator of incorrectness, metadata signals alone are not reliable predictors of hallucination. Finally, we reveal that prompting strategies, despite not affecting overall accuracy, significantly alter the models' internal behaviors and the predictive utility of their metadata. These results point to a promising direction for developing reliability signals in LLMs, but also highlight that the methods explored in this paper are not yet robust enough for critical, high-stakes deployment.
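The correctness-prediction setup described above can be sketched in a few lines. This is a toy illustration on synthetic data, not the authors' released code: the feature names (response length, mean token log-probability, self-check confidence) and the logistic-regression choice are assumptions taken from the summary's description of the metadata features.

```python
# Toy sketch of a metadata-based correctness predictor (assumed features,
# synthetic labels) in the spirit of the framework described in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400

# Hypothetical per-response metadata: length, mean token log-prob, self-check confidence.
X = np.column_stack([
    rng.normal(120, 30, n),    # response length in tokens
    rng.normal(-0.8, 0.3, n),  # mean token log-probability
    rng.uniform(0, 1, n),      # self-reported confidence in [0, 1]
])

# Synthetic correctness labels, loosely tied to confidence and log-probability
# so the predictor has a learnable signal.
logits = 2.5 * X[:, 2] + 1.5 * (X[:, 1] + 0.8)
y = (rng.uniform(0, 1, n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

One appeal of this lightweight approach, which the summary emphasizes, is interpretability: the learned coefficients directly show how much each metadata signal contributes to the correctness prediction.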