Predicting LLM Correctness in Prosthodontics Using Metadata and Hallucination Signals

📅 2025-12-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the reliability challenge posed by hallucinations in large language models (LLMs) for high-stakes medical education, specifically multiple-choice question answering in dental prosthodontics. It proposes the first correctness prediction framework that integrates response metadata (e.g., response length, token probability distribution, self-check confidence) with multidimensional hallucination signals, evaluated across GPT-4o and OSS-120B under three prompting strategies. The key finding is that while prompting strategies do not significantly alter overall accuracy, they markedly modulate the predictive utility of metadata for error detection; hallucination signals serve as strong error indicators, whereas metadata alone yields unreliable predictions. Using logistic regression with domain-informed feature engineering, the framework achieves 83.12% prediction accuracy, outperforming the "all-correct" baseline by 7.14%. This work establishes an interpretable, lightweight paradigm for trustworthy LLM evaluation in safety-critical domains.

📝 Abstract
Large language models (LLMs) are increasingly adopted in high-stakes domains such as healthcare and medical education, where the risk of generating factually incorrect (i.e., hallucinated) information is a major concern. While significant efforts have been made to detect and mitigate such hallucinations, predicting whether an LLM's response is correct remains a critical yet underexplored problem. This study investigates the feasibility of predicting correctness by analyzing a general-purpose model (GPT-4o) and a reasoning-centric model (OSS-120B) on a multiple-choice prosthodontics exam. We utilize metadata and hallucination signals across three distinct prompting strategies to build a correctness predictor for each (model, prompting) pair. Our findings demonstrate that this metadata-based approach can improve accuracy by up to +7.14% and achieve a precision of 83.12% over a baseline that assumes all answers are correct. We further show that while actual hallucination is a strong indicator of incorrectness, metadata signals alone are not reliable predictors of hallucination. Finally, we reveal that prompting strategies, despite not affecting overall accuracy, significantly alter the models' internal behaviors and the predictive utility of their metadata. These results present a promising direction for developing reliability signals in LLMs but also highlight that the methods explored in this paper are not yet robust enough for critical, high-stakes deployment.
Problem

Research questions and friction points this paper is trying to address.

Predicting LLM correctness in prosthodontics using metadata and hallucination signals
Evaluating metadata-based approach to improve accuracy over baseline assumptions
Analyzing how prompting strategies alter model behaviors and predictive utility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using metadata and hallucination signals to predict LLM correctness
Applying three prompting strategies to analyze model behavior
Achieving improved accuracy with metadata-based correctness predictors
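The approach described above, a logistic-regression correctness predictor over per-response metadata and a hallucination signal, can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's actual pipeline: the feature set (normalized response length, mean token probability, self-check confidence, hallucination flag), the data generator, and all function names are assumptions for the sketch, and the model is trained with plain gradient descent rather than any specific library solver.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=300):
    """Plain-Python logistic regression fit by stochastic gradient descent."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) >= 0.5 else 0

# Synthetic toy data (an assumption, not the paper's exam responses):
# features = [normalized response length, mean token probability,
#             self-check confidence, hallucination flag]; label 1 = correct.
random.seed(0)
def make_example():
    halluc = random.random() < 0.3
    correct = 0 if (halluc and random.random() < 0.8) else 1
    length = random.random()
    tok_p = (0.8 if correct else 0.5) + 0.2 * random.random()
    conf = (0.7 if correct else 0.4) + 0.3 * random.random()
    return [length, tok_p, conf, 1.0 if halluc else 0.0], correct

data = [make_example() for _ in range(400)]
X, y = [d[0] for d in data], [d[1] for d in data]
w, b = train_logreg(X[:300], y[:300])

test_X, test_y = X[300:], y[300:]
acc = sum(predict(w, b, xi) == yi for xi, yi in zip(test_X, test_y)) / len(test_X)
baseline = sum(test_y) / len(test_y)  # "all-correct" baseline accuracy
print(f"predictor accuracy: {acc:.2f}, all-correct baseline: {baseline:.2f}")
```

The "all-correct" baseline simply predicts every answer is right, so it scores the model's raw exam accuracy; the predictor beats it only insofar as the metadata and hallucination features carry genuine error signal, which mirrors the paper's framing of the +7.14% improvement.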
Lucky Susanto
Monash University Indonesia
Natural Language Processing · Machine Learning · Neural Machine Translation · Low Resource Settings
Anasta Pranawijayana
Independent Researcher
Cortino Sukotjo
Department of Prosthodontics, University of Pittsburgh, Pittsburgh, Pennsylvania
Soni Prasad
Department of Restorative Sciences, University of North Carolina Adams School of Dentistry, Chapel Hill, North Carolina
Derry Wijaya
Department of Data Science, Monash University Indonesia, Tangerang, Indonesia; Department of Computer Science, Boston University, Boston, Massachusetts