LLM-as-a-Fuzzy-Judge: Fine-Tuning Large Language Models as a Clinical Evaluation Judge with Fuzzy Logic

📅 2025-06-12
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the challenges of scaling, standardizing, and objectifying clinical communication competency assessment for medical students, a task traditionally reliant on subjective expert judgment. Methodologically, it introduces a human-preference-aligned, interpretable automated evaluation framework that integrates fuzzy logic with large language models (LLMs). A fine-grained fuzzy annotation scheme is constructed across four dimensions: professionalism, medical relevance, ethical conduct, and contextual distraction; LLM scoring capability is then optimized via prompt engineering and supervised fine-tuning (SFT). Experimental results demonstrate an overall assessment accuracy exceeding 80%, with over 90% accuracy on core dimensions. The work advances quantitative modeling of clinical judgment, significantly enhancing scalability, consistency, and interpretability in communication competency assessment within medical education.

๐Ÿ“ Abstract
Clinical communication skills are critical in medical education, yet practicing and assessing them at scale is challenging. Although LLM-powered clinical scenario simulations have shown promise in enhancing medical students' clinical practice, providing automated, scalable clinical evaluation that follows nuanced physician judgment is difficult. This paper combines fuzzy logic with large language models (LLMs) and proposes LLM-as-a-Fuzzy-Judge to address the challenge of aligning the automated evaluation of medical students' clinical skills with physicians' subjective preferences. LLM-as-a-Fuzzy-Judge is an approach in which an LLM is fine-tuned to evaluate medical students' utterances within student-AI patient conversation scripts, based on human annotations from four fuzzy sets: Professionalism, Medical Relevance, Ethical Behavior, and Contextual Distraction. The methodology begins with data collection from the LLM-powered medical education system and data annotation based on multidimensional fuzzy sets, followed by prompt engineering and supervised fine-tuning (SFT) of pre-trained LLMs using these human annotations. The results show that LLM-as-a-Fuzzy-Judge achieves over 80% accuracy, with major criteria items over 90%, effectively leveraging fuzzy logic and LLMs to deliver interpretable, human-aligned assessment. This work suggests the viability of combining fuzzy logic and LLMs to align with human preferences, advances automated evaluation in medical education, and supports more robust assessment and judgment practices. The GitHub repository of this work is available at https://github.com/2sigmaEdTech/LLMAsAJudge
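The four-dimension fuzzy annotation scheme described in the abstract can be illustrated with a minimal sketch. The membership values, weights, and the weighted-mean aggregation operator below are assumptions chosen for illustration; the paper's actual membership functions and aggregation rules are defined by its annotation scheme and repository, not reproduced here.

```python
# Illustrative sketch: combining fuzzy membership degrees for one
# student utterance across the paper's four evaluation dimensions.
# Values and weights are hypothetical, not taken from the paper.

DIMENSIONS = ["professionalism", "medical_relevance",
              "ethical_behavior", "contextual_distraction"]

def fuzzy_aggregate(memberships, weights=None):
    """Combine per-dimension membership degrees (each in [0, 1])
    into a single score using a weighted mean."""
    if weights is None:
        weights = {d: 1.0 for d in memberships}  # equal weighting by default
    total_weight = sum(weights[d] for d in memberships)
    return sum(memberships[d] * weights[d] for d in memberships) / total_weight

# Example: an utterance rated strong on professionalism and ethics,
# with some contextual distraction (higher = less distracted here).
scores = {"professionalism": 0.9, "medical_relevance": 0.8,
          "ethical_behavior": 1.0, "contextual_distraction": 0.7}
print(round(fuzzy_aggregate(scores), 4))  # → 0.85
```

Fuzzy sets suit this task because criteria such as "professional" or "medically relevant" are matters of degree rather than binary judgments, which is exactly the gap between rubric checklists and nuanced physician assessment that the paper targets.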
Problem

Research questions and friction points this paper is trying to address.

Automating clinical skills evaluation aligned with physician preferences
Assessing medical students' communication using fuzzy logic and LLMs
Enhancing interpretability and accuracy in medical education assessments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned LLM with fuzzy logic for evaluation
Multidimensional fuzzy sets for human-aligned assessment
Over 80% accuracy in clinical skill evaluation
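The judging step itself amounts to prompting a fine-tuned LLM to score an utterance along the four fuzzy dimensions. The template below is a hypothetical sketch of such a prompt; its wording, field names, and score range are illustrative assumptions, not the prompt used in the paper's repository.

```python
# Hypothetical LLM-as-a-Judge prompt construction; wording and score
# range are illustrative, not taken from the paper.

JUDGE_PROMPT = """You are evaluating a medical student's utterance in a \
simulated patient conversation. Rate each dimension from 0.0 to 1.0:
- Professionalism
- Medical Relevance
- Ethical Behavior
- Contextual Distraction

Utterance: {utterance}
Return one line per dimension in the form `name: score`."""

def build_judge_prompt(utterance: str) -> str:
    """Fill the template with the student utterance to be judged."""
    return JUDGE_PROMPT.format(utterance=utterance)

prompt = build_judge_prompt("Can you describe when the chest pain started?")
print(prompt.splitlines()[0])
```

In the paper's pipeline, prompts like this would first be used for prompt engineering with a pre-trained model, and the human fuzzy annotations would then serve as SFT targets so the fine-tuned judge reproduces physician-aligned scores.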