DeepSeek performs better than other Large Language Models in Dental Cases

📅 2025-09-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Prior work lacks systematic empirical evaluation of large language models (LLMs) in longitudinal clinical reasoning for dentistry—particularly in interpreting dynamic periodontal case narratives. Method: We introduce a hybrid evaluation framework combining automated metrics (e.g., faithfulness) with blinded assessments by licensed dentists, applied to 34 standardized longitudinal dental cases in an open-ended clinical question-answering task. We benchmark DeepSeek against leading general-purpose LLMs. Results: DeepSeek achieves significantly higher faithfulness (median score: 0.528) and expert composite rating (4.5/5) while maintaining high readability. This study fills a critical empirical gap in LLM-based longitudinal oral health narrative analysis and demonstrates DeepSeek’s practical utility as a domain-specialized intelligent agent for medical education and clinical decision support.
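The automated faithfulness metric measures how well an answer stays grounded in the case narrative. One common claim-support formulation (a minimal sketch; the paper's exact implementation is not specified here) splits each answer into atomic claims, labels each claim as supported or unsupported by the narrative via an external verifier (e.g. an NLI model or human annotator), scores each answer as the fraction of supported claims, and reports the per-model median. The labels below are hypothetical toy data:

```python
from statistics import median

def faithfulness(claim_labels: list[bool]) -> float:
    """Faithfulness of one answer: the fraction of its atomic claims
    judged to be supported by the case narrative. Support labels are
    assumed to come from an external claim verifier."""
    return sum(claim_labels) / len(claim_labels) if claim_labels else 0.0

def model_median(per_answer_labels: list[list[bool]]) -> float:
    """Aggregate per-answer faithfulness into a per-model median,
    the summary statistic reported in the study (e.g. 0.528)."""
    return median(faithfulness(labels) for labels in per_answer_labels)

# Toy example: hypothetical support labels for three answers.
labels = [[True, True, False], [True, False], [True, True, True, False]]
print(model_median(labels))  # median of 2/3, 1/2, 3/4
```

The median (rather than the mean) is robust to the occasional answer whose claims are almost entirely unsupported, which matters with only 34 cases per model.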

📝 Abstract
Large language models (LLMs) hold transformative potential in healthcare, yet their capacity to interpret longitudinal patient narratives remains inadequately explored. Dentistry, with its rich repository of structured clinical data, presents a unique opportunity to rigorously assess LLMs' reasoning abilities. Alongside several established commercial LLMs, DeepSeek, a model that attracted significant attention in early 2025, has joined the competition. This study evaluated four state-of-the-art LLMs (GPT-4o, Gemini 2.0 Flash, Copilot, and DeepSeek V3) on their ability to analyze longitudinal dental case vignettes through open-ended clinical tasks. Using 34 standardized longitudinal periodontal cases (comprising 258 question-answer pairs), we assessed model performance via automated metrics and blinded evaluations by licensed dentists. DeepSeek emerged as the top performer, demonstrating superior faithfulness (median score = 0.528 vs. 0.367–0.457) and higher expert ratings (median = 4.5/5 vs. 4.0/5), without significantly compromising readability. Our study positions DeepSeek as the leading LLM for dental case analysis, endorses its integration as an adjunct tool in both medical education and research, and highlights its potential as a domain-specific agent.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' ability to interpret longitudinal dental patient narratives
Assessing reasoning capabilities through standardized periodontal case vignettes
Comparing DeepSeek's clinical analysis performance against other state-of-the-art models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluated four LLMs on dental case analysis
Used standardized longitudinal periodontal cases
DeepSeek showed superior faithfulness and expert ratings
Hexian Zhang
Division of Applied Oral Sciences & Community Dental Care, Faculty of Dentistry, The University of Hong Kong, 34 Hospital Road, Hong Kong SAR, China.
Xinyu Yan
Division of Paediatric Dentistry & Orthodontics, Faculty of Dentistry, The University of Hong Kong, 34 Hospital Road, Hong Kong SAR, China.
Yanqi Yang
Division of Paediatric Dentistry & Orthodontics, Faculty of Dentistry, The University of Hong Kong, 34 Hospital Road, Hong Kong SAR, China.
Lijian Jin
Division of Periodontology & Implant Dentistry, Faculty of Dentistry, The University of Hong Kong, 34 Hospital Road, Hong Kong SAR, China.
Ping Yang
Division of Epidemiology, Department of Quantitative Health Sciences, Mayo Clinic, Scottsdale, AZ 85259, USA.
Junwen Wang
Faculty of Dentistry, The University of Hong Kong
Bioinformatics · Computational Genomics · Systems Biology · Precision Dentistry · Precision Medicine