🤖 AI Summary
Existing large vision-language models (LVLMs) perform well on general medical tasks but fall short on dental panoramic radiograph analysis, owing to the lack of domain-specific multimodal data and of evaluation benchmarks tailored to dense anatomical structures and subtle pathological features. To address this gap, we introduce MMOral, the first large-scale multimodal instruction dataset for panoramic X-ray interpretation, comprising 20,563 panoramic X-ray images paired with 1.3 million curated instruction-following instances, alongside MMOral-Bench, a comprehensive evaluation benchmark. We further propose OralGPT, a fine-tuned variant of Qwen2.5-VL-7B that achieves a 24.73% improvement over its baseline after a single epoch of supervised fine-tuning. Notably, GPT-4o scores only 41.45% on MMOral-Bench, underscoring the task's difficulty and the necessity of domain adaptation. This work establishes a reproducible foundation for dental multimodal AI, providing a large-scale dataset, a rigorous evaluation protocol, and an effective methodology for developing specialized medical LVLMs.
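To make the fine-tuning step concrete, below is a minimal sketch of a single-epoch SFT update for Qwen2.5-VL-7B using Hugging Face transformers. The checkpoint ID, learning rate, and full-sequence label supervision are illustrative assumptions; the paper specifies only that OralGPT results from one epoch of SFT on the MMOral instruction data.

```python
# Minimal single-epoch SFT sketch (hypothetical recipe; MMOral's actual
# training configuration is not detailed in the abstract).
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

MODEL_ID = "Qwen/Qwen2.5-VL-7B-Instruct"  # assumed base checkpoint
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def sft_step(image: Image.Image, question: str, answer: str) -> float:
    """One supervised update: condition on the panoramic X-ray and the
    instruction, and supervise the target answer via the causal-LM loss."""
    messages = [
        {"role": "user", "content": [
            {"type": "image"},
            {"type": "text", "text": question},
        ]},
        {"role": "assistant", "content": [{"type": "text", "text": answer}]},
    ]
    text = processor.apply_chat_template(messages, tokenize=False)
    inputs = processor(text=[text], images=[image],
                       return_tensors="pt").to(model.device)
    # Simplification: supervise the whole sequence; real recipes typically
    # mask the prompt tokens and compute loss on the answer only.
    outputs = model(**inputs, labels=inputs["input_ids"].clone())
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()

# One epoch = a single pass over the 1.3M MMOral instruction instances.
```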
📝 Abstract
Recent advances in large vision-language models (LVLMs) have demonstrated strong performance on general-purpose medical tasks. However, their effectiveness in specialized domains such as dentistry remains underexplored. In particular, panoramic X-rays, a widely used imaging modality in oral radiology, pose interpretive challenges due to dense anatomical structures and subtle pathological cues, challenges that existing medical benchmarks and instruction datasets do not capture. To address this gap, we introduce MMOral, the first large-scale multimodal instruction dataset and benchmark tailored for panoramic X-ray interpretation. MMOral consists of 20,563 annotated images paired with 1.3 million instruction-following instances across diverse task types, including attribute extraction, report generation, visual question answering, and image-grounded dialogue. In addition, we present MMOral-Bench, a comprehensive evaluation suite covering five key diagnostic dimensions in dentistry. We evaluate 64 LVLMs on MMOral-Bench and find that even the best-performing model, GPT-4o, achieves only 41.45% accuracy, revealing significant limitations of current models in this domain. To spur progress in this area, we also propose OralGPT, obtained by supervised fine-tuning (SFT) of Qwen2.5-VL-7B on our meticulously curated MMOral instruction dataset. Remarkably, a single epoch of SFT yields substantial performance gains: OralGPT improves by 24.73% over its base model. Both MMOral and OralGPT can serve as a critical foundation for intelligent dentistry and enable more clinically impactful multimodal AI systems in the dental field. The dataset, model, benchmark, and evaluation suite are available at https://github.com/isbrycee/OralGPT.
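For readers unfamiliar with instruction-tuning data, the sketch below shows what one MMOral-style instruction-following instance might look like in the common LLaVA-style conversation layout. All field names and values here are hypothetical; the dataset's actual schema is documented in the repository linked above.

```python
# Hypothetical MMOral-style VQA instance (LLaVA-style layout); the real
# schema is defined at https://github.com/isbrycee/OralGPT.
example = {
    "image": "panoramic_00042.png",       # one of the 20,563 panoramic X-rays
    "task": "visual_question_answering",  # other task types: attribute
                                          # extraction, report generation,
                                          # image-grounded dialogue
    "conversations": [
        {"from": "human",
         "value": "<image>\nAre there any impacted third molars visible?"},
        {"from": "gpt",
         "value": "Yes, the lower-left third molar appears mesially impacted."},
    ],
}
```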