Prompt-Based Caption Generation for Single-Tooth Dental Images Using Vision-Language Models

📅 2026-03-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing dental visual-language datasets primarily focus on full-mouth views or specific pathologies, lacking fine-grained annotations at the individual tooth level, which hinders the training of precise vision-language models. To address this gap, this work proposes a prompt-driven description generation framework that leverages vision-language models (VLMs) with carefully designed instructional prompts to produce semantically rich and visually aligned dental descriptions from single-tooth RGB images. Experimental results demonstrate that the proposed approach significantly improves both the quality of generated text and its alignment with image content, thereby enhancing VLM performance on single-tooth visual understanding tasks and filling a critical void in fine-grained dental visual-language data.

📝 Abstract
Digital dentistry has advanced significantly with the advent of deep learning. However, most deep learning-based dental image analysis models focus on narrow tasks such as tooth segmentation, tooth detection, cavity detection, and gingivitis classification. A specialized model with holistic knowledge of teeth, capable of performing dental image analysis tasks based on that knowledge, is still lacking. Datasets of dental images with captions can help build such a model. To the best of our knowledge, existing dental image datasets with captions are few in number and limited in scope. In many of these datasets, the captions describe the entire mouth while the images show only the anterior view, so posterior teeth such as molars are not clearly visible, which limits the usefulness of the captions for training vision-language models. Additionally, the captions focus on a single disease (gingivitis) rather than providing a holistic assessment of each tooth. Moreover, tooth disease scores are typically assigned to individual teeth, and each tooth is treated as a separate entity in orthodontic procedures; it is therefore important to have captions for single-tooth images. As far as we know, no such dataset of single-tooth images with dental captions exists. In this work, we aim to bridge that gap by assessing whether Vision-Language Models (VLMs) can generate captions for dental images and by evaluating the extent and quality of those captions. Our findings suggest that guided prompts help VLMs generate meaningful captions, and we show that the captions produced by our framework are better anchored in the visual aspects of dental images. We selected RGB images because they have greater potential in consumer scenarios.
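The guided-prompting idea described in the abstract can be sketched in a few lines. The function and attribute checklist below are our assumptions for illustration, not the authors' actual prompts: it composes an instructional prompt that anchors a VLM caption in visible features of a single-tooth RGB image, which could then be passed to any off-the-shelf vision-language model.

```python
# Hypothetical sketch of guided prompting for single-tooth captioning.
# The attribute checklist and wording are illustrative assumptions,
# not the prompts used in the paper.

def build_guided_prompt(tooth_type="molar", attributes=None):
    """Compose an instructional prompt that keeps the VLM's caption
    grounded in what is actually visible in a single-tooth RGB image."""
    if attributes is None:
        attributes = [
            "color and shade",
            "surface condition",
            "visible wear or caries",
            "shape and alignment",
        ]
    checklist = "; ".join(attributes)
    return (
        f"You are shown an RGB photograph of a single {tooth_type}. "
        f"Describe only what is visible, covering: {checklist}. "
        "Do not speculate about conditions that cannot be seen."
    )


if __name__ == "__main__":
    # The resulting string would be supplied as the text prompt
    # alongside the image when querying a vision-language model.
    print(build_guided_prompt("incisor"))
```

Constraining the prompt to a visual checklist, as opposed to an open-ended "describe this image" request, is one plausible way to obtain the visually anchored captions the paper reports.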
Problem

Research questions and friction points this paper is trying to address.

single-tooth dental images
caption generation
vision-language models
dental image dataset
holistic tooth assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

vision-language models
prompt-based captioning
single-tooth dental images
guided prompting
dental image captioning
Authors

Anastasiia Sukhanova
Marshall University

Aiden Taylor
Marshall University

Julian Myers
Marshall University

Zichun Wang
Student, West Virginia State University, U.S.A.
AI, Machine Learning, LLM

Kartha Veerya Jammuladinne
West Virginia State University

Satya Sri Rajiteswari Nimmagadda
Marshall University

Aniruddha Maiti
West Virginia State University
Artificial Intelligence, Deep Learning, NLP, Data Science, AI & Data Science in Medical Domain

Ananya Jana
Assistant Professor, Marshall University
Deep Learning, Artificial Intelligence, Biomedical Imaging, Computer Vision, Machine Learning