🤖 AI Summary
This study addresses the challenge that existing vision-language models struggle to generate clinically appropriate chest X-ray diagnostic reports in Vietnamese due to the scarcity of domain-specific medical data. To bridge this gap, the authors introduce ViX-Ray, the first multimodal dataset tailored to Vietnamese clinical settings, comprising 5,400 chest X-ray images paired with expert-written radiology reports, together with a systematic analysis of the dataset's linguistic characteristics. Leveraging this dataset, they fine-tune five open-source vision-language models and benchmark their performance against closed-source counterparts such as GPT-4V and Gemini. Results show that fine-tuned open-source models approach clinical-level description quality on certain tasks, yet still exhibit hallucinations and insufficient precision in impression generation, underscoring both the difficulty of the dataset and its value as a benchmark.
📝 Abstract
Vietnamese medical research has become an increasingly vital domain, particularly with the rise of intelligent technologies aimed at reducing the time and resource burdens of clinical diagnosis. Recent advances in vision-language models (VLMs), such as Gemini and GPT-4V, have sparked growing interest in applying AI to healthcare. However, most existing VLMs lack exposure to Vietnamese medical data, limiting their ability to generate accurate and contextually appropriate diagnostic outputs for Vietnamese patients. To address this challenge, we introduce ViX-Ray, a novel dataset comprising 5,400 Vietnamese chest X-ray images annotated with findings and impressions written by physicians at a major Vietnamese hospital. We analyze linguistic patterns within the dataset, including the frequency of mentioned body parts and diagnoses, to identify domain-specific characteristics of Vietnamese radiology reports. Furthermore, we fine-tune five state-of-the-art open-source VLMs on ViX-Ray and compare their performance against leading proprietary models, GPT-4V and Gemini. Our results show that while several models generate outputs partially aligned with clinical ground truth, they often suffer from low precision and excessive hallucination, especially in impression generation. These findings not only demonstrate the complexity and challenge of our dataset but also establish ViX-Ray as a valuable benchmark for evaluating and advancing vision-language models in the Vietnamese clinical domain.