Enhancing Chest X-ray Classification through Knowledge Injection in Cross-Modality Learning

📅 2025-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the underutilization of medical knowledge in cross-modal chest X-ray (CXR) classification. We propose a set-theoretic knowledge injection framework that explicitly models anatomical structures, pathological features, and clinical relationships to generate fine-grained, controllable-granularity medical descriptive texts for targeted CLIP fine-tuning. Our method integrates domain-specific large language models with zero-shot learning, achieving 72.5% zero-shot classification accuracy on CheXpert—substantially outperforming human-annotated text baselines (49.9%). Key contributions include: (1) the first set-theory-driven, interpretable knowledge injection paradigm for medical vision-language modeling; (2) empirical validation that fine-grained, high-density medical knowledge critically enhances cross-modal diagnostic performance; and (3) a scalable, tunable knowledge-augmentation pathway toward clinically deployable image understanding.

📝 Abstract
The integration of artificial intelligence in medical imaging has shown tremendous potential, yet the relationship between pre-trained knowledge and performance in cross-modality learning remains unclear. This study investigates how explicitly injecting medical knowledge into the learning process affects the performance of cross-modality classification, focusing on Chest X-ray (CXR) images. We introduce a novel Set Theory-based knowledge injection framework that generates captions for CXR images with controllable knowledge granularity. Using this framework, we fine-tune the CLIP model on captions with varying levels of medical information. We evaluate the model's performance through zero-shot classification on the CheXpert dataset, a benchmark for CXR classification. Our results demonstrate that injecting fine-grained medical knowledge substantially improves classification accuracy, achieving 72.5% compared to 49.9% when using human-generated captions. This highlights the crucial role of domain-specific knowledge in medical cross-modality learning. Furthermore, we explore the influence of knowledge density and the use of domain-specific Large Language Models (LLMs) for caption generation, finding that denser knowledge and specialized LLMs contribute to enhanced performance. This research advances medical image analysis by demonstrating the effectiveness of knowledge injection for improving automated CXR classification, paving the way for more accurate and reliable diagnostic tools.
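The zero-shot evaluation the abstract describes reduces to comparing an image embedding against one text embedding per candidate label and picking the nearest. A minimal sketch of that step, with toy NumPy embeddings standing in for CLIP's image and text encoders and illustrative CheXpert-style label names (the paper's actual prompts are not shown here):

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, labels):
    """Return the label whose text embedding is most cosine-similar to the image."""
    # Normalize to unit length so dot products equal cosine similarity.
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img  # one similarity score per label
    return labels[int(np.argmax(sims))], sims

# Illustrative CheXpert-style labels; in the real setup each label would be
# rendered as a text prompt and encoded with the fine-tuned CLIP text encoder.
labels = ["cardiomegaly", "pleural effusion", "no finding"]
rng = np.random.default_rng(0)
text_embs = rng.normal(size=(3, 512))
# Toy image embedding constructed to lie near the second label's text embedding.
image_emb = text_embs[1] + 0.1 * rng.normal(size=512)

pred, sims = zero_shot_classify(image_emb, text_embs, labels)
print(pred)  # → pleural effusion
```

With a real CLIP checkpoint, `text_embs` would come from encoding one caption per label and `image_emb` from the image encoder; the decision rule is unchanged.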
Problem

Research questions and friction points this paper is trying to address.

Enhancing Chest X-ray Classification
Knowledge Injection in Cross-Modality Learning
Improving Automated Diagnostic Tools
Innovation

Methods, ideas, or system contributions that make the work stand out.

Set Theory-based knowledge injection
Fine-tuning CLIP on medical captions
Domain-specific LLMs for caption generation
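The set-theoretic framing above can be read as composing captions from the product of element sets (anatomical structures, pathological findings), with a granularity knob controlling how much knowledge each caption packs in. The element sets and composition rule below are invented for illustration; the paper's actual ontology and templates are not reproduced here:

```python
from itertools import product

# Illustrative element sets; a real system would draw these from a medical ontology.
anatomy = {"right lower lobe", "cardiac silhouette"}
findings = {"consolidation", "enlargement"}

def compose_captions(anatomy, findings, granularity=2):
    """Compose captions from the Cartesian product of anatomy and finding sets.

    `granularity` caps how many (structure, finding) pairs each caption joins,
    giving a crude knob for knowledge density per caption.
    """
    pairs = [f"{f} involving the {a}"
             for a, f in product(sorted(anatomy), sorted(findings))]
    captions = []
    for i in range(0, len(pairs), granularity):
        captions.append("Chest X-ray shows " + "; ".join(pairs[i:i + granularity]) + ".")
    return captions

captions = compose_captions(anatomy, findings)
for c in captions:
    print(c)
```

Raising `granularity` yields fewer, denser captions; lowering it yields many single-fact captions, which is one plausible way to realize the "controllable knowledge granularity" the abstract describes.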
Yang Yan
College of Information Science and Technology, Southwest Jiaotong University
Big data analysis and mining · multi-view learning · integrated learning and semi-supervised learning
B. Yue
Zhejiang University, Hangzhou, Zhejiang, China; The Second Affiliated Hospital Zhejiang University School of Medicine (SAHZU), Hangzhou, Zhejiang, China
Qiaxuan Li
Zhejiang University, Hangzhou, Zhejiang, China; The Second Affiliated Hospital Zhejiang University School of Medicine (SAHZU), Hangzhou, Zhejiang, China
Man Huang
Zhejiang University, Hangzhou, Zhejiang, China; The Second Affiliated Hospital Zhejiang University School of Medicine (SAHZU), Hangzhou, Zhejiang, China
Jingyu Chen
Huazhong University of Science and Technology
Computer Vision · Deep Learning · 3D Vision
Zhenzhong Lan
School of Engineering, Westlake University
NLP · Computer Vision · Multimedia