AI Summary
To address the prevalent hallucination and weak disease diagnostic capability in X-ray report generation, this paper proposes M3KG, a hierarchical, multi-granularity medical knowledge graph, and introduces disease-aware visual tokens alongside a dual-path cross-attention mechanism to achieve deep alignment between visual features (extracted by a Swin Transformer) and structured semantic knowledge (encoded by an R-GCN). A Q-Former bridges the cross-modal representations, significantly enhancing diagnostic reasoning while suppressing textual hallucination. Evaluated on multiple public benchmarks, our method substantially improves the clinical accuracy and disease description completeness of generated reports, outperforming existing state-of-the-art models. The implementation code is publicly available.
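The dual-path alignment described above can be sketched as two cross-attention passes, one where vision features query the knowledge-graph embeddings and one in the reverse direction. This is a minimal illustrative sketch in PyTorch, not the paper's implementation; the class name `DualPathCrossAttention`, the dimensions, and the head count are all hypothetical choices.

```python
import torch
import torch.nn as nn

class DualPathCrossAttention(nn.Module):
    """Hypothetical sketch of dual-path cross-attention that aligns
    vision features with knowledge-graph node features in both directions."""

    def __init__(self, dim=256, heads=4):
        super().__init__()
        # Path 1: vision tokens attend to knowledge embeddings.
        self.v2k = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Path 2: knowledge embeddings attend to vision tokens.
        self.k2v = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, vision, knowledge):
        # vision:    (B, Nv, dim), e.g. Swin Transformer patch features
        # knowledge: (B, Nk, dim), e.g. R-GCN node embeddings
        v_out, _ = self.v2k(vision, knowledge, knowledge)
        k_out, _ = self.k2v(knowledge, vision, vision)
        return v_out, k_out

# Usage with random stand-in features:
fuse = DualPathCrossAttention()
v = torch.randn(2, 49, 256)   # a batch of vision patch tokens
k = torch.randn(2, 12, 256)   # a batch of graph node embeddings
v_out, k_out = fuse(v, k)     # knowledge-enriched vision, vision-grounded knowledge
```

Each path keeps its query sequence length, so the fused vision and knowledge features can be passed on to downstream modules without reshaping.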
Abstract
X-ray medical report generation is one of the important applications of artificial intelligence in healthcare. With the support of large foundation models, the quality of medical report generation has improved significantly. However, challenges such as hallucination and weak disease diagnostic capability still persist. In this paper, we first construct a large-scale multi-modal medical knowledge graph (termed M3KG) from the ground-truth medical reports using GPT-4o. It contains 2,477 entities, 3 kinds of relations, 37,424 triples, and 6,943 disease-aware vision tokens for the CheXpert Plus dataset. We then sample it to obtain multi-granularity semantic graphs and use an R-GCN encoder for feature extraction. For the input X-ray image, we adopt the Swin Transformer to extract vision features, which interact with the knowledge features via cross-attention. The vision tokens are fed into a Q-Former, which retrieves the disease-aware vision tokens using another cross-attention. Finally, we adopt a large language model to map the semantic knowledge graph, the input X-ray image, and the disease-aware vision tokens into language descriptions. Extensive experiments on multiple datasets fully validate the effectiveness of our proposed knowledge graph and X-ray report generation framework. The source code of this paper will be released at https://github.com/Event-AHU/Medical_Image_Analysis.
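The Q-Former step in the abstract, where learnable queries attend to the image and then retrieve disease-aware vision tokens via another cross-attention, might be sketched as below. This is a simplified, hypothetical sketch, not the paper's code: the class name `QFormerRetriever`, the similarity-based top-k retrieval, and the small random `token_bank` standing in for the 6,943 mined disease-aware tokens are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QFormerRetriever(nn.Module):
    """Hypothetical sketch: learnable queries cross-attend to vision
    features, then retrieve the most similar disease-aware tokens
    from a fixed memory bank."""

    def __init__(self, dim=256, num_queries=8, heads=4, bank_size=100):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Stand-in for the bank of disease-aware vision tokens mined offline.
        self.register_buffer("token_bank", torch.randn(bank_size, dim))

    def forward(self, vision, top_k=5):
        # vision: (B, N, dim) image patch features
        B = vision.size(0)
        q = self.queries.unsqueeze(0).expand(B, -1, -1)
        q, _ = self.cross_attn(q, vision, vision)  # (B, num_queries, dim)
        # Cosine similarity between query outputs and every bank token.
        sim = F.normalize(q, dim=-1) @ F.normalize(self.token_bank, dim=-1).T
        idx = sim.mean(dim=1).topk(top_k, dim=-1).indices  # (B, top_k)
        return self.token_bank[idx]                        # (B, top_k, dim)

# Usage with random stand-in image features:
retriever = QFormerRetriever()
tokens = retriever(torch.randn(2, 49, 256))  # (2, 5, 256) retrieved tokens
```

The retrieved tokens, together with the graph features and image features, would then be projected into the language model's embedding space for report generation.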