🤖 AI Summary
Early breast cancer screening relies on mammography, yet existing computer-aided diagnosis (CAD) systems struggle to effectively fuse multimodal data, particularly imaging and clinical text, and often require prior patient history, limiting clinical deployability. This work proposes a novel vision-language fusion framework that jointly analyzes 2D mammograms and text descriptions automatically generated from structured metadata, eliminating dependence on historical clinical records. Methodologically, we introduce a lightweight clinical tokenization module for text processing and tightly couple convolutional neural networks (ConvNets) with language representation models to enable end-to-end multimodal feature alignment and fusion. Evaluated on an international screening cohort, our system achieves statistically significant improvements over unimodal baselines in both cancer detection and calcification identification. It demonstrates robust generalization across diverse populations and satisfies key requirements for real-world clinical deployment.
📝 Abstract
Breast cancer remains the most commonly diagnosed malignancy among women in the developed world, and early detection through mammography screening plays a pivotal role in reducing mortality. While computer-aided diagnosis (CAD) systems have shown promise in assisting radiologists, existing approaches face critical limitations in clinical deployment, particularly in interpreting nuanced multi-modal data and in their reliance on prior clinical history, which is often unavailable at screening time. This study introduces a novel framework that synergistically combines visual features from 2D mammograms with structured textual descriptors derived from easily accessible clinical metadata and synthesized radiological reports, integrated through innovative tokenization modules. We demonstrate that strategic coupling of convolutional neural networks (ConvNets) with language representations outperforms vision transformer-based models while handling high-resolution images and enabling practical deployment across diverse populations. Evaluated on screening mammograms from a multi-national cohort, our multi-modal approach surpasses unimodal baselines in both cancer detection and calcification identification. The proposed method establishes a new paradigm for clinically viable VLM-based CAD systems that jointly leverage imaging data and contextual patient information through effective fusion mechanisms.