Breast Cancer VLMs: Clinically Practical Vision-Language Train-Inference Models

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Early breast cancer screening relies on mammography, yet existing computer-aided diagnosis (CAD) systems struggle to effectively fuse multimodal data—particularly imaging and clinical text—and often require prior patient history, limiting clinical deployability. This work proposes a novel vision-language fusion framework that jointly analyzes 2D mammograms and text descriptions automatically generated from structured metadata, eliminating dependence on historical clinical records. Methodologically, we introduce a lightweight clinical tokenization module for text processing and tightly couple convolutional neural networks (ConvNets) with language representation models to enable end-to-end multimodal feature alignment and fusion. Evaluated on an international screening cohort, our system achieves statistically significant improvements over unimodal baselines in both cancer detection and calcification identification tasks. It demonstrates robust generalization across diverse populations and satisfies key requirements for real-world clinical deployment.
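The summary states that text descriptions are generated automatically from structured metadata rather than taken from historical clinical records. As a minimal sketch of that idea (the actual field names and templates used in the paper are not given here; `age`, `breast_density`, `view`, and `laterality` are illustrative assumptions), such a generator could look like:

```python
# Hypothetical sketch: turn structured screening metadata into a short
# text description for the language branch, in place of prior reports.
# All field names and phrasings are illustrative assumptions, not the
# paper's actual template.
def metadata_to_text(meta: dict) -> str:
    parts = []
    if "age" in meta:
        parts.append(f"Patient age {meta['age']}.")
    if "breast_density" in meta:
        parts.append(f"Breast density category {meta['breast_density']}.")
    if "view" in meta:
        parts.append(f"Mammographic view: {meta['view']}.")
    if "laterality" in meta:
        parts.append(f"Laterality: {meta['laterality']} breast.")
    return " ".join(parts)

desc = metadata_to_text(
    {"age": 52, "breast_density": "C", "view": "MLO", "laterality": "left"}
)
```

The resulting string would then be tokenized by the lightweight clinical tokenization module and encoded alongside the mammogram.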

📝 Abstract
Breast cancer remains the most commonly diagnosed malignancy among women in the developed world. Early detection through mammography screening plays a pivotal role in reducing mortality. While computer-aided diagnosis (CAD) systems have shown promise in assisting radiologists, existing approaches face critical limitations in clinical deployment, particularly in handling the nuanced interpretation of multi-modal data and in their reliance on prior clinical history. This study introduces a novel framework that synergistically combines visual features from 2D mammograms with structured textual descriptors derived from readily available clinical metadata and synthesized radiological reports, integrated through innovative tokenization modules. We demonstrate that strategic integration of convolutional neural networks (ConvNets) with language representations outperforms vision transformer-based models while handling high-resolution images and enabling practical deployment across diverse populations. Evaluated on screening mammograms from a multi-national cohort, our multi-modal approach outperforms unimodal baselines in both cancer detection and calcification identification. The proposed method establishes a new paradigm for developing clinically viable VLM-based CAD systems that effectively leverage imaging data and contextual patient information through effective fusion mechanisms.
Problem

Research questions and friction points this paper is trying to address.

Improves breast cancer detection using mammography and clinical data
Overcomes limitations in multi-modal data interpretation for CAD systems
Enables practical clinical deployment without requiring prior patient history
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines mammogram visuals with clinical metadata
Integrates ConvNets with language representations
Uses tokenization modules for multimodal data fusion
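The fusion idea in these bullets can be sketched abstractly: features from an image backbone and a text encoder are combined and passed to a classification head. A minimal late-fusion-by-concatenation sketch follows; the feature dimensions, the random stand-ins for the two encoders, and the linear head are all assumptions for illustration, not the paper's actual architecture (which couples the ConvNet and language model more tightly for end-to-end alignment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the two encoders: in the paper these would be a ConvNet
# image backbone and a language model over the generated text; here we
# mock their outputs with fixed-dimension feature vectors (dims assumed).
img_feat = rng.standard_normal(512)   # assumed ConvNet embedding
txt_feat = rng.standard_normal(256)   # assumed text embedding

def fuse_and_score(img_feat, txt_feat, w, b):
    """Late fusion by concatenation, then a linear head with a sigmoid."""
    fused = np.concatenate([img_feat, txt_feat])   # shape (768,)
    logit = fused @ w + b
    return 1.0 / (1.0 + np.exp(-logit))            # malignancy probability

w = rng.standard_normal(768) * 0.01                # untrained toy weights
prob = fuse_and_score(img_feat, txt_feat, w, 0.0)
```

In a trained system, `w` and `b` would be learned jointly with both encoders so that image and text features align in a shared space.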
Authors
Shunjie-Fabian Zheng (Department of Medicine I, LMU University Hospital, LMU Munich, Germany)
Hyeonjun Lee (Lunit Inc.)
Thijs Kooi (Lunit Inc.)
Ali Diba (Lunit Inc.)

Keywords: Machine Learning, Medical Image Analysis, Computer-aided diagnosis