AGA: An adaptive group alignment framework for structured medical cross-modal representation learning

📅 2025-07-31
🤖 AI Summary
Existing medical vision-language pretraining methods typically flatten clinical reports into unstructured tokens and rely heavily on large-scale, hard-to-obtain negative samples, limiting their applicability to small-scale medical datasets. To address this, we propose Adaptive Group Alignment (AGA), a framework that models structured correspondences between image regions and fine-grained semantic units in clinical reports via bidirectional grouping. Our key contributions are: (1) an instance-aware dynamic threshold gating module for adaptive grouping of visual and linguistic features; (2) an instance-aware group alignment loss that requires no external negative samples; and (3) efficient bidirectional cross-modal alignment based on a sparse similarity matrix. Extensive experiments on multiple public and private medical benchmarks demonstrate that AGA significantly outperforms state-of-the-art baselines on image–text retrieval and classification tasks. Moreover, it supports both zero-shot transfer and downstream fine-tuning, establishing new performance frontiers for resource-constrained medical multimodal learning.

📝 Abstract
Learning medical visual representations from paired images and reports is a promising direction in representation learning. However, current vision-language pretraining methods in the medical domain often simplify clinical reports into single entities or fragmented tokens, ignoring their inherent structure. In addition, contrastive learning frameworks typically depend on large quantities of hard negative samples, which are impractical to obtain for small-scale medical datasets. To tackle these challenges, we propose Adaptive Group Alignment (AGA), a new framework that captures structured semantics from paired medical images and reports. AGA introduces a bidirectional grouping mechanism based on a sparse similarity matrix. For each image–report pair, we compute fine-grained similarities between text tokens and image patches. Each token selects its top-matching patches to form a visual group, and each patch selects its most related tokens to form a language group. To enable adaptive grouping, we design two threshold gating modules, the Language Grouped Threshold Gate and the Vision Grouped Threshold Gate, which learn grouping thresholds dynamically. Group representations are computed as similarity-weighted averages of the selected features. To align each token with its group representation, we introduce an Instance-Aware Group Alignment loss that operates within each image–text pair, removing the need for external negatives. Finally, a Bidirectional Cross-modal Grouped Alignment module further sharpens fine-grained alignment between visual and linguistic group representations. Extensive experiments on public and private datasets show that our method achieves strong performance on image–text retrieval and classification tasks under both fine-tuning and zero-shot settings.
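The grouping mechanism described in the abstract can be sketched in a few lines. This is a minimal NumPy illustration of one direction (tokens selecting patches), not the paper's implementation: the learned Language Grouped Threshold Gate is stood in for by a per-token mean-similarity heuristic, and all array names are assumptions.

```python
import numpy as np

# Illustrative sketch of AGA-style token-to-patch grouping (all names hypothetical).
# tokens: (T, d) text token features; patches: (P, d) image patch features.
rng = np.random.default_rng(0)
T, P, d = 4, 6, 8
tokens = rng.standard_normal((T, d))
patches = rng.standard_normal((P, d))

# Fine-grained similarity between every token and every patch.
sim = tokens @ patches.T                      # (T, P)

# The paper learns an instance-dependent threshold via a gating module;
# here a per-token mean similarity stands in as a fixed heuristic.
tau_lang = sim.mean(axis=1, keepdims=True)    # (T, 1)
mask = sim > tau_lang                         # sparse selection: each token keeps its top patches

# Group representation per token: similarity-weighted average of the selected patches.
weights = np.where(mask, sim, -np.inf)
weights = np.exp(weights - weights.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)
visual_groups = weights @ patches             # (T, d): one visual group per token

print(visual_groups.shape)                    # prints (4, 8)
```

The reverse direction (patches selecting tokens to form language groups) would mirror this with the transposed similarity matrix and a separate gate.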
Problem

Research questions and friction points this paper is trying to address.

Capturing structured semantics from medical images and reports
Eliminating the need for hard negative samples in contrastive learning
Enhancing fine-grained alignment between visual and linguistic groups
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive grouping with sparse similarity matrix
Dynamic threshold gating for flexible grouping
Instance-aware alignment without external negatives
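The "no external negatives" idea can be illustrated with a toy objective that pulls each token toward its own group representation within a single image-text pair. This is a hedged sketch, not the paper's actual loss; the function name and the mean (1 − cosine) form are assumptions.

```python
import numpy as np

def instance_group_alignment_loss(tokens, groups, eps=1e-8):
    """Mean (1 - cosine similarity) between each token and its matched
    group representation; computed within one pair, so no negatives needed."""
    t = tokens / (np.linalg.norm(tokens, axis=1, keepdims=True) + eps)
    g = groups / (np.linalg.norm(groups, axis=1, keepdims=True) + eps)
    return float(np.mean(1.0 - np.sum(t * g, axis=1)))

tokens = np.eye(3)
loss_aligned = instance_group_alignment_loss(tokens, tokens)      # identical vectors
loss_misaligned = instance_group_alignment_loss(tokens, -tokens)  # opposite vectors
print(loss_aligned, loss_misaligned)  # ≈ 0.0 and ≈ 2.0
```

Because the objective compares each token only with its own group, the batch-size and negative-mining pressures of standard contrastive losses do not arise.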
Wei Li
School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, China 611756
Xun Gong
School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, China 611756
Jiao Li
Columbia University
Applied Math, Machine Learning, Finance, Climate Change
Xiaobin Sun
Department of Gastroenterology, The Third People’s Hospital of Chengdu, Chengdu, China 610031