🤖 AI Summary
Existing medical vision-language pretraining methods typically flatten clinical reports into unstructured tokens and rely heavily on large-scale, hard-to-obtain negative samples, limiting their applicability to small-scale medical datasets. To address this, we propose Adaptive Grouped Alignment (AGA), a framework that models structured correspondences between image regions and fine-grained semantic units in clinical reports via bidirectional grouping. Our key contributions are: (1) instance-aware dynamic threshold gating modules for adaptive grouping of visual and linguistic features; (2) an instance-aware group alignment loss that requires no external negative samples; and (3) efficient bidirectional cross-modal alignment based on a sparse similarity matrix. Extensive experiments on multiple public and private medical benchmarks show that AGA outperforms state-of-the-art baselines on image-text retrieval and classification tasks, under both zero-shot transfer and downstream fine-tuning, making it well suited to resource-constrained medical multimodal learning.
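The within-pair alignment loss in contribution (2) can be illustrated with a minimal sketch. The function name, the cosine-similarity formulation, and the mean reduction below are illustrative assumptions, not the paper's exact objective; the point is only that the loss compares each token against its own group representation inside a single image-report pair, so no external negatives enter the computation.

```python
import numpy as np

def instance_group_alignment_loss(tokens, group_reps):
    """Hypothetical sketch of an instance-level alignment loss.

    Each token embedding is pulled toward its matched group
    representation via cosine similarity, computed entirely within
    one image-report pair, so no external negative samples are
    needed. The paper's actual formulation may differ.
    """
    # L2-normalize both sides so the dot product is a cosine.
    t = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    g = group_reps / np.linalg.norm(group_reps, axis=1, keepdims=True)
    cos = np.sum(t * g, axis=1)        # per-token cosine similarity
    return float(np.mean(1.0 - cos))   # 0 when perfectly aligned

# Identical token and group vectors give zero loss.
loss = instance_group_alignment_loss(np.eye(3), np.eye(3))  # → 0.0
```

Because every term is computed inside one pair, the loss scales to small datasets where mining large hard-negative pools is infeasible.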
📝 Abstract
Learning medical visual representations from paired images and reports is a promising direction in representation learning. However, current vision-language pretraining methods in the medical domain often simplify clinical reports into single entities or fragmented tokens, ignoring their inherent structure. In addition, contrastive learning frameworks typically depend on large quantities of hard negative samples, a requirement that is impractical for small-scale medical datasets. To tackle these challenges, we propose Adaptive Grouped Alignment (AGA), a new framework that captures structured semantics from paired medical images and reports. AGA introduces a bidirectional grouping mechanism based on a sparse similarity matrix. For each image-report pair, we compute fine-grained similarities between text tokens and image patches. Each token selects its top-matching patches to form a visual group, and each patch selects its most related tokens to form a language group. To enable adaptive grouping, we design two threshold gating modules, called Language Grouped Threshold Gate and Vision Grouped Threshold Gate, which learn grouping thresholds dynamically. Group representations are computed as weighted averages based on similarity scores. To align each token with its group representation, we introduce an Instance Aware Group Alignment loss that operates within each image-report pair, removing the need for external negatives. Finally, a Bidirectional Cross-modal Grouped Alignment module is applied to enhance fine-grained alignment between visual and linguistic group representations. Extensive experiments on public and private datasets show that our method achieves strong performance on image-text retrieval and classification tasks under both fine-tuning and zero-shot settings.
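The token-to-patch direction of the grouping mechanism above can be sketched as follows. This is a simplified illustration, not the paper's implementation: the learned Language/Vision Grouped Threshold Gates are stood in for by a fixed scalar `threshold`, and the symmetric patch-to-token direction would apply the same steps to the transposed similarity matrix.

```python
import numpy as np

def group_representations(tokens, patches, threshold=0.1):
    """Sketch of one direction of bidirectional grouping: each text
    token gathers its most similar image patches into a visual group,
    represented as a similarity-weighted average of those patches.

    `threshold` is a fixed stand-in for the learned gating module.
    """
    # Fine-grained cosine similarity matrix: (num_tokens, num_patches).
    t = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    p = patches / np.linalg.norm(patches, axis=1, keepdims=True)
    sim = t @ p.T

    # Sparsify: keep only similarities above the gating threshold.
    mask = sim > threshold
    sparse_sim = np.where(mask, sim, 0.0)

    # Normalize the surviving weights per token, then take the
    # weighted average of patch features as the group representation.
    row_sums = np.clip(sparse_sim.sum(axis=1, keepdims=True), 1e-8, None)
    weights = sparse_sim / row_sums
    groups = weights @ patches          # (num_tokens, dim)
    return groups, mask

# Toy example: 3 tokens and 5 patches with 4-dim features.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))
patches = rng.normal(size=(5, 4))
groups, mask = group_representations(tokens, patches)
```

Running the same routine with `tokens` and `patches` swapped yields the language groups, giving both halves of the bidirectional scheme from one sparse similarity matrix.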