🤖 AI Summary
This work addresses composed vision-language retrieval for skin cancer diagnosis and treatment, where a query pairs a lesion image with clinical text, by proposing a Transformer-based global–local joint alignment framework. The method integrates global semantic supervision with multiple spatial attention masks and incorporates a clinical prior–driven convex weighting strategy to improve discriminative region alignment and the interpretability of the similarity computation. Experimental results on the Derm7pt dataset show that the proposed approach consistently outperforms state-of-the-art methods, enabling efficient and accurate case retrieval to support clinical decision-making, medical education, and quality assurance.
📝 Abstract
Medical image retrieval aims to identify clinically relevant lesion cases to support diagnostic decision-making, education, and quality control. In practice, retrieval queries often combine a reference lesion image with textual descriptors such as dermoscopic features. We study composed vision-language retrieval for skin cancer, where each query consists of an image–text pair and the database contains biopsy-confirmed, multi-class disease cases. We propose a Transformer-based framework that learns hierarchical composed-query representations and performs joint global–local alignment between queries and candidate images. Local alignment aggregates discriminative regions via multiple spatial attention masks, while global alignment provides holistic semantic supervision. The final similarity is computed through a convex, domain-informed weighting that emphasizes clinically salient local evidence while preserving global consistency. Experiments on the public Derm7pt dataset demonstrate consistent improvements over state-of-the-art methods. The proposed framework enables efficient access to relevant medical records and supports practical clinical deployment.
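As a rough illustration of the similarity computation described in the abstract, the PyTorch sketch below pools candidate-image features with several learned spatial attention masks, scores the composed query against the best-matching local region and against the global embedding, and fuses the two scores with a convex weight. All names (`GlobalLocalSimilarity`, `mask_heads`), the max-over-regions pooling, and the fixed weight `alpha` are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalLocalSimilarity(nn.Module):
    """Hypothetical sketch of convex-weighted global-local scoring:
    K spatial attention masks aggregate local evidence from the
    candidate feature map, and the final score is a convex combination
    of local and global cosine similarities."""

    def __init__(self, dim: int, num_masks: int = 4, alpha: float = 0.6):
        super().__init__()
        assert 0.0 <= alpha <= 1.0, "convex weight must lie in [0, 1]"
        self.alpha = alpha  # assumed clinically informed weight on local evidence
        # One 1x1 conv channel per spatial attention mask.
        self.mask_heads = nn.Conv2d(dim, num_masks, kernel_size=1)

    def forward(self, q_global, img_global, img_map):
        """
        q_global:   (B, D) composed query embedding (image + text)
        img_global: (B, D) global candidate-image embedding
        img_map:    (B, D, H, W) spatial candidate feature map
        returns:    (B,) similarity scores
        """
        # K attention masks, each normalized over the H*W locations.
        masks = F.softmax(self.mask_heads(img_map).flatten(2), dim=-1)  # (B, K, HW)
        feats = img_map.flatten(2)                                      # (B, D, HW)
        # Attention-weighted pooling -> K local region descriptors.
        local = torch.einsum("bkn,bdn->bkd", masks, feats)              # (B, K, D)
        q = F.normalize(q_global, dim=-1)
        # Local score: best-matching region against the composed query.
        s_local = torch.einsum("bd,bkd->bk", q,
                               F.normalize(local, dim=-1)).max(dim=1).values
        # Global score: holistic cosine similarity.
        s_global = (q * F.normalize(img_global, dim=-1)).sum(dim=-1)
        # Convex fusion: emphasize local evidence, preserve global consistency.
        return self.alpha * s_local + (1.0 - self.alpha) * s_global
```

Under this reading, ranking a database amounts to computing this score for each candidate and sorting; the convex weight makes the contribution of local versus global evidence explicit, which is where the claimed interpretability of the similarity comes from.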