Revisiting Medical Image Retrieval via Knowledge Consolidation

📅 2025-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper targets three critical challenges in medical image retrieval: hash representation degradation, weak out-of-distribution (OOD) robustness, and bias in positive/negative sample construction. It introduces a knowledge-consolidation paradigm built on Depth-aware Representation Fusion (DaRF) for adaptive multi-level feature integration, Structure-aware Contrastive Hashing (SCH), and image-fingerprint-driven dynamic pair construction, and it unifies content-guided ranking with OOD detection in a single framework, the first such integration. The approach markedly improves hash-code discriminability and enables safety-aware, clinically interpretable retrieval. Evaluated on anatomical radiology datasets, it achieves statistically significant mAP gains of 5.6–38.9% (p < 0.05) while supporting high-accuracy OOD identification and clinician-controllable recommendation.

📝 Abstract
As artificial intelligence and digital medicine increasingly permeate healthcare systems, robust governance frameworks are essential to ensure ethical, secure, and effective implementation. In this context, medical image retrieval becomes a critical component of clinical data management, playing a vital role in decision-making and safeguarding patient information. Existing methods usually learn hash functions using bottleneck features, which fail to produce representative hash codes from blended embeddings. Although contrastive hashing has shown superior performance, current approaches often treat image retrieval as a classification task, using category labels to create positive/negative pairs. Moreover, many methods fail to address the out-of-distribution (OOD) issue when models encounter external OOD queries or adversarial attacks. In this work, we propose a novel method to consolidate knowledge of hierarchical features and optimisation functions. We formulate the knowledge consolidation by introducing Depth-aware Representation Fusion (DaRF) and Structure-aware Contrastive Hashing (SCH). DaRF adaptively integrates shallow and deep representations into blended features, and SCH incorporates image fingerprints to enhance the adaptability of positive/negative pairings. These blended features further facilitate OOD detection and content-based recommendation, contributing to a secure AI-driven healthcare environment. Moreover, we present a content-guided ranking to improve the robustness and reproducibility of retrieval results. Our comprehensive assessments demonstrate that the proposed method could effectively recognise OOD samples and significantly outperform existing approaches in medical image retrieval (p<0.05). In particular, our method achieves a 5.6-38.9% improvement in mean Average Precision on the anatomical radiology dataset.
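The fusion idea described above (DaRF adaptively blending shallow and deep representations) can be sketched as a simple gated combination. The function names and the softmax-gated form below are illustrative assumptions, not the paper's actual module:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_features(shallow, deep, gate_logits):
    """Adaptively blend shallow and deep feature vectors with gate
    weights (hypothetical stand-in for the paper's DaRF module).
    Assumes both levels were already projected to a common dimension."""
    w_shallow, w_deep = softmax(np.asarray(gate_logits, dtype=float))
    return w_shallow * shallow + w_deep * deep

shallow = np.array([1.0, 0.0, 2.0])   # e.g. texture/edge cues
deep    = np.array([0.0, 1.0, 1.0])   # e.g. semantic cues
blended = fuse_features(shallow, deep, gate_logits=[0.0, 0.0])
# equal logits -> equal weights -> element-wise average
```

With equal gate logits the fusion reduces to an element-wise average; in an actual model the gate logits would be learned, typically per sample, so the blend adapts to each input.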
Problem

Research questions and friction points this paper is trying to address.

Hash representation degradation: bottleneck features fail to yield representative hash codes from blended embeddings.
Weak robustness when models face external out-of-distribution (OOD) queries or adversarial attacks.
Bias in positive/negative pair construction when retrieval is treated as a classification task.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Depth-aware Representation Fusion for feature integration
Structure-aware Contrastive Hashing for pair adaptability
Content-guided ranking for robust retrieval results
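The contrastive-hashing contribution can be illustrated with a minimal InfoNCE-style loss over hash embeddings. Here the positive/negative pairing is supplied explicitly, whereas the paper's SCH derives it from image fingerprints; all names and the specific loss form are assumptions for illustration:

```python
import numpy as np

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def contrastive_hash_loss(anchor, positive, negatives, tau=0.5):
    """InfoNCE-style loss over continuous hash embeddings (a sketch).
    In SCH, positives/negatives would be selected via image fingerprints
    rather than category labels."""
    pos = np.exp(cosine_sim(anchor, positive) / tau)
    negs = sum(np.exp(cosine_sim(anchor, n) / tau) for n in negatives)
    return -np.log(pos / (pos + negs))

anchor   = np.array([ 1.0,  1.0, -1.0])
positive = np.array([ 1.0,  1.0, -1.0])   # same fingerprint cluster
negative = np.array([-1.0, -1.0,  1.0])   # dissimilar scan
loss = contrastive_hash_loss(anchor, positive, [negative])
```

A positive that is closer to the anchor, or negatives that are farther away, drive the loss toward zero; fingerprint-driven pairing changes which samples play those roles, rather than the loss itself.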
Yang Nan
Huichi Zhou
University College London
AI4Science
Xiaodan Xing
G. Papanastasiou
Lei Zhu
Zhifan Gao
Sun Yat-sen University
Medical Image Analysis · Computer Vision · Machine Learning
Alejandro F. Frangi
Guang Yang