🤖 AI Summary
Medical AI assistants still struggle with low accuracy on multimodal content and insufficient clinical validation. To address these challenges, the authors propose RCMed, a full-stack medical AI assistant built around a bidirectional vision–language closed-loop alignment mechanism and a color-region description strategy, which jointly represent shape, spatial location, and textual semantics across scales, substantially improving contextual understanding of irregular lesions and subtle anatomical boundaries. Trained on 20 million image–mask–description triplets, RCMed integrates hierarchical vision–language grounding, pixel-level semantics-guided attention, and a self-reinforcing multimodal correlation mechanism. It supports nine imaging modalities and 165 clinical tasks, achieving a 23.5% relative improvement in cell segmentation from microscopy images over prior methods. External validation spans 20 clinically significant cancer types, with state-of-the-art performance on multiple metrics and exceptional generalization in real-world clinical settings.
📝 Abstract
Medical AI assistants support doctors in disease diagnosis, medical image analysis, and report generation. However, they still face significant challenges in clinical use, including limited accuracy with multimodal content and insufficient validation in real-world settings. We propose RCMed, a full-stack AI assistant that improves multimodal alignment in both input and output, enabling precise anatomical delineation, accurate localization, and reliable diagnosis through hierarchical vision-language grounding. A self-reinforcing correlation mechanism allows visual features to inform language context, while language semantics guide pixel-wise attention, forming a closed loop that refines both modalities. This correlation is enhanced by a color-region description strategy, translating anatomical structures into semantically rich text to learn shape-location-text relationships across scales. Trained on 20 million image-mask-description triplets, RCMed achieves state-of-the-art precision in contextualizing irregular lesions and subtle anatomical boundaries, excelling in 165 clinical tasks across nine modalities. It achieves a 23.5% relative improvement in cell segmentation from microscopy images over prior methods. RCMed's strong vision-language alignment enables exceptional generalization, with state-of-the-art performance in external validation across 20 clinically significant cancer types, including novel tasks. This work demonstrates how integrated multimodal models capture fine-grained patterns, enabling human-level interpretation in complex scenarios and advancing human-centric AI healthcare.
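The closed loop described in the abstract, in which visual features inform language context while language semantics guide pixel-wise attention, can be sketched as alternating cross-attention updates between two feature sets. This is a minimal NumPy illustration of the general idea, not RCMed's actual architecture; the function names, the interpolation weight `lr`, and the step count are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(queries, keys_values):
    """Update each query row toward the key/value rows it attends to."""
    scores = queries @ keys_values.T / np.sqrt(queries.shape[-1])
    return softmax(scores) @ keys_values

def closed_loop_align(visual, text, steps=3, lr=0.5):
    """Alternately refine visual and text features via cross-attention.

    visual: (num_patches, dim) image-patch features
    text:   (num_tokens, dim) text-token features
    Each step, language semantics re-weight visual features and the
    refined visual features in turn re-weight the language features.
    """
    v, t = visual.copy(), text.copy()
    for _ in range(steps):
        v = (1 - lr) * v + lr * cross_attend(v, t)  # language guides pixels
        t = (1 - lr) * t + lr * cross_attend(t, v)  # vision informs language
    return v, t

rng = np.random.default_rng(0)
v, t = closed_loop_align(rng.normal(size=(16, 32)), rng.normal(size=(8, 32)))
```

In a trained model the analogous updates would be learned attention layers inside a reinforcement-style training loop; the sketch only shows the bidirectional refinement structure.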