AI Summary
Current AI systems for medical imaging lack interpretable, multimodal agents capable of visual-language understanding and interactive reasoning. To address this, we propose AURA, the first multimodal intelligent agent specifically designed for medical image understanding, reasoning, and annotation. AURA pioneers the integration of large language model (LLM)-driven agent architectures into medical imaging, enabling dynamic hypothesis testing, context-aware explanation generation, and interactive clinical decision support. Built on Qwen-32B, it incorporates modules for anatomical segmentation, pathology localization, counterfactual image generation, and pixel-level discrepancy assessment. Experimental evaluation demonstrates substantial improvements in diagnostic relevance (+18.7% clinical consistency) and visual interpretability (+32% physician readability score). By bridging semantic reasoning with pixel-level analysis, AURA advances clinically aligned, trustworthy AI-assisted diagnosis and interpretation.
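To make the agent loop concrete, here is a minimal sketch of how an LLM-driven controller might dispatch structured tool calls to modules like those listed above. Everything in it (the `Tool` dataclass, `segment_anatomy`, `agent_step`) is an illustrative assumption, not AURA's released API.

```python
# Hypothetical sketch of an AURA-style tool-dispatch loop; names and
# signatures are assumptions for illustration, not the paper's code.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Tool:
    name: str
    description: str             # shown to the LLM so it can pick tools
    run: Callable[[dict], dict]  # executes the tool on parsed arguments

def segment_anatomy(args: dict) -> dict:
    # Placeholder: a real implementation would invoke an anatomy-segmentation
    # model and return a mask for the requested structure.
    return {"mask": f"<mask for {args['structure']}>"}

TOOLS: Dict[str, Tool] = {
    "segment_anatomy": Tool(
        name="segment_anatomy",
        description="Localize an anatomical structure in the image.",
        run=segment_anatomy,
    ),
}

def agent_step(action: dict) -> dict:
    """Execute one LLM-chosen action and return its observation."""
    tool = TOOLS[action["tool"]]
    return tool.run(action["arguments"])

# The LLM (e.g., Qwen-32B) emits a structured action; the observation is
# fed back into the conversation so the agent can refine its hypothesis.
obs = agent_step({"tool": "segment_anatomy",
                  "arguments": {"structure": "left lung"}})
print(obs)
```

The design point this sketch tries to capture is the separation of concerns: the LLM only chooses actions, while every pixel-level operation runs in a dedicated tool whose output is fed back as an observation, which is what makes hypothesis testing and interactive refinement possible.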
Abstract
Recent advancements in Large Language Models (LLMs) have catalyzed a paradigm shift from static prediction systems to agentic AI capable of reasoning, interacting with tools, and adapting to complex tasks. While LLM-based agentic systems have shown promise across many domains, their application to medical imaging remains in its infancy. In this work, we introduce AURA, the first visual linguistic explainability agent designed specifically for comprehensive analysis, explanation, and evaluation of medical images. By enabling dynamic interactions, contextual explanations, and hypothesis testing, AURA represents a significant step toward more transparent, adaptable, and clinically aligned AI systems, and it highlights the promise of agentic AI in transforming medical image analysis from static prediction to interactive decision support. Leveraging Qwen-32B, an LLM-based architecture, AURA integrates a modular toolbox comprising: (i) a segmentation suite with phrase grounding, pathology segmentation, and anatomy segmentation to localize clinically meaningful regions; (ii) a counterfactual image-generation module that supports reasoning through image-level explanations; and (iii) a set of evaluation tools, including pixel-wise difference-map analysis, classification, and other state-of-the-art components, to assess diagnostic relevance and visual interpretability.
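As a rough illustration of the pixel-wise difference-map analysis in (iii), the snippet below compares an image with its counterfactual and reports how localized the edit is. The array shapes, the [0, 1] intensity range, and the fixed threshold are assumptions made for this sketch, not choices documented in the paper.

```python
# Minimal difference-map sketch; threshold and shapes are illustrative.
import numpy as np

def difference_map(image: np.ndarray, counterfactual: np.ndarray,
                   threshold: float = 0.1) -> np.ndarray:
    """Return a binary map of pixels the counterfactual changed.

    Both inputs are float arrays in [0, 1] with identical shapes.
    """
    diff = np.abs(image.astype(np.float32) - counterfactual.astype(np.float32))
    return diff > threshold

# Toy example: a counterfactual that edits only a 40x40 patch of a
# 224x224 image, as a well-localized explanation should.
rng = np.random.default_rng(0)
img = rng.random((224, 224), dtype=np.float32)
cf = img.copy()
cf[80:120, 80:120] = np.clip(cf[80:120, 80:120] + 0.3, 0.0, 1.0)

changed = difference_map(img, cf)
print(f"fraction of pixels changed: {changed.mean():.1%}")  # small => localized
```

A map like this lets the evaluation stage check whether a counterfactual edit stays confined to the clinically relevant region instead of altering the image globally.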