3DMedAgent: Unified Perception-to-Understanding for 3D Medical Analysis

📅 2026-02-20
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of bridging low-level perception and high-level clinical reasoning in 3D medical image analysis, a task hindered by the inherent limitations of prevailing 2D-based multimodal large language models (MLLMs) in handling volumetric data. To overcome this, the authors propose 3DMedAgent, a unified intelligent agent that orchestrates heterogeneous vision and text tools into a multi-step reasoning pipeline: from global 3D volumes, to local informative 2D slices, and finally to structured textual outputs. Notably, 3DMedAgent enables off-the-shelf 2D MLLMs to perform general-purpose 3D medical analysis without any 3D fine-tuning, aided by a long-term structured memory mechanism that supports evidence-driven, query-adaptive reasoning. Extensive experiments across more than 40 tasks show that it consistently outperforms existing general-purpose, medical-specific, and 3D-specialized models. The study also introduces DeepChestVQA, a new benchmark for evaluating integrated perception-to-understanding capabilities in 3D chest imaging.
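
The plan → tool call → memory write cycle described above can be pictured with a short sketch. The Python below is a hedged illustration, not the authors' implementation: every name (`StructuredMemory`, `plan_next_step`, the `tools` registry) is hypothetical and only shows the shape of a query-adaptive agent loop that accumulates tool outputs in a long-term structured memory.

```python
# Minimal sketch of an agent loop with structured memory (hypothetical API,
# not the 3DMedAgent codebase): an MLLM-backed planner picks one tool per
# step, and each tool's output is appended to a memory that later steps query.
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class MemoryEntry:
    step: int
    tool: str
    evidence: Any  # e.g. a mask summary, a slice index, or a textual finding

@dataclass
class StructuredMemory:
    entries: list[MemoryEntry] = field(default_factory=list)

    def add(self, step: int, tool: str, evidence: Any) -> None:
        self.entries.append(MemoryEntry(step, tool, evidence))

    def as_context(self) -> str:
        # Serialize the accumulated evidence so a 2D MLLM can condition on it.
        return "\n".join(f"[step {e.step}] {e.tool}: {e.evidence}" for e in self.entries)

def run_agent(query: str, volume: Any,
              plan_next_step: Callable, tools: dict[str, Callable],
              max_steps: int = 8) -> str:
    """Query-adaptive loop: plan -> call a tool -> store evidence -> repeat."""
    memory = StructuredMemory()
    for step in range(max_steps):
        decision = plan_next_step(query, memory.as_context())  # MLLM-backed planner
        if decision is None:  # planner judges the gathered evidence sufficient
            break
        tool_name, kwargs = decision
        evidence = tools[tool_name](volume=volume, **kwargs)
        memory.add(step, tool_name, evidence)
    # The final answer is generated from the query plus all structured evidence.
    return tools["answer"](query=query, context=memory.as_context())

# Toy usage with stub tools (real tools would be segmenters, slice selectors, ...):
stub_tools = {
    "locate_lesion": lambda volume, organ: f"lesion candidate in {organ}, slice 42",
    "answer": lambda query, context: f"Answer to {query!r} given:\n{context}",
}
plan = lambda q, ctx: None if ctx else ("locate_lesion", {"organ": "left lung"})
print(run_agent("Is there a nodule?", volume=None, plan_next_step=plan, tools=stub_tools))
```

In this framing, the memory's `as_context()` serialization is what keeps later reasoning steps evidence-driven instead of forcing the model to re-perceive the volume at every step.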

๐Ÿ“ Abstract
3D CT analysis spans a continuum from low-level perception to high-level clinical understanding. Existing 3D-oriented analysis methods adopt either isolated task-specific modeling or task-agnostic end-to-end paradigms to produce one-hop outputs, impeding the systematic accumulation of perceptual evidence for downstream reasoning. In parallel, recent multimodal large language models (MLLMs) exhibit improved visual perception and can integrate visual and textual information effectively, yet their predominantly 2D-oriented designs fundamentally limit their ability to perceive and analyze volumetric medical data. To bridge this gap, we propose 3DMedAgent, a unified agent that enables 2D MLLMs to perform general 3D CT analysis without 3D-specific fine-tuning. 3DMedAgent coordinates heterogeneous visual and textual tools through a flexible MLLM agent, progressively decomposing complex 3D analysis into tractable subtasks that transition from global to regional views, from 3D volumes to informative 2D slices, and from visual evidence to structured textual representations. Central to this design, 3DMedAgent maintains a long-term structured memory that aggregates intermediate tool outputs and supports query-adaptive, evidence-driven multi-step reasoning. We further introduce the DeepChestVQA benchmark for evaluating unified perception-to-understanding capabilities in 3D thoracic imaging. Experiments across over 40 tasks demonstrate that 3DMedAgent consistently outperforms general, medical, and 3D-specific MLLMs, highlighting a scalable path toward general-purpose 3D clinical assistants. Code and data are available at https://github.com/jinlab-imvr/3DMedAgent.
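
As a concrete illustration of the 3D-volumes-to-2D-slices transition the abstract describes, the sketch below is an assumption of how such a step could look, not code from the linked repository: it windows a CT volume to a displayable range and ranks axial slices by informativeness, using ROI coverage when a mask is available and intensity variance otherwise. The chosen slices would then be handed to an off-the-shelf 2D MLLM. The window parameters and function names are illustrative.

```python
# Hypothetical 3D-to-2D stage: HU windowing plus informative-slice selection.
import numpy as np

def window_ct(volume: np.ndarray, center: float = -600.0, width: float = 1500.0) -> np.ndarray:
    """Map HU values to [0, 1]; the default lung window is illustrative only."""
    lo, hi = center - width / 2, center + width / 2
    return np.clip((volume - lo) / (hi - lo), 0.0, 1.0)

def select_informative_slices(volume: np.ndarray,
                              roi_mask: np.ndarray | None = None,
                              k: int = 3) -> list[int]:
    """Rank axial slices by ROI coverage, falling back to intensity variance."""
    if roi_mask is not None:
        scores = roi_mask.reshape(roi_mask.shape[0], -1).sum(axis=1)
    else:
        scores = volume.reshape(volume.shape[0], -1).var(axis=1)
    return np.argsort(scores)[::-1][:k].tolist()

# Usage on a fake axial-first CT volume; real inputs would come from a DICOM/NIfTI loader.
ct = np.random.uniform(-1000.0, 400.0, size=(64, 128, 128))
slice_ids = select_informative_slices(window_ct(ct), k=3)
print("slices handed to the 2D MLLM:", slice_ids)
```
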
Problem

Research questions and friction points this paper is trying to address.

3D medical analysis
perception-to-understanding
multimodal large language models
volumetric data
clinical reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

3DMedAgent
multimodal large language models
3D medical analysis
structured memory
perception-to-understanding