AnatomiX, an Anatomy-Aware Grounded Multimodal Large Language Model for Chest X-Ray Interpretation

📅 2026-01-06
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the lack of precise anatomical spatial correspondence in existing multimodal large models for chest X-ray interpretation, which often leads to erroneous anatomical understanding. To overcome this limitation, the authors propose AnatomiX, the first framework to explicitly integrate an anatomy-aware mechanism into a multimodal large language model by emulating the radiologist workflow: the first stage identifies anatomical structures and extracts region-specific features, and the second stage uses a multitask large language model to support downstream tasks such as phrase grounding, report generation, and visual question answering. Across multiple benchmarks, AnatomiX improves performance by more than 25% on anatomical localization, phrase grounding, localized diagnosis, and descriptive tasks, enabling truly anatomy-aligned reasoning.

πŸ“ Abstract
Multimodal medical large language models have shown impressive progress in chest X-ray interpretation but continue to face challenges in spatial reasoning and anatomical understanding. Although existing grounding techniques improve overall performance, they often fail to establish a true anatomical correspondence, resulting in incorrect anatomical understanding in the medical domain. To address this gap, we introduce AnatomiX, a multitask multimodal large language model explicitly designed for anatomically grounded chest X-ray interpretation. Inspired by the radiological workflow, AnatomiX adopts a two-stage approach: first, it identifies anatomical structures and extracts their features, and then it leverages a large language model to perform diverse downstream tasks such as phrase grounding, report generation, visual question answering, and image understanding. Extensive experiments across multiple benchmarks demonstrate that AnatomiX achieves superior anatomical reasoning and delivers over 25% improvement in performance on anatomy grounding, phrase grounding, grounded diagnosis, and grounded captioning tasks compared to existing approaches. Code and the pretrained model are available at https://github.com/aneesurhashmi/anatomix
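The two-stage workflow the abstract describes can be sketched in pseudocode-style Python. This is a minimal illustration of the pipeline shape only; every class, function, and value below is a hypothetical placeholder and is not taken from the AnatomiX codebase.

```python
# Hypothetical sketch of the two-stage pipeline described in the abstract.
# All names and values here are illustrative, not the authors' implementation.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Region:
    name: str                      # anatomical structure, e.g. "left lung"
    bbox: Tuple[int, int, int, int]  # (x, y, w, h) in image coordinates
    features: List[float]          # region-specific visual features


def stage1_anatomy_parser(image) -> List[Region]:
    """Stage 1: detect anatomical structures and extract per-region features.

    A real system would run an anatomical detector/segmenter and a visual
    encoder here; we return fixed dummy regions for illustration.
    """
    return [
        Region("left lung", (10, 20, 80, 120), [0.1, 0.4]),
        Region("cardiac silhouette", (60, 90, 70, 60), [0.7, 0.2]),
    ]


def stage2_multitask_llm(regions: List[Region], task: str, query: str = "") -> str:
    """Stage 2: a multitask LLM consumes region tokens to solve the task
    (phrase grounding, report generation, VQA, ...)."""
    # Placeholder: serialize regions into a prompt; a real LLM call would
    # replace this string construction.
    region_tokens = "; ".join(f"{r.name}@{r.bbox}" for r in regions)
    return f"[{task}] {query} | regions: {region_tokens}"


regions = stage1_anatomy_parser(image=None)
print(stage2_multitask_llm(regions, task="phrase_grounding",
                           query="locate cardiomegaly"))
```

The point of the sketch is the division of labor: stage 1 produces explicit, named anatomical regions, so stage 2's language model can ground its outputs in those regions rather than in raw pixels.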
Problem

Research questions and friction points this paper is trying to address.

anatomical understanding
spatial reasoning
chest X-ray interpretation
anatomical correspondence
multimodal large language models

Innovation

Methods, ideas, or system contributions that make the work stand out.

anatomy-aware
multimodal large language model
chest X-ray interpretation
phrase grounding
anatomical reasoning