🤖 AI Summary
Existing public brain tumor MRI datasets lack sufficient annotation richness and clinical semantics, hindering models from achieving accurate and interpretable diagnostic reasoning. To address this limitation, this work introduces MM-NeuroOnco, a large-scale multimodal instruction-tuning dataset comprising 24,726 MRI slices and approximately 200,000 semantic instructions, along with a human-annotated evaluation benchmark, MM-NeuroOnco-Bench. The work proposes a multi-model collaborative pipeline for automated medical semantic completion and quality control, and incorporates a refusal-aware evaluation setting to mitigate biases inherent in closed-ended question formats. Experimental results show that the strongest baseline model achieves only 41.88% accuracy on diagnostic tasks, whereas NeuroOnco-GPT, fine-tuned on the proposed dataset, improves accuracy by 27 percentage points, validating the efficacy of both the dataset and the evaluation framework.
📝 Abstract
Accurate brain tumor diagnosis requires models not only to detect lesions but also to generate clinically interpretable reasoning grounded in imaging manifestations; yet existing public datasets remain limited in annotation richness and diagnostic semantics. To bridge this gap, we introduce MM-NeuroOnco, a large-scale multimodal benchmark and instruction-tuning dataset for brain tumor MRI understanding, consisting of 24,726 MRI slices from 20 data sources paired with approximately 200,000 semantically enriched multimodal instructions spanning diverse tumor subtypes and imaging modalities. To mitigate the scarcity and high cost of diagnostic semantic annotations, we develop a multi-model collaborative pipeline for automated medical information completion and quality control, enabling the generation of diagnosis-related semantics beyond mask-only annotations. Building on this dataset, we further construct MM-NeuroOnco-Bench, a manually annotated evaluation benchmark with a refusal-aware setting that reduces biases inherent in closed-ended question formats. Evaluation across ten representative models shows that even the strongest baseline, Gemini 3 Flash, achieves only 41.88% accuracy on diagnosis-related questions, highlighting the substantial challenges of multimodal brain tumor diagnostic understanding. Leveraging MM-NeuroOnco, we further propose NeuroOnco-GPT, which achieves a 27-percentage-point absolute accuracy improvement on diagnostic questions after fine-tuning. This result demonstrates the effectiveness of our dataset and benchmark in advancing clinically grounded multimodal diagnostic reasoning. Code and dataset are publicly available at: https://github.com/gfnnnb/MM-NeuroOnco
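To make the refusal-aware idea concrete, the sketch below shows one plausible scoring scheme (an assumption for illustration, not the paper's exact metric): a model may either answer a diagnostic question or explicitly refuse, and refusals are tracked separately from errors so that a model cannot inflate its score by guessing on every item. The function name and refusal token are hypothetical.

```python
def refusal_aware_scores(predictions, gold, refusal_token="REFUSE"):
    """Hypothetical refusal-aware scoring (illustrative, not the paper's metric).

    Returns (accuracy_on_attempted, coverage, overall_accuracy):
      - accuracy_on_attempted: correct / answered (refusals excluded)
      - coverage: fraction of questions the model chose to answer
      - overall_accuracy: correct / all questions (refusals count as misses)
    """
    assert len(predictions) == len(gold)
    # Keep only the questions the model actually attempted.
    attempted = [(p, g) for p, g in zip(predictions, gold) if p != refusal_token]
    correct = sum(p == g for p, g in attempted)
    coverage = len(attempted) / len(gold) if gold else 0.0
    acc_attempted = correct / len(attempted) if attempted else 0.0
    overall = correct / len(gold) if gold else 0.0
    return acc_attempted, coverage, overall
```

Reporting accuracy-on-attempted alongside coverage separates diagnostic competence from answer bias: a model that refuses when uncertain keeps a high attempted accuracy at lower coverage, while an always-guessing model shows full coverage with degraded accuracy.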