🤖 AI Summary
This work addresses the challenges of 3D medical image understanding within multimodal large language models, including high computational costs, disruption of volumetric continuity, and the tendency to overlook subtle lesions. To tackle these issues, the authors propose an adaptive compression framework based on variable-length token sequences. The framework employs an instruction-conditioned token scheduling mechanism to dynamically preserve critical visual information and introduces a differentiable discrete token dropping strategy coupled with a surrogate gradient propagation rule, enabling efficient token compression during both training and inference. Additionally, a regularization objective is incorporated to mitigate language bias. The method achieves state-of-the-art performance across multiple medical visual question answering benchmarks while substantially reducing computational resource consumption, effectively balancing accuracy and efficiency.
📝 Abstract
Multimodal large language models are promising for clinical visual question answering tasks, but scaling them to 3D imaging is hindered by high computational costs. Prior methods often rely on 2D slices or fixed-length token compression, disrupting volumetric continuity and obscuring subtle findings. We present Photon, a framework that represents 3D medical volumes with token sequences of variable length. Photon introduces instruction-conditioned token scheduling and surrogate gradient propagation to adaptively reduce tokens during both training and inference, which lowers computational cost while mitigating the attention dilution caused by redundant tokens. It incorporates a custom backpropagation rule with gradient restoration to enable differentiable optimization despite discrete token dropping. To stabilize token compression and ensure reliable use of visual evidence, Photon further applies regularization objectives that mitigate language-only bias. Experiments on diverse medical visual question answering tasks show that Photon achieves state-of-the-art accuracy while reducing resource usage and accelerating both training and inference.
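To make the "differentiable optimization despite discrete token dropping" idea concrete, here is a minimal NumPy sketch of one common way such a surrogate gradient can be realized: the forward pass applies a hard top-k keep/drop mask over token scores, while the backward pass restores gradients by pretending the mask was a smooth sigmoid of the scores. The function names and the sigmoid surrogate are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def drop_mask_forward(scores, keep_ratio):
    """Forward pass: hard binary mask keeping the top-k scored tokens.

    scores     -- per-token importance scores (1D array), assumed given
    keep_ratio -- fraction of tokens to keep (e.g. from a scheduler)
    Returns a 0/1 mask; dropped tokens get 0 and contribute nothing downstream.
    """
    k = max(1, int(round(len(scores) * keep_ratio)))
    kept = np.argsort(scores)[::-1][:k]  # indices of the k highest scores
    mask = np.zeros_like(scores)
    mask[kept] = 1.0
    return mask

def drop_mask_backward(grad_out, scores):
    """Surrogate backward pass: gradient 'restoration' for the hard mask.

    The hard top-k mask has zero gradient almost everywhere, so we instead
    propagate gradients as if the mask were sigmoid(scores), letting the
    score predictor receive a learning signal for every token.
    """
    s = 1.0 / (1.0 + np.exp(-scores))  # smooth relaxation of the mask
    return grad_out * s * (1.0 - s)    # chain rule through the sigmoid
```

With `scores = [2.0, -1.0, 0.5, 3.0]` and `keep_ratio = 0.5`, the forward pass keeps the two highest-scored tokens (indices 3 and 0), while the backward pass still returns a nonzero gradient for the dropped tokens, so their scores can be adjusted during training even though they were discarded in this step.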