🤖 AI Summary
This work addresses the severe artifacts in CT imaging caused by highly attenuating metallic implants, which often obscure critical anatomical structures. While existing deep learning approaches rely heavily on large-scale paired datasets—limiting their practicality—this study pioneers the integration of a general-purpose vision-language diffusion foundation model for metal artifact reduction. The method employs low-rank adaptation (LoRA) fine-tuning and introduces a multi-reference conditioning mechanism that leverages clean anatomical examples from unrelated subjects to guide reconstruction, effectively suppressing hallucinations. Requiring only 16–128 paired training cases, the approach achieves state-of-the-art performance in both perceptual quality and radiological fidelity on the AAPM CT-MAR benchmark, reducing data requirements by two orders of magnitude while significantly enhancing reconstruction accuracy and interpretability.
📝 Abstract
Metal artifacts from high-attenuation implants severely degrade CT image quality, obscuring critical anatomical structures and posing a challenge for standard deep learning methods that require extensive paired training data. We propose a paradigm shift: reframing artifact reduction as an in-context reasoning task by adapting a general-purpose vision-language diffusion foundation model via parameter-efficient Low-Rank Adaptation (LoRA). By leveraging rich visual priors, our approach achieves effective artifact suppression with only 16 to 128 paired training examples, reducing data requirements by two orders of magnitude. Crucially, we demonstrate that domain adaptation is essential for hallucination mitigation; without it, foundation models interpret streak artifacts as erroneous natural objects (e.g., waffles or petri dishes). To ground the restoration, we propose a multi-reference conditioning strategy in which clean anatomical exemplars from unrelated subjects are provided alongside the corrupted input, enabling the model to exploit category-specific context to infer uncorrupted anatomy. Extensive evaluation on the AAPM CT-MAR benchmark demonstrates that our method achieves state-of-the-art performance on perceptual and radiological-feature metrics. This work establishes that foundation models, when appropriately adapted, offer a scalable alternative for interpretable, data-efficient medical image reconstruction. Code is available at https://github.com/ahmetemirdagi/CT-EditMAR.
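For readers unfamiliar with the LoRA mechanism the abstract relies on, the sketch below shows the core idea: freeze the pretrained weight matrix and learn only a low-rank additive update. This is a minimal illustration of the general technique, not the authors' implementation; the class name, rank `r`, and scaling `alpha` are illustrative choices.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where A is (r x d_in) and B is (d_out x r).
    Only A and B are trained, so the number of trainable parameters is
    r * (d_in + d_out) instead of d_in * d_out."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # foundation-model weights stay frozen
        # A gets a small random init; B starts at zero so the wrapped layer
        # initially behaves exactly like the frozen base layer.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * ((x @ self.A.t()) @ self.B.t())
```

In a diffusion foundation model, adapters like this would typically wrap the attention projection layers, which is what keeps the fine-tuning footprint small enough to work with only tens of paired training cases.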