🤖 AI Summary
This work addresses three core challenges in flexible K→N modality synthesis for medical imaging: heterogeneous modality contributions, the risk of quality degradation during fusion, and identity inconsistency across multiple outputs. We propose the first quality-driven selective fusion framework. Methodologically, we design dynamic weighting mechanisms (PreWeightNet/ThresholdNet/EffiWeightNet) to model how well each modality suits a given target task; introduce a Causal Modality Identity Module (CMIM) to ensure generation consistency across outputs; and adopt SAM2's sequential-frame paradigm for controllable fusion. Our key innovation is the first joint integration of quality-aware modality selection and causal identity modeling into a multimodal generation pipeline, enabling arbitrary input-output modality combinations. Extensive experiments on multiple public benchmarks demonstrate significant improvements in synthesis quality, modality fidelity, and downstream task performance, consistently surpassing state-of-the-art methods.
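To make the selection mechanism concrete, below is a minimal PyTorch sketch of the three-stage weighting described above. The module names follow the paper, but every layer size, the hard thresholding step, and the softmax renormalization are illustrative assumptions; the actual network designs and differentiable gating are not specified in this summary.

```python
import torch
import torch.nn as nn

class QualityDrivenFusion(nn.Module):
    """Illustrative three-stage quality-driven fusion (layer sizes assumed)."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # PreWeightNet: global contribution score per input modality
        self.pre_weight_net = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
        # ThresholdNet: adaptive cutoff predicted from the pooled context
        self.threshold_net = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
        # EffiWeightNet: effective fusion weights over the surviving modalities
        self.effi_weight_net = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (K, feat_dim), one feature vector per available input modality
        pre = self.pre_weight_net(feats).squeeze(-1)                  # (K,) scores in (0, 1)
        tau = self.threshold_net(feats.mean(0, keepdim=True)).squeeze()  # scalar cutoff
        keep = pre >= tau                 # hard gate; the paper likely uses a
        if not keep.any():                # differentiable relaxation instead
            keep = torch.ones_like(keep)  # guard: never drop every modality
        logits = self.effi_weight_net(feats).squeeze(-1)              # (K,)
        logits = logits.masked_fill(~keep, float("-inf"))
        weights = torch.softmax(logits, dim=0)                        # renormalize survivors
        return (weights.unsqueeze(-1) * feats).sum(dim=0)             # fused representation
```

For example, `QualityDrivenFusion(256)(torch.randn(4, 256))` would fuse K=4 modality feature vectors into a single representation; how that fused feature conditions the N-output generator is beyond this sketch.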
📝 Abstract
Cross-modal medical image synthesis aims to reconstruct missing imaging modalities from available ones to support clinical diagnosis. Motivated by the clinical need for flexible modality reconstruction, we explore K→N medical image generation, where three critical challenges emerge: How can we model the heterogeneous contributions of different modalities to various target tasks? How can we control fusion quality to prevent degradation from noisy information? How can we maintain modality identity consistency in multi-output generation? Drawing inspiration from SAM2's sequential-frame paradigm and from clinicians' progressive workflow of incrementally adding and selectively integrating multi-modal information, we treat multi-modal medical data as sequential frames governed by quality-driven selection mechanisms. Our key idea is to "learn" adaptive weights for each modality-task pair and "memorize" beneficial fusion patterns through progressive enhancement. To achieve this, we design three collaborative modules: PreWeightNet for global contribution assessment, ThresholdNet for adaptive filtering, and EffiWeightNet for effective weight computation. To maintain modality identity consistency, we further propose the Causal Modality Identity Module (CMIM), which establishes causal constraints between generated images and target modality descriptions using vision-language modeling. Extensive experimental results demonstrate that our proposed Med-K2N outperforms state-of-the-art methods by significant margins on multiple benchmarks. Source code is available.
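For illustration, the sketch below shows one plausible way an identity constraint in the spirit of CMIM could be scored: each generated image's embedding is pushed toward the embedding of its own target-modality description under a contrastive loss. The encoders, the example description text, the temperature, and the contrastive formulation are all assumptions; the paper's actual causal formulation is not reproduced here.

```python
import torch
import torch.nn.functional as F

def modality_identity_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor,
                           temperature: float = 0.07) -> torch.Tensor:
    """Contrastive stand-in for CMIM's identity constraint (formulation assumed).

    img_emb: (N, D) embeddings of the N generated outputs from a vision encoder.
    txt_emb: (N, D) embeddings of the matching target-modality descriptions,
             e.g. "a T2-weighted brain MRI", from the paired text encoder.
    """
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature   # pairwise image-text similarity
    targets = torch.arange(logits.size(0), device=logits.device)
    # Each generated image should match its own modality description rather
    # than the descriptions of the other N-1 outputs, discouraging any output
    # from drifting toward the wrong modality's appearance.
    return F.cross_entropy(logits, targets)
```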