🤖 AI Summary
This work addresses the challenge that existing infrared and visible image fusion methods struggle to satisfy the requirements of diverse downstream semantic tasks simultaneously. To this end, the authors propose a Closed-Loop Dynamic Network (CLDyN) that incorporates a Requirement-driven Semantic Compensation (RSC) mechanism, explicitly feeding task-specific feedback back into the fusion process to dynamically adjust fusion strategies. CLDyN establishes a semantic transmission chain by leveraging a Basis Vector Bank (BVB) and an Architecture-Adaptive Semantic Injection (A²SI) module, enabling task-customized fusion without retraining. A reward-penalty optimization strategy is further introduced to guide semantic compensation effectively. Experiments on the M3FD, FMB, and VT5000 datasets demonstrate that CLDyN maintains high fusion quality while significantly enhancing multi-task adaptability.
📝 Abstract
Infrared-visible image fusion aims to integrate complementary information for robust visual understanding, but existing fusion methods struggle to adapt to multiple downstream tasks simultaneously. To address this issue, we propose a Closed-Loop Dynamic Network (CLDyN) that can adaptively respond to the semantic requirements of diverse downstream tasks for task-customized image fusion. Specifically, CLDyN introduces a closed-loop optimization mechanism that establishes a semantic transmission chain, providing explicit feedback from downstream tasks to the fusion network through a Requirement-driven Semantic Compensation (RSC) module. The RSC module leverages a Basis Vector Bank (BVB) and an Architecture-Adaptive Semantic Injection (A²SI) block to customize the network architecture according to task requirements, thereby enabling task-specific semantic compensation and allowing the fusion network to actively adapt to diverse tasks without retraining. To promote effective semantic compensation, a reward-penalty strategy is introduced that rewards or penalizes the RSC module based on task performance variations. Experiments on the M3FD, FMB, and VT5000 datasets demonstrate that CLDyN not only maintains high fusion quality but also exhibits strong multi-task adaptability. The code is available at https://github.com/YR0211/CLDyN.
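The core closed-loop idea — compose a compensation signal from a bank of basis vectors, then reward or penalize the adjustment according to downstream task performance — can be illustrated with a toy sketch. Everything below (the 2-D feature space, the scalar "task score", the hill-climbing update) is a hypothetical stand-in for intuition, not the authors' actual CLDyN implementation:

```python
import random

# Toy Basis Vector Bank (BVB): each entry is a direction in a tiny 2-D feature space.
BANK = [[1.0, 0.0], [0.0, 1.0]]

def compensation(weights):
    """Compose a semantic compensation vector as a weighted sum of bank vectors."""
    return [sum(w * b[i] for w, b in zip(weights, BANK)) for i in range(2)]

def task_score(comp, target=(0.7, 0.3)):
    """Illustrative stand-in for downstream task performance:
    higher (closer to 0) when the compensation is near the task's preferred point."""
    return -sum((c - t) ** 2 for c, t in zip(comp, target))

def closed_loop(steps=200, lr=0.05, seed=0):
    """Reward-penalty loop: keep a perturbation of the bank weights if the
    task score improves (reward), revert it otherwise (penalty)."""
    rng = random.Random(seed)
    weights = [0.0, 0.0]
    best = task_score(compensation(weights))
    for _ in range(steps):
        trial = [w + lr * rng.uniform(-1, 1) for w in weights]
        score = task_score(compensation(trial))
        if score > best:           # reward: accept the adjustment
            weights, best = trial, score
        # penalty: otherwise the trial is discarded and previous weights kept
    return weights, best

weights, best = closed_loop()
print(weights, best)
```

Because feedback arrives only as a scalar performance change, this sketch uses a derivative-free accept/reject update; the key point it mirrors is that the task, not the fusion objective alone, steers how the compensation is composed.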