🤖 AI Summary
In infrared and visible image fusion, existing gradient-magnitude-based loss functions neglect gradient direction information, leading to edge blurring and imprecise supervision. To address this, we propose the Direction-Aware Multi-scale Gradient Loss (DAMGL), the first loss that explicitly models horizontal and vertical gradient components—including their signs—within a multi-scale framework, thereby enforcing cross-scale directional consistency. DAMGL requires no architectural modifications or adjustments to training procedures and can be seamlessly integrated as a plug-and-play module into mainstream deep learning frameworks. Extensive experiments across multiple benchmark datasets and open-source models demonstrate that DAMGL significantly enhances edge sharpness and texture fidelity in fused images, outperforming conventional gradient-magnitude losses. By incorporating explicit directional priors into gradient supervision, DAMGL establishes a more accurate, direction-aware supervision paradigm for image fusion.
📝 Abstract
Infrared and visible image fusion aims to integrate complementary information from co-registered source images to produce a single, informative result. Most learning-based approaches train with a combination of structural similarity loss, intensity reconstruction loss, and a gradient-magnitude term. However, collapsing gradients to their magnitude removes directional information, yielding ambiguous supervision and suboptimal edge fidelity. We introduce a direction-aware, multi-scale gradient loss that supervises horizontal and vertical components separately and preserves their sign across scales. This axis-wise, sign-preserving objective provides clear directional guidance at both fine and coarse resolutions, promoting sharper, better-aligned edges and richer texture preservation without changing model architectures or training protocols. Experiments on open-source models and multiple public benchmarks demonstrate the effectiveness of our approach.
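To make the idea concrete, here is a minimal pure-Python sketch of an axis-wise, sign-preserving, multi-scale gradient loss as the abstract describes it. The names (`damgl_loss`, `grad_xy`, `downsample`) and the use of forward differences, 2x average pooling, and a single target image are illustrative assumptions, not the paper's actual implementation; a real version would typically use Sobel-style filters in a deep learning framework and supervise against, e.g., a combination of the infrared and visible source gradients.

```python
def grad_xy(img):
    """Horizontal and vertical forward differences, with sign preserved."""
    h, w = len(img), len(img[0])
    gx = [[img[r][c + 1] - img[r][c] for c in range(w - 1)] for r in range(h)]
    gy = [[img[r + 1][c] - img[r][c] for c in range(w)] for r in range(h - 1)]
    return gx, gy

def downsample(img):
    """2x average pooling to build the next coarser scale."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2 * r][2 * c] + img[2 * r][2 * c + 1]
              + img[2 * r + 1][2 * c] + img[2 * r + 1][2 * c + 1]) / 4.0
             for c in range(w)] for r in range(h)]

def l1(a, b):
    """Sum of absolute differences between two 2D grids."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def damgl_loss(fused, target, scales=3):
    """Axis-wise, sign-preserving gradient L1, summed over scales.

    Unlike a gradient-magnitude loss, the x and y components are
    supervised separately and their signs are kept, so a fused edge
    with the right strength but the wrong direction is still penalized.
    """
    total = 0.0
    for _ in range(scales):
        fx, fy = grad_xy(fused)
        tx, ty = grad_xy(target)
        total += l1(fx, tx) + l1(fy, ty)
        fused, target = downsample(fused), downsample(target)
    return total
```

One property worth noting: for a fused image whose gradients have the same magnitude as the target's but the opposite sign (e.g., a left-to-right ramp against a right-to-left one), a magnitude-only loss is zero while this loss is strictly positive, which is exactly the ambiguity the abstract attributes to gradient-magnitude supervision.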