🤖 AI Summary
In multi-focus image fusion (MFIF), blurred decision map boundaries critically limit fusion quality. To address this, we propose a neurodynamics-driven coupled neural P system—the first to integrate third-generation spiking neural models into MFIF. Our method maps source images to interpretable spike matrices and directly generates high-precision decision maps via quantitative spike-count comparison, eliminating post-processing. Crucially, it explicitly models the constrained dynamical relationship between neuronal states and input signals, suppressing anomalous sustained spiking to ensure accurate discrimination between focused and defocused regions. This end-to-end, interpretable framework achieves state-of-the-art performance on four major benchmarks—Lytro, MFFW, MFI-WHU, and Real-MFF—demonstrating significant improvements in boundary sharpness and visual fidelity while preserving structural and textural details.
📝 Abstract
Multi-focus image fusion (MFIF) is a crucial technique in image processing, with a key challenge being the generation of decision maps with precise boundaries. However, traditional methods based on heuristic rules and deep learning methods with black-box mechanisms struggle to generate high-quality decision maps. To overcome this challenge, we introduce neurodynamics-driven coupled neural P (CNP) systems, which are third-generation neural computation models inspired by spiking mechanisms, to enhance the accuracy of decision maps. Specifically, we first conduct an in-depth analysis of the model's neurodynamics to identify the constraints between the network parameters and the input signals. This analysis prevents abnormal continuous firing of neurons and ensures that the model accurately distinguishes between focused and unfocused regions, generating high-quality decision maps for MFIF. Based on this analysis, we propose a **N**eurodynamics-**D**riven **CNP** **F**usion model (**ND-CNPFuse**) tailored for the challenging MFIF task. Unlike current approaches to decision map generation, ND-CNPFuse distinguishes between focused and unfocused regions by mapping the source images into interpretable spike matrices. By comparing the number of spikes, an accurate decision map can be generated directly without any post-processing. Extensive experimental results show that ND-CNPFuse achieves new state-of-the-art performance on four classical MFIF datasets: Lytro, MFFW, MFI-WHU, and Real-MFF. The code is available at https://github.com/MorvanLi/ND-CNPFuse.
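The core decision step described above, comparing per-pixel spike counts to decide which source image is in focus, can be sketched as follows. This is a minimal illustration only: it assumes spike matrices of shape `(time_steps, H, W)` have already been produced by some spiking model, and the function names (`spike_count_decision_map`, `fuse`) are hypothetical, not part of the ND-CNPFuse codebase.

```python
import numpy as np

def spike_count_decision_map(spikes_a: np.ndarray, spikes_b: np.ndarray) -> np.ndarray:
    """Compare per-pixel spike counts from two spike matrices.

    spikes_a, spikes_b: binary arrays of shape (time_steps, H, W),
    one per source image. Returns a binary decision map of shape (H, W),
    where 1 means the pixel is taken from image A (it elicited more
    spikes, i.e. is assumed to be in the focused region).
    """
    counts_a = spikes_a.sum(axis=0)  # total spikes per pixel over time
    counts_b = spikes_b.sum(axis=0)
    return (counts_a >= counts_b).astype(np.uint8)

def fuse(img_a: np.ndarray, img_b: np.ndarray, decision: np.ndarray) -> np.ndarray:
    """Select each pixel from the source image its decision bit points to."""
    return np.where(decision == 1, img_a, img_b)

# Toy example: 2 time steps, 2x2 images.
spikes_a = np.array([[[1, 0], [0, 0]],
                     [[1, 0], [0, 1]]])   # counts: [[2, 0], [0, 1]]
spikes_b = np.array([[[0, 1], [1, 0]],
                     [[0, 1], [0, 0]]])   # counts: [[0, 2], [1, 0]]
decision = spike_count_decision_map(spikes_a, spikes_b)

img_a = np.full((2, 2), 10)
img_b = np.full((2, 2), 20)
fused = fuse(img_a, img_b, decision)
```

The point of this sketch is the paper's claim that no post-processing is needed: the decision map falls out of a direct count comparison, so fusion reduces to a per-pixel selection.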