AI Summary
This work addresses a limitation of conventional normalized difference indices, which are typically applied as fixed preprocessing steps and thus cannot adapt to a specific learning task. The authors propose a differentiable, learnable normalized difference layer that integrates classical spectral indices into deep neural networks, enabling data-driven optimization of band weights through end-to-end training. To ensure positive coefficients and a bounded denominator, the method employs Softplus reparameterization; an extension further accommodates signed inputs while preserving illumination invariance and output boundedness. Experimental results demonstrate that the proposed model matches the classification accuracy of a standard MLP with approximately 75% fewer parameters and exhibits notable robustness: under 10% multiplicative noise its accuracy drops by only 0.17%, compared with 3.03% for the baseline MLP.
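To make the Softplus reparameterization concrete, the sketch below (plain Python; the function name and parameter values are illustrative, not taken from the paper) shows why it guarantees positive coefficients: softplus(θ) = log(1 + eᶿ) is strictly positive for every real θ, so a band weight parameterized this way can never become zero or negative, which keeps the normalized-difference denominator well behaved for non-negative band values.

```python
import math

def softplus(theta):
    # log(1 + exp(theta)), written in a numerically stable form that
    # avoids overflow for large positive theta.
    return math.log1p(math.exp(-abs(theta))) + max(theta, 0.0)

# Strictly positive output for any real-valued raw parameter,
# including strongly negative ones.
coeffs = [softplus(t) for t in (-8.0, -0.5, 0.0, 3.0)]
```

Because the mapping is smooth, gradients flow through it during backpropagation, which is what lets the raw parameters remain unconstrained while the effective coefficients stay positive.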
Abstract
Normalized difference indices have been a staple in remote sensing for decades: they stay reliable under lighting changes, produce bounded values, and connect well to biophysical signals. Even so, they are usually treated as a fixed pre-processing step with coefficients set to one, which limits how well they can adapt to a specific learning task. In this study, we introduce the Normalized Difference Layer, a differentiable neural network module that keeps the classical idea but learns the band coefficients from data. We present a complete mathematical framework for integrating this layer into deep learning architectures, using softplus reparameterization to ensure positive coefficients and bounded denominators, and we describe forward and backward pass algorithms that enable end-to-end training through backpropagation. This approach preserves the key benefits of normalized differences, namely illumination invariance and outputs bounded to $[-1,1]$, while allowing gradient descent to discover task-specific band weightings. We extend the method to work with signed inputs, so the layer can be stacked inside larger architectures. Experiments show that models using this layer reach classification accuracy similar to standard multilayer perceptrons while using about 75\% fewer parameters. They also handle multiplicative noise well: at 10\% noise, accuracy drops by only 0.17\% versus 3.03\% for baseline MLPs. The learned coefficient patterns stay consistent across different depths.
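The core mechanism described above can be sketched as a single learnable normalized-difference unit in plain Python. This is a minimal illustration under assumed names and values (`softplus`, `nd_layer`, the theta parameters, and the band reflectances are all hypothetical, not the paper's implementation); it shows the two properties the abstract highlights, positive learned coefficients via softplus and invariance to a common multiplicative lighting factor.

```python
import math

def softplus(theta):
    # Numerically stable log(1 + exp(theta)); strictly positive output.
    return math.log1p(math.exp(-abs(theta))) + max(theta, 0.0)

def nd_layer(x1, x2, theta1, theta2):
    """Forward pass of one learnable normalized-difference unit.

    The raw parameters theta_i are unconstrained; the effective band
    coefficients w_i = softplus(theta_i) are strictly positive, so for
    non-negative band values the denominator stays positive and the
    output lies in [-1, 1].
    """
    w1, w2 = softplus(theta1), softplus(theta2)
    return (w1 * x1 - w2 * x2) / (w1 * x1 + w2 * x2)

# Illumination invariance: a shared multiplicative factor on both bands
# (e.g. a brightness change) cancels between numerator and denominator.
nir, red = 0.8, 0.3                          # illustrative reflectances
y = nd_layer(nir, red, 0.5, -0.2)
y_bright = nd_layer(2.5 * nir, 2.5 * red, 0.5, -0.2)
```

With both coefficients fixed at one this reduces to a classical index such as NDVI; letting gradient descent adjust the thetas is what turns the fixed index into a trainable layer.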