🤖 AI Summary
This study investigates the relationship between eight learned “master key filters” in depthwise-separable convolutional networks and idealized receptive fields derived from scale-space theory. The master key filters are extracted by applying a clustering approach to the depthwise filters learned by ConvNeXt architectures. An idealized filter model is proposed, based on difference operators applied to the discrete analogue of the Gaussian kernel, and fitted either by matching spatial spread measures (weighted means and variances of the absolute filter values) or by l1/l2-norm minimization. The spread measures support modelling the learned filters as spatially separable, and reveal that the non-centered filters have spatial offsets close to half a grid unit. Experiments demonstrate that the idealized model approximates the learned filters well, both qualitatively and in predictive performance when substituted for the learned filters in the network. The core contribution is a theoretical link between master key filters and scale-space differential operators, offering an interpretable model of the receptive fields learned by depthwise-separable networks.
📝 Abstract
This paper presents the results of analysing and modelling a set of 8 ``master key filters'', which have been extracted by applying a clustering approach to the receptive fields learned in depthwise-separable deep networks based on the ConvNeXt architecture.
For this purpose, we first compute spatial spread measures in terms of weighted mean values and weighted variances of the absolute values of the learned filters, which support the working hypotheses that: (i) the learned filters can be modelled by separable filtering operations over the spatial domain, and (ii) the spatial offsets of those learned filters that are non-centered are rather close to half a grid unit. Then, we model the clustered ``master key filters'' in terms of difference operators applied to a spatial smoothing operation in terms of the discrete analogue of the Gaussian kernel, and demonstrate that the resulting idealized models of the receptive fields show good qualitative similarity to the learned filters.
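The spread measures described above can be sketched in a few lines of numpy. This is a minimal illustrative sketch, not the authors' code: the function name, the 7×7 filter size (the depthwise kernel size used by ConvNeXt), and the toy filter are my own choices. The absolute filter values are normalized into a probability mass, from which per-axis weighted means and variances follow.

```python
import numpy as np

def spread_measures(f):
    """Weighted spatial mean and variance of the absolute values of a
    2-D filter f, with |f| normalized to a probability mass.
    (Illustrative sketch of the spread measures described above.)"""
    w = np.abs(f)
    w = w / w.sum()
    ys, xs = np.mgrid[0:f.shape[0], 0:f.shape[1]]
    mean_x = (w * xs).sum()
    mean_y = (w * ys).sum()
    var_x = (w * (xs - mean_x) ** 2).sum()
    var_y = (w * (ys - mean_y) ** 2).sum()
    return (mean_x, mean_y), (var_x, var_y)

# A toy filter whose mass is split between two horizontally neighbouring
# cells: its weighted mean lands halfway between them, i.e. at an offset
# of half a grid unit from either cell, as for the non-centered filters.
f = np.zeros((7, 7))
f[3, 3], f[3, 4] = 1.0, -1.0   # |f| weighs both cells equally
(mx, my), (vx, vy) = spread_measures(f)
```

For this toy filter the weighted mean is (3.5, 3.0), illustrating how a half-grid-unit offset arises from the spread measures.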
This modelling is performed in two different ways: (i) using possibly different values of the scale parameters in the coordinate directions for each filter, and (ii) using the same value of the scale parameter in both coordinate directions. Then, we perform the actual model fitting by either (i) requiring spatial spread measures in terms of spatial variances of the absolute values of the receptive fields to be equal, or (ii) minimizing the discrete $l_1$- or $l_2$-norms between the idealized receptive field models and the learned filters.
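Under this model, an idealized receptive field is a separable product of central differences applied to the discrete analogue of the Gaussian kernel, $T(n, t) = e^{-t} I_n(t)$, where $I_n$ is the modified Bessel function. The sketch below is a numpy-only illustration under my own assumptions: $T$ is evaluated from its Fourier generating function rather than from Bessel routines, the fit uses variant (ii) of both choices (a single shared scale parameter, $l_2$-norm minimization), and the brute-force scale search stands in for whatever optimizer the authors used.

```python
import numpy as np

def disc_gauss(t, radius, num=2048):
    """Discrete analogue of the Gaussian, T(n, t) = exp(-t) I_n(t),
    evaluated numerically via its Fourier pair exp(t (cos w - 1))."""
    w = np.linspace(-np.pi, np.pi, num, endpoint=False)
    gen = np.exp(t * (np.cos(w) - 1.0))
    n = np.arange(-radius, radius + 1)
    return (gen[None, :] * np.cos(np.outer(n, w))).mean(axis=1)

def ideal_filter(t_x, t_y, radius=3, dx=1, dy=0):
    """Separable idealized receptive field: central differences of
    orders (dx, dy) applied to discrete Gaussian smoothing per axis."""
    def axis_kernel(t, order):
        k = disc_gauss(t, radius + order)  # pad so 'valid' keeps size
        for _ in range(order):
            k = np.convolve(k, [0.5, 0.0, -0.5], mode="valid")
        return k
    return np.outer(axis_kernel(t_y, dy), axis_kernel(t_x, dx))

def fit_scale(F, dx=1, dy=0):
    """Brute-force l2 fit of one shared scale t (plus an amplitude a)
    of the idealized model to a learned filter F."""
    best = None
    for t in np.linspace(0.1, 4.0, 40):
        M = ideal_filter(t, t, radius=F.shape[0] // 2, dx=dx, dy=dy)
        a = (M * F).sum() / (M * M).sum()   # least-squares amplitude
        err = np.linalg.norm(a * M - F)
        if best is None or err < best[0]:
            best = (err, t, a)
    return best
```

As a sanity check, fitting a synthetic "learned" filter generated by the model itself, e.g. `fit_scale(2.0 * ideal_filter(1.5, 1.5))`, recovers the scale $t = 1.5$ and amplitude $2.0$ with near-zero residual.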
Complementary experimental results then demonstrate that the idealized models of receptive fields have good predictive properties when the learned filters in depthwise-separable deep networks are replaced by the idealized filters, thus showing that the learned filters in depthwise-separable deep networks can be well approximated by discrete scale-space filters.