🤖 AI Summary
This study addresses the lack of asymptotic analysis for the outputs of fully convolutional networks (FCNs) applied to time series inputs. Focusing on time series generated by short-memory linear processes, it establishes the first theoretical result showing that the output of an FCN equipped with global average pooling (GAP) converges in distribution to a Gaussian as the sequence length tends to infinity, thereby giving a rigorous foundation for its asymptotic normality. Building on this result, the authors propose a learnable global weighted pooling layer whose weights vary slowly over time, extending the representational capacity beyond conventional GAP. The work thus offers both theoretical grounding and an architectural improvement for deep learning models in time series analysis.
📝 Abstract
Deep learning is widely deployed for time series learning tasks such as classification and forecasting. Despite these empirical successes, little theory has been developed so far in the time series context. In this work, we prove that if the network inputs are generated from short-range dependent linear processes, the outputs of fully convolutional neural networks (FCNs) with global average pooling (GAP) are asymptotically Gaussian, with the limit attained as the length of the observed time series tends to infinity. The proof leverages existing tools from the theoretical time series literature. Based on our theory, we propose a generalization of the GAP layer by considering a global weighted pooling step with slowly varying, learnable coefficients.
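To make the proposed generalization concrete, here is a minimal numpy sketch of global weighted pooling alongside standard GAP. The parametrization of the slowly varying weights through a few low-frequency cosine basis functions is an illustrative assumption, not the paper's actual construction; the function names are hypothetical. With all coefficients set to zero, the weights are uniform and the layer reduces exactly to GAP.

```python
import numpy as np

def global_average_pooling(x):
    # x: (channels, length) feature map from the FCN; average over time.
    return x.mean(axis=1)

def global_weighted_pooling(x, theta):
    # Hypothetical sketch: the weights vary slowly over time by construction,
    # parametrized through a few low-frequency cosine basis functions
    # (an assumption; the paper's exact parametrization may differ).
    n = x.shape[1]
    t = np.arange(n) / n
    basis = np.stack([np.cos(np.pi * k * t) for k in range(len(theta))])
    w = np.exp(theta @ basis)   # positive, smooth weight profile over time
    w /= w.sum()                # normalize so the weights sum to 1
    return x @ w

x = np.random.default_rng(0).normal(size=(8, 100))
theta = np.zeros(3)             # zero coefficients give uniform weights
gap = global_average_pooling(x)
gwp = global_weighted_pooling(x, theta)
assert np.allclose(gap, gwp)    # GAP is the special case of uniform weights
```

In a real model `theta` would be learned jointly with the convolutional filters; keeping the basis low-dimensional is one way to enforce the slow variation of the pooling weights that the theory requires.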