🤖 AI Summary
This work addresses two central challenges in high-dimensional nonlinear functional learning: the curse of dimensionality and limited interpretability. We propose a hybrid model that integrates a convolutional architecture with a fully connected deep network: the former extracts sparse local features to mitigate the curse of dimensionality, while the latter approximates complex nonlinear functionals. Coupled with a general discretization strategy, this framework enables stable recovery of target functionals from finitely many samples. Leveraging sparse approximation theory, we develop a sparsity-aware framework compatible with both deterministic and random sampling schemes. In function spaces with rapid spectral decay or mixed smoothness, the approach achieves substantially improved approximation rates and significantly reduced sample complexity.
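As a rough illustration of this hybrid architecture (a minimal sketch under assumed shapes, not the authors' exact construction), the following PyTorch snippet pairs a convolutional feature extractor with a fully connected tail that maps discretized function values to a scalar functional value. The class name, layer sizes, and sample count are all hypothetical.

```python
import torch
import torch.nn as nn


class HybridFunctionalNet(nn.Module):
    """Hypothetical sketch: a convolutional front end extracts sparse local
    features from a discretized function; a fully connected tail approximates
    the nonlinear functional. All sizes are illustrative only."""

    def __init__(self, n_samples: int = 256, n_channels: int = 16, hidden: int = 64):
        super().__init__()
        # Convolutional stage: local filters over the sampled function values,
        # intended to produce a sparse, low-dimensional feature representation.
        # Each stride-2 convolution halves the length (for even input lengths).
        self.conv = nn.Sequential(
            nn.Conv1d(1, n_channels, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(n_channels, n_channels, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
        )
        feat_dim = n_channels * (n_samples // 4)  # assumes n_samples divisible by 4
        # Fully connected stage: approximates the nonlinear functional F(f)
        # from the extracted features.
        self.mlp = nn.Sequential(
            nn.Flatten(),
            nn.Linear(feat_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar functional value
        )

    def forward(self, f_samples: torch.Tensor) -> torch.Tensor:
        # f_samples: (batch, n_samples) values of f at the discretization points
        return self.mlp(self.conv(f_samples.unsqueeze(1)))


model = HybridFunctionalNet()
y = model(torch.randn(8, 256))  # (8, 1): one functional value per input function
```

Here the convolutional stage plays the role of the sparse local feature extractor described above, while the small MLP carries the nonlinear-functional approximation.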
📝 Abstract
Deep neural networks have emerged as powerful tools for learning operators defined over infinite-dimensional function spaces. However, existing theories frequently suffer from the curse of dimensionality and limited interpretability. This work investigates how sparsity can help address these challenges in functional learning, a central ingredient of operator learning. We propose a framework that employs convolutional architectures to extract sparse features from finitely many samples, together with deep fully connected networks to approximate nonlinear functionals effectively. Using universal discretization methods, we show that sparse approximators enable stable recovery from discrete samples, and our analysis accommodates both deterministic and random sampling schemes. These findings yield improved approximation rates and reduced sample sizes in various function spaces, including those with fast frequency decay and mixed smoothness, and provide new theoretical insight into how sparsity can alleviate the curse of dimensionality in functional learning.
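To make the sampling step concrete, here is a minimal sketch (assuming scalar functions on [0, 1]; the function name and signature are illustrative, not from the paper) of how an input function could be reduced to the finite sample vector the network consumes, under either a deterministic grid or a random design:

```python
import numpy as np


def discretize(f, m: int = 256, scheme: str = "deterministic", rng=None):
    """Hypothetical discretization step: reduce a function f on [0, 1]
    to the m point values that serve as the network input."""
    if scheme == "deterministic":
        # Equispaced grid: a simple deterministic discretization choice.
        x = np.linspace(0.0, 1.0, m)
    elif scheme == "random":
        # I.i.d. uniform sample points, sorted for convenience.
        rng = rng or np.random.default_rng()
        x = np.sort(rng.uniform(0.0, 1.0, m))
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    return f(x)


# Example: sample a smooth function under both schemes.
samples_det = discretize(np.sin, scheme="deterministic")
samples_rnd = discretize(np.sin, scheme="random")
```

Either scheme produces an m-dimensional input vector; per the abstract, it is the sparsity of the approximator that makes recovery from such discrete samples stable.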