Deep-and-Wide Learning: Enhancing Data-Driven Inference via Synergistic Learning of Inter- and Intra-Data Representations

πŸ“… 2025-01-28
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Deep neural networks (DNNs) depend heavily on large-scale data and computational resources, leading to severe performance degradation in few-shot settings. To address this, the authors propose deep-and-wide learning (DWL), a framework built on an intra-/inter-data dual-representation co-learning paradigm. DWL leverages Bayesian low-dimensional statistical modeling to extract cross-sample commonalities while preserving high-dimensional instance-specific features; these complementary representations are jointly optimized via a dual-interactive-channel network (D-Net). Empirically, DWL substantially reduces reliance on big data and heavy compute, achieving state-of-the-art accuracy on diverse few-shot classification and regression benchmarks while cutting computational overhead by one to two orders of magnitude compared to existing methods. By unifying statistical generalization with instance-level expressivity, DWL offers a paradigm for robust, resource-efficient modeling in data- and compute-constrained scenarios.

πŸ“ Abstract
Advancements in deep learning are revolutionizing science and engineering. The immense success of deep learning is largely due to its ability to extract essential high-dimensional (HD) features from input data and make inference decisions based on this information. However, current deep neural network (DNN) models face several challenges, such as the requirement of extensive amounts of data and computational resources. Here, we introduce a new learning scheme, referred to as deep-and-wide learning (DWL), to systematically capture features not only within individual input data (intra-data features) but also across the data (inter-data features). Furthermore, we propose a dual-interactive-channel network (D-Net) to realize DWL, which leverages our Bayesian formulation of low-dimensional (LD) inter-data feature extraction and its synergistic interaction with the conventional HD representation of the dataset, for substantially enhanced computational efficiency and inference. The proposed technique has been applied to data across various disciplines for both classification and regression tasks. Our results demonstrate that DWL surpasses state-of-the-art DNNs in accuracy by a substantial margin with limited training data and improves computational efficiency by order(s) of magnitude. The proposed DWL strategy could dramatically alter data-driven learning techniques, including emerging large foundation models, and offers significant insights into the evolving field of AI.
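To make the intra-/inter-data distinction concrete, here is a minimal NumPy sketch of the dual-representation idea: each sample keeps its own high-dimensional (intra-data) features, while a low-dimensional (inter-data) summary is estimated across the dataset with a simple Bayesian shrinkage of the mean and shared by every sample. This is an illustrative toy, not the paper's D-Net; the function names, the conjugate-Gaussian shrinkage, and the concatenation scheme are assumptions for exposition.

```python
import numpy as np

def inter_data_features(X, prior_mean=0.0, prior_strength=1.0):
    """Low-dimensional inter-data statistics: a Bayesian shrinkage
    estimate of the mean (toward a prior) plus the per-feature spread.
    Illustrative only -- not the paper's exact formulation."""
    n = X.shape[0]
    sample_mean = X.mean(axis=0)
    # Posterior mean under a conjugate Gaussian prior on the mean:
    # weighted average of prior mean and sample mean.
    post_mean = (prior_strength * prior_mean + n * sample_mean) / (prior_strength + n)
    spread = X.std(axis=0)
    return np.concatenate([post_mean, spread])  # shared across all samples

def dual_representation(X):
    """Pair each sample's high-dimensional (intra-data) features with
    the dataset-level low-dimensional (inter-data) summary."""
    shared = inter_data_features(X)
    return np.hstack([X, np.tile(shared, (X.shape[0], 1))])

# Toy dataset: 8 samples, 4 intra-data dimensions.
X = np.random.default_rng(0).normal(size=(8, 4))
Z = dual_representation(X)
# Each row of Z: 4 intra-data dims + 8 shared inter-data dims.
```

In the paper's framing, the shared inter-data channel is what lets the model generalize from few samples, while the per-sample channel preserves instance-level expressivity; here the two are merely concatenated, whereas D-Net lets the two channels interact during training.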
Problem

Research questions and friction points this paper is trying to address.

Deep Neural Networks
Data Scarcity
Computational Resources
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep-and-Wide Learning (DWL)
Dual-Interactive-Channel Network (D-Net)
Efficiency and Accuracy in Limited Data
πŸ”Ž Similar Papers
No similar papers found.