On the Dataless Training of Neural Networks

📅 2025-10-29
🤖 AI Summary
This paper surveys the use of neural networks for optimization in settings with no training data, notably combinatorial optimization, inverse problems, and PDE solving, where labeled data are scarce or unavailable. Rather than learning from samples, the dataless approach re-parameterizes the optimization variables through a purpose-chosen architecture (MLP, CNN, GNN, or quadratic neural network) and minimizes the problem's objective directly over the network's weights. The survey formally defines this dataless setting, categorizes existing methods as architecture-agnostic or architecture-specific according to how a single problem instance is encoded onto the network, and clarifies how the setting differs from related concepts such as zero-shot learning, one-shot learning, lifting in optimization, and over-parameterization, with applications ranging from combinatorial optimization to medical image reconstruction and other scientific computing tasks.

📝 Abstract
This paper surveys studies on the use of neural networks for optimization in the training-data-free setting. Specifically, we examine the dataless application of neural network architectures in optimization by re-parameterizing problems using fully connected (or MLP), convolutional, graph, and quadratic neural networks. Although MLPs were used to solve linear programs a few decades ago, this approach has recently gained increasing attention due to its promising results across diverse applications, including those based on combinatorial optimization, inverse problems, and partial differential equations. The motivation for this setting stems from two key (possibly overlapping) factors: (i) data-driven learning approaches are still underdeveloped and have yet to demonstrate strong results, as seen in combinatorial optimization, and (ii) the availability of training data is inherently limited, such as in medical image reconstruction and other scientific applications. In this paper, we define the dataless setting and categorize it into two variants based on how a problem instance -- defined by a single datum -- is encoded onto the neural network: (i) architecture-agnostic methods and (ii) architecture-specific methods. Additionally, we discuss similarities and clarify distinctions between the dataless neural network (dNN) settings and related concepts such as zero-shot learning, one-shot learning, lifting in optimization, and over-parameterization.
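To make the re-parameterization idea concrete, here is a minimal toy sketch (not taken from the paper): instead of minimizing a simple objective f(x) over x directly, we express x as the output of a tiny one-hidden-layer MLP fed a fixed input, and run gradient descent on the network's weights. No training data are involved; the objective itself is the loss. All names, the objective, and the network shape are illustrative assumptions, and gradients are taken by central finite differences to keep the sketch dependency-free.

```python
import math
import random

# Hypothetical dataless re-parameterization sketch (not the paper's code):
# minimize f(x) = (x0 - 3)^2 + (x1 + 1)^2 over x in R^2, but optimize the
# weights theta of a tiny MLP whose output *is* x, given a fixed input Z.
random.seed(0)
Z = 1.0   # fixed scalar input; there is no training dataset
H = 4     # hidden width

def unpack(theta):
    # theta layout: W1 (H), b1 (H), W2 (2*H), b2 (2)
    W1 = theta[:H]
    b1 = theta[H:2 * H]
    W2 = theta[2 * H:4 * H]
    b2 = theta[4 * H:4 * H + 2]
    return W1, b1, W2, b2

def forward(theta):
    # x = W2 @ tanh(W1 * Z + b1) + b2  -- the network output is the variable
    W1, b1, W2, b2 = unpack(theta)
    h = [math.tanh(W1[i] * Z + b1[i]) for i in range(H)]
    return [sum(W2[j * H + i] * h[i] for i in range(H)) + b2[j]
            for j in range(2)]

def objective(theta):
    # The optimization problem itself plays the role of the loss function.
    x = forward(theta)
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

def num_grad(theta, eps=1e-5):
    # Central finite differences, to avoid any autodiff dependency.
    g = []
    for i in range(len(theta)):
        t = theta[:]
        t[i] += eps
        fp = objective(t)
        t[i] -= 2 * eps
        fm = objective(t)
        g.append((fp - fm) / (2 * eps))
    return g

theta = [random.uniform(-0.5, 0.5) for _ in range(4 * H + 2)]
lr = 0.05
for _ in range(2000):
    g = num_grad(theta)
    theta = [p - lr * gi for p, gi in zip(theta, g)]

x = forward(theta)
print(x)  # should approach the minimizer (3, -1)
```

For this convex toy problem the re-parameterization adds nothing over optimizing x directly; the surveyed work targets settings (combinatorial relaxations, inverse problems, PDEs) where the architecture's inductive bias and over-parameterization shape the optimization landscape in useful ways.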
Problem

Research questions and friction points this paper is trying to address.

Surveying neural networks for optimization without training data
Examining dataless applications across diverse neural architectures
Addressing limitations of data-driven methods in scientific applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural networks re-parameterize optimization problems without data
Uses MLP, convolutional, graph, and quadratic neural architectures
Categorizes methods into architecture-agnostic and architecture-specific approaches