Cross-Learning from Scarce Data via Multi-Task Constrained Optimization

📅 2025-11-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address inaccurate parameter estimation and poor generalization in few-shot learning, this paper proposes a multi-task cross-learning framework that jointly estimates deterministic parameters of related tasks as an optimization problem with explicit similarity constraints—enabling knowledge transfer while allowing controlled inter-task parameter divergence. Under Gaussian assumptions, theoretical analysis coupled with empirical validation demonstrates that the method effectively fuses multi-source information, improving both parameter inference accuracy and robustness. Key contributions include: (1) the first explicit incorporation of parameter similarity into multi-task constrained optimization; and (2) theoretically guaranteed reliable inference under few-shot conditions. Extensive experiments on real-world tasks—including image classification and infectious disease spread prediction—show significant improvements over state-of-the-art baselines, confirming the method’s effectiveness and cross-domain applicability.

Technology Category

Application Category

📝 Abstract
A learning task, understood as the problem of fitting a parametric model from supervised data, fundamentally requires the dataset to be large enough to be representative of the underlying distribution of the source. When data is limited, the learned models fail generalize to cases not seen during training. This paper introduces a multi-task emph{cross-learning} framework to overcome data scarcity by jointly estimating emph{deterministic} parameters across multiple, related tasks. We formulate this joint estimation as a constrained optimization problem, where the constraints dictate the resulting similarity between the parameters of the different models, allowing the estimated parameters to differ across tasks while still combining information from multiple data sources. This framework enables knowledge transfer from tasks with abundant data to those with scarce data, leading to more accurate and reliable parameter estimates, providing a solution for scenarios where parameter inference from limited data is critical. We provide theoretical guarantees in a controlled framework with Gaussian data, and show the efficiency of our cross-learning method in applications with real data including image classification and propagation of infectious diseases.
Problem

Research questions and friction points this paper is trying to address.

Overcoming data scarcity in learning tasks via multi-task optimization
Enabling knowledge transfer from data-rich to data-poor tasks
Improving parameter estimation accuracy with limited training data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-task cross-learning framework for data scarcity
Constrained optimization for joint parameter estimation
Knowledge transfer from abundant to scarce data tasks
🔎 Similar Papers
L
Leopoldo Agorio
Department of Electrical Engineering, School of Engineering, Universidad de la República, Montevideo, 11300, Uruguay
J
Juan Cerviño
Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Miguel Calvo-Fullana
Miguel Calvo-Fullana
Universitat Pompeu Fabra
Autonomous SystemsWireless CommunicationOptimizationRobotics
Alejandro Ribeiro
Alejandro Ribeiro
University of Pennsylvania
Signal processingNetwork TheoryOptimization
J
Juan Andrés Bazerque
Department of Electrical Engineering, School of Engineering, Universidad de la República, Montevideo, 11300, Uruguay