Ensemble Prediction of Task Affinity for Efficient Multi-Task Learning

📅 2026-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high computational cost of exhaustively evaluating all possible task combinations to identify optimal joint training strategies in multi-task learning. To this end, the authors propose ETAP, a framework that combines a gradient-similarity-based linear task affinity score with a data-driven, nonlinear residual-correction predictor, yielding a scalable ensemble model that captures complex nonlinear inter-task relationships. This approach improves the accuracy of predicting multi-task performance gains and enables more efficient and effective task grouping strategies. Empirical evaluations across multiple benchmark datasets demonstrate that ETAP consistently outperforms existing methods in both prediction fidelity and resulting model performance.

📝 Abstract
A fundamental problem in multi-task learning (MTL) is identifying groups of tasks that should be learned together. Since training MTL models for all possible combinations of tasks is prohibitively expensive for large task sets, a crucial component of efficient and effective task grouping is predicting whether a group of tasks would benefit from learning together, measured as per-task performance gain over single-task learning. In this paper, we propose ETAP (Ensemble Task Affinity Predictor), a scalable framework that integrates principled and data-driven estimators to predict MTL performance gains. First, we consider the gradient-based updates of shared parameters in an MTL model to measure the affinity between a pair of tasks as the similarity between the parameter updates based on these tasks. This linear estimator, which we call affinity score, naturally extends to estimating affinity within a group of tasks. Second, to refine these estimates, we train predictors that apply non-linear transformations and correct residual errors, capturing complex and non-linear task relationships. We train these predictors on a limited number of task groups for which we obtain ground-truth gain values via multi-task learning for each group. We demonstrate on benchmark datasets that ETAP improves MTL gain prediction and enables more effective task grouping, outperforming state-of-the-art baselines across diverse application domains.
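As a rough illustration of the linear estimator described in the abstract, the pairwise affinity can be computed as the cosine similarity between the gradient updates that two tasks induce on the shared parameters, and extended to a task group by aggregating over pairs. This is a minimal sketch under stated assumptions: the function names and the pair-averaging aggregation are illustrative, and the paper's exact formulation may differ.

```python
import numpy as np

def affinity_score(grad_a: np.ndarray, grad_b: np.ndarray) -> float:
    """Cosine similarity between the shared-parameter gradient updates
    induced by two tasks (illustrative formulation)."""
    denom = np.linalg.norm(grad_a) * np.linalg.norm(grad_b)
    if denom == 0.0:
        return 0.0
    return float(np.dot(grad_a, grad_b) / denom)

def group_affinity(grads: list[np.ndarray]) -> float:
    """Extend the pairwise score to a group of tasks by averaging over
    all pairs (one plausible extension; the paper's aggregation may differ)."""
    n = len(grads)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(affinity_score(grads[i], grads[j]) for i, j in pairs) / len(pairs)
```

Tasks whose gradients point in the same direction score near +1 (likely to benefit from joint training under this heuristic), while conflicting gradients score near -1.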
Problem

Research questions and friction points this paper is trying to address.

multi-task learning
task grouping
performance gain prediction
task affinity
efficient learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-task learning
task affinity prediction
gradient similarity
ensemble prediction
non-linear correction
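The ensemble idea named above (a linear affinity estimate refined by a learned nonlinear residual correction) can be sketched in miniature as follows. This is an assumption-laden toy: the data values are invented, and a simple quadratic fit stands in for the paper's trained residual predictors.

```python
import numpy as np

# Toy data: linear affinity scores for a few task groups and the
# ground-truth MTL gains measured by actually training each group.
# (Values are illustrative, not from the paper.)
affinity = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
true_gain = np.array([0.02, 0.10, 0.22, 0.40, 0.65])  # nonlinear in affinity

# Stage 1: the linear estimator alone (here, the identity mapping).
linear_pred = affinity

# Stage 2: fit a nonlinear corrector to the residual errors of stage 1.
residual = true_gain - linear_pred
corrector = np.polyfit(affinity, residual, deg=2)  # quadratic residual model

def ensemble_predict(score: float) -> float:
    """Ensemble prediction: linear affinity plus learned residual correction
    (illustrative helper, not the paper's API)."""
    return float(score + np.polyval(corrector, score))
```

On this toy data, the corrected prediction lands much closer to the ground-truth gain than the raw linear score, which mirrors the motivation for training residual predictors on a limited set of ground-truth task groups.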
Authors
Afiya Ayman (Pennsylvania State University)
Ayan Mukhopadhyay (College of William & Mary)
Aron Laszka (Assistant Professor, Pennsylvania State University; Artificial Intelligence, Machine Learning, Cyber-Physical Systems, Cybersecurity)