🤖 AI Summary
In heterogeneous multi-task linear regression, tasks exhibit distinct yet partially overlapping sparsity patterns (support sets) and heterogeneous nonzero coefficient values, which complicates joint modeling. Method: We propose a heterogeneous sparse multi-task learning (MTL) framework that allows support sets and nonzero coefficient values to be shared independently across tasks. Our approach rests on a novel mixed-integer programming (MIP) formulation, for which we develop customized algorithms: scalable block coordinate descent and combinatorial local search routines that produce high-quality approximate solutions, together with an exact optimization algorithm that attains global optimality. Contribution/Results: We establish theoretical guarantees showing that sharing support information across tasks improves variable selection. Simulations and two biomedical case studies demonstrate gains over existing sparse MTL methods in variable selection and prediction accuracy. The corresponding open-source R package, sMTL, is publicly available on CRAN.
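To make the MIP idea concrete, the following is a minimal sketch of one formulation consistent with the description above. The notation here (per-task design X_k and response y_k, binary support indicators z_k, sparsity budget s, penalty weights lambda_beta and lambda_z) is an illustrative assumption, not the paper's exact formulation.

```latex
\min_{\{\beta_k,\, z_k\}_{k=1}^{K}} \;
\sum_{k=1}^{K} \frac{1}{2 n_k}\,\lVert y_k - X_k \beta_k \rVert_2^2
\;+\; \lambda_{\beta} \sum_{k < l} \lVert \beta_k - \beta_l \rVert_2^2
\;+\; \lambda_{z} \sum_{k < l} \lVert z_k - z_l \rVert_2^2

\text{subject to} \quad
\beta_{kj}\,(1 - z_{kj}) = 0, \qquad
z_k \in \{0,1\}^{p}, \qquad
\sum_{j=1}^{p} z_{kj} \le s, \qquad k = 1,\dots,K.
```

The binary vectors z_k encode each task's support, so the two penalty terms separately control how similar supports and how similar nonzero values are across tasks, matching the two sharing mechanisms described above.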
📝 Abstract
We consider a problem in Multi-Task Learning (MTL) where multiple linear models are jointly trained on a collection of datasets ("tasks"). A key novelty of our framework is that it allows both the sparsity pattern of regression coefficients and the values of nonzero coefficients to differ across tasks while still leveraging partially shared structure. Our methods borrow information across tasks by separately encouraging 1) coefficient supports and/or 2) nonzero coefficient values to be similar. This allows models to borrow strength during variable selection even when nonzero coefficient values differ across tasks. We propose a novel mixed-integer programming formulation for our estimator. We develop custom scalable algorithms based on block coordinate descent and combinatorial local search to obtain high-quality (approximate) solutions, and we additionally propose an exact optimization algorithm to obtain globally optimal solutions. We investigate the theoretical properties of our estimators, formally showing how they leverage shared support information across tasks to achieve better variable selection performance. We evaluate our methods in simulations and two biomedical applications, where the proposed approaches appear to outperform other sparse MTL methods in variable selection and prediction accuracy. We provide an accompanying R package, sMTL, on CRAN.
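For intuition only, the sketch below illustrates the block coordinate descent idea on simulated data with partially overlapping supports. It is a toy under assumed choices (a ridge-style coupling penalty lambda, a sparsity budget s, and iterative-hard-thresholding task updates); it is neither the authors' algorithm nor the sMTL package API.

```r
# Toy block-coordinate-descent sketch for heterogeneous sparse MTL.
# NOT the sMTL package: an illustration with an assumed coupling penalty
# lambda and sparsity budget s. Each task update is iterative hard
# thresholding (keep the s largest-magnitude coefficients) on a
# least-squares objective coupled to the other tasks' coefficients.

set.seed(1)
K <- 3; n <- 100; p <- 20; s <- 4
X <- lapply(1:K, function(k) matrix(rnorm(n * p), n, p))
# Partially overlapping supports with heterogeneous nonzero values
true_beta <- lapply(1:K, function(k) {
  b <- numeric(p)
  b[c(1:3, 3 + k)] <- rnorm(4, mean = 2)   # features 1:3 shared, one task-specific
  b
})
y <- lapply(1:K, function(k) X[[k]] %*% true_beta[[k]] + rnorm(n))

hard_threshold <- function(b, s) {          # keep the s largest |entries|
  keep <- order(abs(b), decreasing = TRUE)[seq_len(s)]
  out <- numeric(length(b)); out[keep] <- b[keep]; out
}

fit_bcd <- function(X, y, s, lambda = 1, iters = 200) {
  K <- length(X); p <- ncol(X[[1]])
  beta <- replicate(K, numeric(p), simplify = FALSE)
  for (it in 1:iters) {
    for (k in 1:K) {                        # block update for task k
      others <- Reduce(`+`, beta[-k]) / (K - 1)   # mean of other tasks
      nk <- nrow(X[[k]])
      step <- 1 / (norm(X[[k]], "2")^2 / nk + lambda)   # Lipschitz step
      grad <- -crossprod(X[[k]], y[[k]] - X[[k]] %*% beta[[k]]) / nk +
        lambda * (beta[[k]] - others)
      beta[[k]] <- hard_threshold(beta[[k]] - step * as.numeric(grad), s)
    }
  }
  beta
}

fit <- fit_bcd(X, y, s)
# Compare estimated vs. true supports per task
lapply(1:K, function(k) which(fit[[k]] != 0))
lapply(1:K, function(k) which(true_beta[[k]] != 0))
```

The estimator proposed in the paper additionally uses combinatorial local search to escape poor supports and an exact MIP-based algorithm for global optimality; for the actual interface, see the sMTL package documentation on CRAN.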