🤖 AI Summary
This work addresses the lack of a precise theoretical characterization of when and how auxiliary data improve generalization in transfer learning. In two canonical linear settings—ordinary least squares regression and under-parameterized linear neural networks—the authors derive exact closed-form expressions and non-asymptotic bounds for the main task's generalization error. For linear regression, a bias-variance decomposition yields necessary and sufficient conditions for auxiliary tasks to improve generalization, together with a tractable optimization framework for computing globally optimal task weights and a consistent empirical estimator for those weights. For linear neural networks, a new column-wise low-rank perturbation analysis of random matrices establishes the first non-vacuous sufficient condition for beneficial auxiliary learning in this setting. The theoretical findings are validated on synthetic data.
📝 Abstract
In transfer learning, the learner leverages auxiliary data to improve generalization on a main task. However, a precise theoretical understanding of when and how auxiliary data help remains incomplete. We provide new insights into this issue in two canonical linear settings: ordinary least squares regression and under-parameterized linear neural networks. For linear regression, we derive exact closed-form expressions for the expected generalization error via bias-variance decomposition, yielding necessary and sufficient conditions for auxiliary tasks to improve generalization on the main task. We also derive globally optimal task weights as outputs of solvable optimization programs, with consistency guarantees for empirical estimates. For linear neural networks with shared representations of width $q \leq K$, where $K$ is the number of auxiliary tasks, we derive a non-asymptotic expectation bound on the generalization error, yielding the first non-vacuous sufficient condition for beneficial auxiliary learning in this setting, as well as principled directions for task weight curation. We achieve this by proving a new column-wise low-rank perturbation bound for random matrices, which improves upon existing bounds by preserving fine-grained column structures. Our results are verified on synthetic data simulated with controlled parameters.
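To make the weighted linear-regression setting concrete, here is a minimal Monte-Carlo sketch of auxiliary learning via weighted OLS on synthetic data. All dimensions, noise levels, the task-similarity level, and the specific weight value are illustrative assumptions, not values from the paper; the paper derives the optimal weights in closed form rather than by simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes and task similarity (assumptions, not from the paper)
d, n_main, n_aux, noise = 10, 40, 200, 1.0
beta_main = rng.normal(size=d)
beta_aux = beta_main + 0.05 * rng.normal(size=d)  # closely related auxiliary task

def draw(n, beta):
    """Sample n observations with isotropic Gaussian features."""
    X = rng.normal(size=(n, d))
    return X, X @ beta + noise * rng.normal(size=n)

def weighted_ols_risk(alpha, trials=300):
    """Monte-Carlo estimate of the main-task excess risk of OLS fit on
    the main data pooled with alpha-weighted auxiliary data."""
    risks = []
    for _ in range(trials):
        X_m, y_m = draw(n_main, beta_main)
        X_a, y_a = draw(n_aux, beta_aux)
        # Weighting a task's squared loss by alpha == scaling its rows by sqrt(alpha)
        X = np.vstack([X_m, np.sqrt(alpha) * X_a])
        y = np.concatenate([y_m, np.sqrt(alpha) * y_a])
        beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        # For isotropic Gaussian features, excess risk = ||beta_hat - beta_main||^2
        risks.append(np.sum((beta_hat - beta_main) ** 2))
    return float(np.mean(risks))

print(f"main data only   : {weighted_ols_risk(0.0):.3f}")
print(f"with aux, a = 0.5: {weighted_ols_risk(0.5):.3f}")
```

Because the auxiliary task here is close to the main one, the auxiliary data shrink the variance term faster than they inflate the bias term, so the weighted estimator attains lower main-task risk; making `beta_aux` more distant reverses this, which is exactly the trade-off the paper's necessary-and-sufficient conditions characterize.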