🤖 AI Summary
This paper addresses the risk of negative transfer in supervised transfer learning arising from unknown source-data quality. We propose an adaptive alternating-sampling stochastic gradient descent (SGD) algorithm that requires no prior knowledge. Through a dynamic subsampling mechanism, it automatically adjusts the sampling ratio between source and target domains: increasing source-domain utilization when source data is informative, and shifting toward target-domain samples to mitigate negative transfer when source data quality is poor. Innovatively, we integrate mixed-sampling SGD with a sequence of constrained convex optimization problems, yielding an end-to-end analyzable transfer learning framework. We theoretically establish a convergence rate of $O(1/\sqrt{T})$. Extensive experiments on both synthetic and real-world datasets demonstrate the method's effectiveness and robustness against heterogeneous or low-quality source data.
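To make the alternating-sampling idea concrete, here is a minimal illustrative sketch (not the paper's actual algorithm or its constrained-convex-program machinery): SGD for linear regression with square loss that, at each step, draws a sample from the source or the target according to a probability `p`, and periodically adjusts `p` with a simple heuristic that shifts sampling toward the target when source data appears less helpful. The function name, the adaptation rule, and all constants are assumptions for illustration only.

```python
import numpy as np

def mixed_sample_sgd(Xs, ys, Xt, yt, T=2000, lr=0.1, seed=0):
    """Illustrative mixed-sample SGD sketch for linear regression with
    square loss. At each step, a source sample is drawn with probability p,
    otherwise a target sample; p is adapted heuristically."""
    rng = np.random.default_rng(seed)
    w = np.zeros(Xt.shape[1])
    p = 0.5  # initial probability of drawing a source sample (assumed)
    for t in range(1, T + 1):
        if rng.random() < p:
            i = rng.integers(len(ys))
            x, y = Xs[i], ys[i]
        else:
            i = rng.integers(len(yt))
            x, y = Xt[i], yt[i]
        # stochastic gradient of the square loss for a linear model
        grad = (x @ w - y) * x
        # 1/sqrt(t) step size, in line with the O(1/sqrt(T)) rate
        w -= (lr / np.sqrt(t)) * grad
        # crude adaptation (an assumption, not the paper's mechanism):
        # compare mean source vs. target loss and drift p toward the
        # target when the source looks unhelpful
        if t % 100 == 0:
            ls = np.mean((Xs @ w - ys) ** 2)
            lt = np.mean((Xt @ w - yt) ** 2)
            p = 0.9 * p + 0.1 * (0.8 if ls <= lt else 0.2)
    return w, p
```

On noiseless synthetic data where source and target share the same linear model, this sketch recovers the true weights; the paper's actual procedure replaces the ad hoc update of `p` with a principled subsampling rule backed by transfer guarantees.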
📄 Abstract
Theoretical works on supervised transfer learning (STL) -- where the learner has access to labeled samples from both source and target distributions -- have for the most part focused on statistical aspects of the problem, while efficient optimization has received less attention. We consider the problem of designing an SGD procedure for STL that alternates sampling between source and target data, while maintaining statistical transfer guarantees without prior knowledge of the quality of the source data. A main algorithmic difficulty is in understanding how to design such an adaptive sub-sampling mechanism at each SGD step, to automatically gain from the source when it is informative, or bias towards the target and avoid negative transfer when the source is less informative.
We show that such a mixed-sample SGD procedure is feasible for general prediction tasks with convex losses, rooted in tracking an abstract sequence of constrained convex programs that serve to maintain the desired transfer guarantees.
We instantiate these results in the concrete setting of linear regression with square loss, and show that the procedure converges, at a $1/\sqrt{T}$ rate, to a solution whose statistical performance on the target is adaptive to the a priori unknown quality of the source. Experiments with synthetic and real datasets support the theory.