🤖 AI Summary
Large language models (LLMs) often rely on heterogeneous datasets for transfer learning to unseen tasks, yet conventional methods offer little ability to predict cross-task performance. Method: We propose an interpretable transfer-learning framework grounded in transfer matrices and dimensionality reduction, systematically modeling transfer dynamics across 20+ tasks for 10 LLMs via principal component analysis and multi-task fine-tuning experiments. Contribution/Results: We find that transfer efficacy is governed primarily by latent statistical properties of the source datasets, including class-distribution bias, generation-length preference, and linguistic style, rather than by superficial task similarity or annotation quality, challenging prevailing interpretability assumptions. The study identifies key latent variables that substantially improve the predictability and controllability of LLM adaptation to novel tasks. This work provides both theoretical foundations and practical pathways for robust, open-scenario adaptation of foundation models.
📝 Abstract
Large language models are increasingly deployed across diverse applications, including tasks they never encountered during training; enumerating and obtaining high-quality training data for every such task is infeasible. We therefore often must rely on transfer learning from datasets with differing characteristics and anticipate out-of-distribution requests. Motivated by this practical need, we propose an analysis framework that builds a transfer-learning matrix and applies dimensionality reduction to dissect these cross-task interactions. We train and analyze 10 models to identify latent abilities (e.g., Reasoning, Sentiment Classification, NLU, Arithmetic) and to uncover the side effects of transfer learning. Our findings reveal that performance improvements often defy explanations based on surface-level dataset similarity or source-data quality. Instead, hidden statistical factors of the source dataset, such as class distribution and generation-length proclivities, alongside specific linguistic features, are more influential. This work offers insights into the complex dynamics of transfer learning, paving the way for more predictable and effective LLM adaptation.
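The transfer-matrix analysis described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the dataset names, task names, and random performance deltas are all hypothetical placeholders, and PCA is performed via a plain SVD on the column-centered matrix.

```python
import numpy as np

# Hypothetical transfer matrix: rows = source fine-tuning datasets,
# columns = target evaluation tasks; each entry is a performance delta
# (fine-tuned score minus base-model score). Values here are random
# stand-ins for illustration only.
rng = np.random.default_rng(0)
sources = ["source_math", "source_sentiment", "source_qa", "source_instruct"]
targets = ["arithmetic", "sentiment", "nlu", "reasoning", "summarization"]
T = rng.normal(0.0, 0.05, size=(len(sources), len(targets)))

# PCA via SVD: center each target-task column, then decompose to find
# latent axes along which transfer effects co-vary across tasks.
T_centered = T - T.mean(axis=0, keepdims=True)
U, S, Vt = np.linalg.svd(T_centered, full_matrices=False)

explained = S**2 / np.sum(S**2)  # variance ratio per latent component
source_scores = U * S            # source datasets projected into latent space
target_loadings = Vt             # how each target task loads on each axis

print("explained variance ratios:", np.round(explained, 3))
```

Inspecting `target_loadings` shows which target tasks move together under fine-tuning (candidate latent abilities), while `source_scores` places each source dataset along those axes.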