🤖 AI Summary
Building strong, specialized narrow-domain AI systems faces two fundamental challenges: (i) learning certain narrow skills can require training on a broad data distribution, which implicitly provides a hierarchical curriculum; and (ii) domain-specific capabilities in large models are often not neatly localized in parameter space, which hinders transferring them into smaller models via structured compression. Method: Using a synthetic task, the authors show that broad data is sometimes necessary for learning narrow skills within a distribution, because hierarchically dependent skills benefit from the curriculum that broad data induces. They then investigate a regularized pruning approach that uses an alignment objective to concentrate target skills in prunable components while unlearning unnecessary skills. Contribution/Results: Experiments demonstrate that pruning-based methods can outperform knowledge distillation for skill transfer. Synthetic tasks and skill-localization analysis further confirm that skills are often not perfectly localized to prunable components and underscore the role of the curriculum induced by broad data.
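The hierarchical-curriculum effect described above can be illustrated with a toy data sampler (a hypothetical construction, not the paper's actual synthetic task): a composite skill depends on a prerequisite skill, and only the broad data stream exposes the prerequisite on its own, giving the model an implicit curriculum.

```python
import random

# Toy hierarchy: "double" is a prerequisite skill; "double_twice" composes it.
# The broad stream mixes both tasks, so the prerequisite can be learned first;
# the narrow stream contains only the composite target skill.
def sample_broad():
    x = random.randint(0, 9)
    if random.random() < 0.5:
        return ("double", x, 2 * x)        # prerequisite skill on its own
    return ("double_twice", x, 4 * x)      # composite target skill

def sample_narrow():
    x = random.randint(0, 9)
    return ("double_twice", x, 4 * x)      # composite only, no curriculum
```

A model trained only on `sample_narrow` must discover the composition in one step, whereas the broad stream lets it master `double` first and then build on it.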
📝 Abstract
We study the problem of creating strong, yet narrow, AI systems. While recent AI progress has been driven by the training of large general-purpose foundation models, the creation of smaller models specialized for narrow domains could be valuable for both efficiency and safety. In this work, we explore two challenges involved in creating such systems, having to do with basic properties of how neural networks learn and structure their representations. The first challenge regards when it is possible to train narrow models from scratch. Through experiments on a synthetic task, we find that it is sometimes necessary to train networks on a wide distribution of data to learn certain narrow skills within that distribution. This effect arises when skills depend on each other hierarchically, and training on a broad distribution introduces a curriculum which substantially accelerates learning. The second challenge regards how to transfer particular skills from large general models into small specialized models. We find that model skills are often not perfectly localized to a particular set of prunable components. However, we find that methods based on pruning can still outperform distillation. We investigate the use of a regularization objective to align desired skills with prunable components while unlearning unnecessary skills.
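One way to read the regularization objective mentioned at the end of the abstract is as a gated-component penalty: keep the gates of components serving the target skill near 1, and push all other gates toward 0 so those components become prunable. The sketch below is a minimal illustration under that assumption; `skill_alignment_loss`, the gate vector, and the target mask are all hypothetical names, not the paper's actual formulation.

```python
import numpy as np

def skill_alignment_loss(task_loss, gates, target_mask, lam=0.1):
    """Hypothetical regularizer: L = task loss + lam * alignment penalty.

    gates       -- per-component gate values in [0, 1]
    target_mask -- True for components we want to retain for the target skill
    """
    keep = gates[target_mask]      # components the target skill should use
    drop = gates[~target_mask]     # components we want driven to prunable
    # L1 on unwanted gates encourages sparsity (prunability); a quadratic
    # term keeps wanted gates close to fully on (skill retention).
    penalty = np.sum(np.abs(drop)) + np.sum((1.0 - keep) ** 2)
    return task_loss + lam * penalty
```

When the wanted gates are exactly 1 and the unwanted gates exactly 0, the penalty vanishes and the loss reduces to the task loss; any leakage of the skill into the to-be-pruned components is taxed, which is one plausible mechanism for aligning skills with prunable structure.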