🤖 AI Summary
To address high fine-tuning costs, neglect of model lineage, and cold-start challenges in LLM performance prediction, this paper proposes Lineage-Regularized Matrix Factorization (LRMF), the first framework to encode model lineage (i.e., parent-child derivation or merging relationships) as a structural prior for training-free cross-model performance estimation. LRMF constructs a lineage graph from Hugging Face metadata and incorporates a graph Laplacian regularizer to capture multi-hop ancestral dependencies, unifying collaborative filtering with spectral graph theory. Evaluated on 2,934 publicly available models and over 21,000 benchmark instances, LRMF achieves 7–10 percentage points higher prediction correlation than state-of-the-art baselines. Crucially, it remains accurate for novel models with zero or only a few observed evaluations, substantially mitigating the cold-start problem.
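The summary above describes LRMF only at a high level. As a rough illustration of what a graph-Laplacian-regularized matrix factorization of a partially observed model-by-benchmark score matrix could look like, here is a minimal sketch; the function name `lrmf`, the squared-error objective, and all hyperparameters are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of Laplacian-regularized matrix factorization.
# Illustrative assumptions throughout; not the paper's exact objective.
import numpy as np

def lrmf(S, mask, L, rank=8, lam=0.1, gamma=1.0, lr=0.01, iters=2000, seed=0):
    """Factor a partially observed model-by-benchmark score matrix S.

    S    : (n_models, n_tasks) scores; entries where mask == 0 are ignored
    mask : (n_models, n_tasks) 1 where a score is observed, 0 otherwise
    L    : (n_models, n_models) graph Laplacian of the model lineage graph
    """
    rng = np.random.default_rng(seed)
    n_models, n_tasks = S.shape
    U = 0.1 * rng.standard_normal((n_models, rank))  # model embeddings
    V = 0.1 * rng.standard_normal((n_tasks, rank))   # task/instance embeddings

    for _ in range(iters):
        R = mask * (U @ V.T - S)  # residual on observed entries only
        # Gradients of 0.5*||mask*(U V^T - S)||^2 + 0.5*lam*(||U||^2 + ||V||^2)
        #              + 0.5*gamma * tr(U^T L U)    <- lineage smoothness term
        grad_U = R @ V + lam * U + gamma * (L @ U)
        grad_V = R.T @ U + lam * V
        U -= lr * grad_U
        V -= lr * grad_V
    return U, V
```

The Laplacian term penalizes embedding differences between connected models, so a model's latent factors are smoothed toward those of its ancestors and descendants in the lineage graph.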
📝 Abstract
Accurately forecasting the performance of Large Language Models (LLMs) before extensive fine-tuning or merging can substantially reduce both computational expense and development time. Although prior approaches such as scaling laws account for global factors like parameter count or the number of training tokens, they often overlook explicit lineage relationships, i.e., which models are derived or merged from which parents. In this work, we propose a novel Lineage-Regularized Matrix Factorization (LRMF) framework that encodes ancestral ties among LLMs via a graph Laplacian regularizer. By leveraging multi-hop parent-child connections, LRMF consistently outperforms conventional matrix factorization and collaborative filtering methods in both instance-level and benchmark-level performance prediction. Our large-scale study includes 2,934 publicly available Hugging Face models and 21,000+ instances across 6 major benchmarks, showing that lineage constraints yield up to 7–10 percentage points higher correlation with actual performance compared to baselines. Moreover, LRMF effectively addresses the cold-start problem, providing accurate estimates for newly derived or merged models even with minimal data. This lineage-guided strategy thus offers a resource-efficient way to inform hyperparameter tuning, data selection, and model combination in modern LLM development.
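As a toy illustration of the cold-start claim in the abstract, the snippet below reuses the hypothetical `lrmf` sketch from above: a newly merged model with no observed scores still receives a full row of predictions, because the Laplacian term ties its embedding to those of its parents. The lineage graph, score values, and mask are fabricated purely for demonstration and are not from the paper.

```python
import numpy as np  # continues from the lrmf sketch above

# 4 models, 3 benchmarks; model 3 is a new merge of models 1 and 2 and has
# no observed scores at all (cold start). All numbers are made up.
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)  # undirected lineage adjacency
L = np.diag(A.sum(axis=1)) - A             # combinatorial graph Laplacian

S = np.array([[0.70, 0.60, 0.80],
              [0.50, 0.40, 0.60],
              [0.60, 0.50, 0.70],
              [0.00, 0.00, 0.00]])          # last row: no observations
mask = np.ones_like(S)
mask[3] = 0.0                               # hide the new model's scores

U, V = lrmf(S, mask, L, rank=2)
print(np.round(U @ V.T, 2))  # row 3 is predicted via its parents' embeddings
```

In this toy run, the new model's predicted row is pulled toward a blend of its parents' rows, which is the kind of behaviour a lineage prior is meant to encode; the actual LRMF study fits 2,934 models and 21,000+ benchmark instances rather than a toy matrix.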