🤖 AI Summary
To address the prohibitively high computational cost of computing log-likelihoods over large-scale text corpora when constructing language model maps, this paper proposes a variance-based importance resampling method that leverages cross-model log-likelihood variance. We introduce log-likelihood variance as a novel text discriminability metric, enabling adaptive subset selection that preserves fidelity in KL-divergence estimation. The method requires no additional training or annotation; only lightweight statistical computations across a few baseline models are needed. Experiments demonstrate that using merely ~50% of the original corpus achieves KL-estimation accuracy comparable to uniform sampling; furthermore, it substantially reduces the computational overhead of integrating new models into the map, facilitating efficient and scalable construction of language model embedding spaces. Our core contributions are: (i) a variance-driven importance metric grounded in statistical discriminability, and (ii) a KL-aware resampling paradigm that jointly optimizes estimation accuracy and computational efficiency.
📝 Abstract
We address the computational cost of constructing a model map, which embeds diverse language models into a common space for comparison via KL divergence. The map relies on log-likelihoods over a large text set, so the cost grows proportionally with the number of texts. To reduce this cost, we propose a resampling method that selects important texts with sampling weights proportional to the per-text variance of log-likelihoods across models. Our method significantly reduces the number of required texts while preserving the accuracy of KL divergence estimates. Experiments show that it achieves performance comparable to uniform sampling with about half as many texts, and that it also facilitates efficient incorporation of new models into an existing map. These results enable scalable and efficient construction of language model maps.
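The pipeline described above can be sketched in a few lines of NumPy. This is a minimal illustration under assumed inputs, not the paper's implementation: the log-likelihood matrix is synthetic, the model pair compared is arbitrary, and all variable names (`loglik`, `probs`, `kl_est`) are hypothetical. It shows the three steps the abstract names: compute per-text variance of log-likelihoods across models, resample texts with probability proportional to that variance, and form an importance-weighted KL estimate that corrects for the non-uniform sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical log-likelihood matrix: rows = baseline models, columns = texts.
# In practice each entry would come from scoring a text with a language model.
n_models, n_texts = 5, 10_000
loglik = rng.normal(size=(n_models, n_texts))

# Step 1: importance weights proportional to the cross-model variance of
# log-likelihoods for each text (the paper's discriminability metric).
var = loglik.var(axis=0)
probs = var / var.sum()

# Step 2: resample roughly half of the texts with replacement, weighted by
# discriminability, instead of scoring the full corpus.
m = n_texts // 2
idx = rng.choice(n_texts, size=m, replace=True, p=probs)

# Step 3: importance-weighted estimate of the KL-divergence surrogate between
# two models p and q (here rows 0 and 1). Dividing each sampled term by
# (n_texts * probs) makes the estimator unbiased w.r.t. the uniform average
# over the full text set.
diff = loglik[0] - loglik[1]          # log p(x) - log q(x) per text
kl_full = diff.mean()                  # full-corpus reference value
kl_est = np.mean(diff[idx] / (n_texts * probs[idx]))
```

Sampling with replacement keeps the importance correction simple; a without-replacement scheme would need different reweighting.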