Likelihood Variance as Text Importance for Resampling Texts to Map Language Models

📅 2025-05-21
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the prohibitively high computational cost of computing log-likelihoods over large-scale text corpora in language model mapping, this paper proposes a variance-based importance resampling method that leverages cross-model log-likelihood variance. Log-likelihood variance serves as a text discriminability metric, enabling adaptive subset selection that preserves fidelity in KL-divergence estimation. The method requires no additional training or annotation: only lightweight statistical computations across a few baseline models. Experiments demonstrate that using merely ~50% of the original corpus achieves KL-estimation accuracy comparable to uniform sampling; furthermore, it substantially reduces the computational overhead of integrating new models into the map, facilitating efficient and scalable construction of language model embedding spaces. Core contributions: (i) a variance-driven importance metric grounded in statistical discriminability, and (ii) a KL-aware resampling paradigm that jointly optimizes estimation accuracy and computational efficiency.

๐Ÿ“ Abstract
We address the computational cost of constructing a model map, which embeds diverse language models into a common space for comparison via KL divergence. The map relies on log-likelihoods over a large text set, making the cost proportional to the number of texts. To reduce this cost, we propose a resampling method that selects important texts with weights proportional to the variance of log-likelihoods across models for each text. Our method significantly reduces the number of required texts while preserving the accuracy of KL divergence estimates. Experiments show that it achieves comparable performance to uniform sampling with about half as many texts, and also facilitates efficient incorporation of new models into an existing map. These results enable scalable and efficient construction of language model maps.
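The resampling scheme described above can be sketched in a few lines. This is a minimal illustration, not the paper's code: it assumes log-likelihoods are precomputed as an `(n_models, n_texts)` array, draws texts with probability proportional to their cross-model log-likelihood variance, and uses standard importance weights `1/(n·p_i)` so the reduced subset still gives an unbiased estimate of the average log-likelihood gap between two models (all function and variable names are mine).

```python
import numpy as np

def variance_importance_resample(loglik, m, rng=None):
    """Pick m text indices with probability proportional to the
    per-text variance of log-likelihoods across models.

    loglik : (n_models, n_texts) array of log-likelihoods
    Returns (indices, importance_weights) for unbiased reweighting.
    """
    rng = np.random.default_rng(rng)
    var = loglik.var(axis=0)           # discriminability of each text
    p = var / var.sum()                # sampling probabilities
    n = loglik.shape[1]
    idx = rng.choice(n, size=m, replace=True, p=p)
    w = 1.0 / (n * p[idx])             # corrects for non-uniform sampling
    return idx, w

def kl_estimate(loglik_p, loglik_q, idx, w):
    """Importance-weighted estimate of the mean log-likelihood gap,
    E_text[log p(x) - log q(x)], over the resampled subset."""
    diff = loglik_p[idx] - loglik_q[idx]
    return float(np.mean(w * diff))
```

Because sampling probability scales with variance, texts on which the models disagree most are kept, while texts that all models score similarly (and which contribute little to distinguishing them) are dropped, which is how the paper retains KL accuracy with roughly half the corpus.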
Problem

Research questions and friction points this paper is trying to address.

Reduce computational cost of model map construction
Select important texts via likelihood variance resampling
Maintain KL divergence accuracy with fewer texts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Resampling texts by likelihood variance importance
Reduces required texts while preserving accuracy
Efficiently incorporates new models into map