Scaling Up Active Testing to Large Language Models

📅 2025-08-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Active testing for large language models (LLMs) has suffered from high computational overhead and low label efficiency. Method: This paper proposes a lightweight, efficient active-testing framework with two core innovations: (1) it uses in-context learning to construct a zero-shot surrogate model, requiring no training or parameter updates, and removes the dependence on target-LLM predictions during data acquisition; and (2) it introduces a single-run error estimator that quantifies evaluation confidence on the fly and guides sample selection. Contribution/Results: The method substantially reduces both labeling requirements and computational cost. On multiple benchmark tasks it matches or exceeds the evaluation accuracy of conventional active testing while using significantly fewer labeled samples. It is the first work to systematically demonstrate the feasibility, scalability, and practicality of active testing for large-scale language model evaluation.

📝 Abstract
Active testing enables label-efficient evaluation of models through careful data acquisition. However, its significant computational costs have previously undermined its use for large models. We show how it can be successfully scaled up to the evaluation of large language models (LLMs). In particular, we show that the surrogate model used to guide data acquisition can be constructed cheaply using in-context learning, does not require updating within an active-testing loop, and can be smaller than the target model. We even find we can make good data-acquisition decisions without computing predictions with the target model, and further introduce a single-run error estimator to assess how well active testing is working on the fly. We find that our approach is able to more effectively evaluate LLM performance with less data than current standard practices.
Problem

Research questions and friction points this paper is trying to address.

Scaling active testing for large language models efficiently
Reducing computational costs in model evaluation
Improving label efficiency in LLM performance assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses in-context learning for surrogate model
Eliminates target model predictions in acquisition
Introduces single-run error estimator
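
The acquisition-and-estimation idea behind active testing can be illustrated with a toy sketch. This is not the paper's implementation: the surrogate losses below are simulated stand-ins for the per-example losses a zero-shot in-context-learning surrogate would predict, and the estimator shown is a standard importance-weighted risk estimate, where labels are acquired in proportion to the surrogate's predicted loss and then reweighted to keep the estimate unbiased.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pool of unlabeled test inputs with hidden per-example losses
# (in practice these would only be revealed for acquired labels).
N = 1000
true_loss = rng.gamma(shape=2.0, scale=0.5, size=N)

# Simulated surrogate predictions: a noisy proxy for the true losses,
# standing in for what an in-context-learning surrogate might output.
surrogate_loss = true_loss * rng.lognormal(0.0, 0.3, size=N)

def active_test_estimate(true_loss, surrogate_loss, budget, rng):
    """Estimate mean loss by sampling points in proportion to the
    surrogate's predicted loss, then importance-reweighting so the
    estimate stays unbiased for the pool-average loss."""
    q = surrogate_loss / surrogate_loss.sum()    # acquisition distribution
    idx = rng.choice(len(true_loss), size=budget, replace=True, p=q)
    weights = 1.0 / (len(true_loss) * q[idx])    # importance weights
    return float(np.mean(weights * true_loss[idx]))

# With only 50 acquired labels the estimate tracks the full-pool risk,
# because the surrogate concentrates labeling on high-loss examples.
est = active_test_estimate(true_loss, surrogate_loss, budget=50, rng=rng)
print(f"active estimate: {est:.3f}  vs  true risk: {true_loss.mean():.3f}")
```

The better the surrogate's loss predictions, the lower the variance of this estimator: with a perfect surrogate every reweighted sample equals the pool-average loss exactly, which is why a cheap but well-calibrated surrogate can make a small labeling budget go a long way.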