Active Testing of Large Language Model via Multi-Stage Sampling

📅 2024-08-07
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
Evaluating large language models (LLMs) is hindered by scarce high-quality test data, prohibitive annotation costs, and the incompatibility of existing active testing methods with LLMs' task diversity and opaque internal mechanisms. Method: We propose AcTracer, the first multi-stage pool-based active testing framework for LLM evaluation. It integrates internal response characteristics (such as logit and attention uncertainty) with external task semantics (including prompt embeddings and cross-task similarity metrics) to jointly guide representative sample selection. Contribution/Results: Across diverse downstream tasks, AcTracer significantly reduces performance estimation variance, achieving up to 38.83% lower error than state-of-the-art baselines. With only 10% of the test set, it attains 98% of full-dataset evaluation accuracy, overcoming key applicability limitations of conventional active learning in LLM assessment.

📝 Abstract
Performance evaluation plays a crucial role in the development life cycle of large language models (LLMs). It estimates the model's capability, elucidates behavior characteristics, and facilitates the identification of potential issues and limitations, thereby guiding further improvement. Given that LLMs' diverse task-handling abilities stem from large volumes of training data, a comprehensive evaluation also necessitates abundant, well-annotated, and representative test data to assess LLM performance across various downstream tasks. However, the demand for high-quality test data often entails substantial time, computational resources, and manual efforts, sometimes causing the evaluation to be inefficient or impractical. To address these challenges, researchers propose active testing, which estimates the overall performance by selecting a subset of test data. Nevertheless, the existing active testing methods tend to be inefficient, even inapplicable, given the unique new challenges of LLMs (e.g., diverse task types, increased model complexity, and unavailability of training data). To mitigate such limitations and expedite the development cycle of LLMs, in this work, we introduce AcTracer, an active testing framework tailored for LLMs that strategically selects a small subset of test data to achieve a nearly optimal performance estimation for LLMs. AcTracer utilizes both internal and external information from LLMs to guide the test sampling process, reducing variance through a multi-stage pool-based active selection. Our experiment results demonstrate that AcTracer achieves state-of-the-art performance compared to existing methods across various tasks, with up to 38.83% improvement over previous SOTA.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs requires abundant, well-annotated, and representative test data, which is costly to collect
Existing active testing methods are inefficient or inapplicable for LLMs (diverse task types, increased model complexity, unavailable training data)
How to select a small test subset that still yields an accurate estimate of overall performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-stage pool-based active selection that reduces performance estimation variance
Combines internal LLM signals (e.g., logit and attention uncertainty) with external task semantics (e.g., prompt embeddings)
Strategic subset selection that approaches full-dataset evaluation accuracy with only ~10% of the test data
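The pool-based selection idea above can be sketched in miniature. The following is an illustrative sketch under simplifying assumptions, not AcTracer itself: a single uncertainty signal (predictive entropy stands in for the paper's combined logit/attention and embedding signals) and one round of stratified sampling stand in for the multi-stage pipeline; all function and parameter names here are hypothetical.

```python
# Hypothetical sketch of pool-based active testing (NOT the AcTracer
# implementation): rank a test pool by a per-example uncertainty score,
# split it into strata, label a small sample from each stratum, and
# estimate overall accuracy from the size-weighted stratum means.
import math
import random
from statistics import mean

def entropy(probs):
    """Predictive entropy of a probability vector (one common uncertainty proxy)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def stratified_estimate(pool, score_fn, is_correct, n_strata=4, budget=40, seed=0):
    """Estimate accuracy over `pool` while querying the (expensive) labeling
    oracle `is_correct` only `budget` times."""
    rng = random.Random(seed)
    ranked = sorted(pool, key=score_fn)          # order pool by uncertainty
    size = len(ranked) // n_strata
    strata = [ranked[i * size:(i + 1) * size] for i in range(n_strata - 1)]
    strata.append(ranked[(n_strata - 1) * size:])  # last stratum takes the remainder
    per_stratum = budget // n_strata
    estimate = 0.0
    for stratum in strata:
        sample = rng.sample(stratum, min(per_stratum, len(stratum)))
        acc = mean(1.0 if is_correct(x) else 0.0 for x in sample)
        estimate += acc * len(stratum) / len(pool)  # weight by stratum size
    return estimate
```

Stratifying by uncertainty before sampling is what drives the variance reduction: each stratum is more homogeneous than the full pool, so a few labels per stratum already pin down its accuracy.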