TBDFiltering: Sample-Efficient Tree-Based Data Filtering

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of efficiently assessing the quality of massive, diverse text corpora for large language model training, where existing approaches struggle to scale. The authors propose a framework that combines hierarchical clustering of text embeddings with adaptive sampling, without requiring the relevant cluster structure to be known in advance. By querying only a small number of documents, proportional to the size of the smallest subtree with (almost) pure leaves, the method infers the quality of every document with high probability. The per-document quality judgments come from a large language model, and the adaptive sampling keeps the total number of LLM queries small. Experimental results demonstrate that the proposed approach outperforms classifier-based baselines in filtering efficacy while significantly improving evaluation efficiency.

📝 Abstract
The quality of machine learning models depends heavily on their training data. Selecting high-quality, diverse training sets for large language models (LLMs) is a difficult task, due to the lack of cheap and reliable quality metrics. While querying existing LLMs for document quality is common, this is not scalable to the large number (billions) of documents used in training. Instead, practitioners often use classifiers trained on sparse quality signals. In this paper, we propose a text-embedding-based hierarchical clustering approach that adaptively selects the documents to be evaluated by the LLM to estimate cluster quality. We prove that our method is query efficient: under the assumption that the hierarchical clustering contains a subtree such that each leaf cluster in the tree is pure enough (i.e., it mostly contains either only good or only bad documents), with high probability, the method can correctly predict the quality of each document after querying a small number of documents. The number of such documents is proportional to the size of the smallest subtree with (almost) pure leaves, without the algorithm knowing this subtree in advance. Furthermore, in a comprehensive experimental study, we demonstrate the benefits of our algorithm compared to other classifier-based filtering methods.
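The adaptive scheme the abstract describes can be sketched in a few lines. The snippet below is a toy illustration, not the paper's algorithm: documents are scalar stand-ins for embeddings, `llm_quality_oracle` is a hypothetical threshold function standing in for the LLM judge, and a simple binary bisection mimics the hierarchical clustering. The core mechanic matches the description: sample a few documents per cluster, let the whole cluster inherit the label if the samples agree, and descend only into impure clusters.

```python
import random

# Toy stand-in for the LLM quality judge (an assumption for this sketch):
# a document is a scalar "embedding" and is "good" iff it is >= 0.
def llm_quality_oracle(doc):
    return doc >= 0

def build_tree(docs):
    """Recursively bisect sorted docs into a binary hierarchy, mimicking
    a hierarchical clustering over text embeddings."""
    if len(docs) <= 2:
        return {"docs": docs, "children": []}
    mid = len(docs) // 2
    return {"docs": docs,
            "children": [build_tree(docs[:mid]), build_tree(docs[mid:])]}

def adaptive_filter(node, samples=3, queries=None):
    """Label every document under `node`, querying the oracle sparingly:
    if a few sampled docs agree, the whole cluster inherits that label;
    otherwise descend into the child clusters."""
    if queries is None:
        queries = [0]   # mutable query counter shared across the recursion
    picked = random.sample(node["docs"], min(samples, len(node["docs"])))
    votes = [llm_quality_oracle(d) for d in picked]
    queries[0] += len(picked)
    if all(votes) or not any(votes):        # cluster looks pure: stop here
        return {d: votes[0] for d in node["docs"]}, queries[0]
    if not node["children"]:                # impure leaf: query each doc
        queries[0] += len(node["docs"])
        return {d: llm_quality_oracle(d) for d in node["docs"]}, queries[0]
    result = {}
    for child in node["children"]:
        sub, _ = adaptive_filter(child, samples, queries)
        result.update(sub)
    return result, queries[0]

random.seed(0)
docs = sorted(random.uniform(-1, 1) for _ in range(64))
labels, n_queries = adaptive_filter(build_tree(docs))
print(len(labels), n_queries)   # all 64 docs labeled with fewer than 64 queries
```

Because the synthetic docs are sorted, only clusters straddling the good/bad boundary are impure, so only one root-to-leaf path is ever expanded; this mirrors the paper's guarantee that the query count scales with the smallest subtree whose leaves are (almost) pure, not with the corpus size.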
Problem

Research questions and friction points this paper is trying to address.

data filtering
large language models
training data quality
sample efficiency
document selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

hierarchical clustering
sample-efficient filtering
LLM-based quality estimation
adaptive document selection
tree-based data filtering