More Human, More Efficient: Aligning Annotations with Quantized SLMs

📅 2026-04-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses systematic biases, low agreement with human experts, irreproducibility, and data privacy risks inherent in large language models for automated annotation by proposing a highly aligned, deterministic, and open-source labeling framework based on a 4-bit quantized 1.7B-parameter small language model. Through task-aligned fine-tuning, a multidimensional scoring mechanism, and strategies involving data augmentation and regularization, the framework achieves substantially improved annotation quality with only limited human-annotated data. Experimental results demonstrate that the method outperforms the current best closed-source large models by 0.23 in Krippendorff’s α agreement metric and exhibits strong generalization across tasks such as sentiment classification. Notably, this study provides the first evidence that a quantized small model, when properly aligned via fine-tuning, can surpass state-of-the-art closed-source models in annotation consistency.
📝 Abstract
As Large Language Model (LLM) capabilities advance, the demand for high-quality annotation of exponentially increasing text corpora has outpaced human capacity, leading to the widespread adoption of LLMs in automatic evaluation and annotation. However, proprietary LLMs often exhibit systematic biases that diverge from human expert consensus, lacks reproducibility, and raises data privacy concerns. Our work examines the viability of finetuning a quantized Small Language Model of 1.7B parameter size on limited human-annotated data to serve as a highly aligned, deterministic evaluator and annotator. By implementing a custom, multi-dimensional rubric framework and simple augmentation and regularization techniques, the proposed approach achieves higher inter-annotator agreement (0.23 points increase in Krippendorff's $α$) than the best performing state-of-the-art proprietary LLM. We also demonstrate the generalizability of the proposed training pipeline on a separate emotion classification task. The results show that task-specific alignment and efficient 4-bit quantized fine-tuning provide superior open-source alternative to using proprietary models for evaluation and annotation. Our finetuning approach is publicly available at https://github.com/jylee-k/slm-judge.
Problem

Research questions and friction points this paper is trying to address.

annotation alignment
systematic bias
data privacy
reproducibility
human-LLM disagreement
Innovation

Methods, ideas, or system contributions that make the work stand out.

quantized small language model
human-aligned annotation
deterministic evaluator
Krippendorff's alpha
efficient fine-tuning
🔎 Similar Papers
No similar papers found.