AI Summary
In empirical software engineering, LLM-driven annotation tasks (e.g., commit and issue labeling) lack standardized evaluation of reliability, calibration, and drift, and critical configuration details are frequently omitted. Method: We propose OLAF, an operationalized framework that formally defines LLM-based annotation as a measurable process. Grounded in conceptual modeling and classical measurement theory, and informed by human-AI collaborative annotation paradigms, OLAF systematically models six core dimensions: reliability, calibration, drift, consensus, aggregation, and transparency. Contribution/Results: OLAF unifies these constructs and their interrelations, enabling reproducible evaluation standards and mandating comprehensive configuration reporting. It strengthens methodological rigor and cross-study comparability in LLM annotation research, laying a measurement-oriented foundation for LLM-based annotation tailored to empirical software engineering.
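To make "comprehensive configuration reporting" concrete, the sketch below shows the kind of run record such a requirement implies. The field names, values, and model identifier are illustrative assumptions, not a schema defined by OLAF.

```python
# Illustrative only: a minimal annotation-run record of the kind that
# comprehensive configuration reporting implies. All fields are hypothetical,
# not a schema defined by OLAF.
import hashlib
import json

prompt = "Label this commit message as one of: fix, feat, docs."

run_config = {
    "model": "gpt-4o-2024-08-06",   # exact model version, not just a family name
    "temperature": 0.0,             # decoding parameters
    "top_p": 1.0,
    "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    "label_set": ["fix", "feat", "docs"],
    "n_repeated_runs": 3,           # needed for reliability estimates
    "aggregation": "majority_vote", # how repeated runs are combined
    "date": "2025-01-15",           # drift cannot be tracked without timestamps
}
print(json.dumps(run_config, indent=2))
```

Recording the prompt as a hash keeps the record compact while still letting readers verify that two studies used identical prompts.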
Abstract
Large Language Models (LLMs) are increasingly used in empirical software engineering (ESE) to automate or assist annotation tasks such as labeling commits, issues, and qualitative artifacts. Yet the reliability and reproducibility of such annotations remain underexplored. Existing studies often lack standardized measures for reliability, calibration, and drift, and frequently omit essential configuration details. We argue that LLM-based annotation should be treated as a measurement process rather than a purely automated activity. In this position paper, we outline the **Operationalization for LLM-based Annotation Framework (OLAF)**, a conceptual framework that organizes key constructs: *reliability, calibration, drift, consensus, aggregation*, and *transparency*. The paper aims to motivate methodological discussion and future empirical work toward more transparent and reproducible LLM-based annotation in software engineering research.
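As a concrete illustration of the "measurement process" framing, the minimal sketch below operationalizes two of the named constructs: reliability as chance-corrected agreement (Cohen's kappa) between repeated LLM runs, and calibration as expected calibration error against a human gold standard. The data, label set, and choice of metrics are assumptions for illustration; the paper does not prescribe these particular metrics here.

```python
# Illustrative sketch: two of OLAF's constructs as concrete measurements.
# All data below is made up; kappa and ECE are common operationalizations,
# not metrics mandated by the paper.
from collections import Counter

def cohen_kappa(a, b):
    """Reliability: chance-corrected agreement between two annotation runs."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    pa, pb = Counter(a), Counter(b)
    expected = sum((pa[lbl] / n) * (pb[lbl] / n) for lbl in set(a) | set(b))
    return (observed - expected) / (1 - expected)

def expected_calibration_error(confidences, correct, bins=10):
    """Calibration: gap between stated confidence and empirical accuracy."""
    n = len(confidences)
    ece = 0.0
    for i in range(bins):
        lo, hi = i / bins, (i + 1) / bins
        bucket = [(c, ok) for c, ok in zip(confidences, correct) if lo < c <= hi]
        if bucket:
            avg_conf = sum(c for c, _ in bucket) / len(bucket)
            accuracy = sum(ok for _, ok in bucket) / len(bucket)
            ece += (len(bucket) / n) * abs(avg_conf - accuracy)
    return ece

# Two repeated runs of the same model labeling ten commits (hypothetical).
run1 = ["fix", "feat", "fix", "docs", "fix", "feat", "fix", "docs", "feat", "fix"]
run2 = ["fix", "feat", "docs", "docs", "fix", "feat", "fix", "fix", "feat", "fix"]
print(f"inter-run kappa (reliability): {cohen_kappa(run1, run2):.2f}")

# Model-reported confidences compared against a human gold standard.
gold = ["fix", "feat", "fix", "docs", "fix", "feat", "docs", "docs", "feat", "fix"]
confidences = [0.9, 0.8, 0.6, 0.95, 0.7, 0.85, 0.9, 0.5, 0.8, 0.9]
correct = [r == g for r, g in zip(run1, gold)]
print(f"ECE (calibration): {expected_calibration_error(confidences, correct):.2f}")

# Drift could be tracked by recomputing kappa between runs collected in
# successive time windows and watching for a downward trend.
```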