🤖 AI Summary
In management research, annotation of unstructured text has long relied on crowdsourced human labor. Large language models (LLMs) offer efficiency and cost advantages, but a systematic, reproducible framework for evaluating their applicability has been lacking. Method: We propose SILICON, an LLM-powered text annotation workflow tailored to management research that integrates structured annotation guideline design, expert-derived baseline construction, iterative prompt optimization, and multi-model cross-validation, and introduces, for the first time, a regression-based method for comparing LLM outputs. Contribution/Results: Validated via Krippendorff's α reliability analysis across seven empirical case studies, SILICON shows high agreement between LLM and expert annotations on single-label tasks (α > 0.8) but markedly lower consistency on multi-label classification. The results confirm that expert baselines outperform crowdsourced annotations and that multi-model evaluation is indispensable. We publicly release a comprehensive practice guide and end-to-end implementation code, addressing a critical methodological gap in LLM-assisted qualitative research.
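The summary reports agreement via Krippendorff's α. As a quick reference for how that statistic is computed, here is a minimal sketch of the nominal-data version for complete or partially missing annotations; this is an illustrative implementation, not the paper's released code.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(annotations):
    """Krippendorff's alpha for nominal labels.

    annotations: list of per-unit label lists (one label per coder);
    units with fewer than two labels are skipped as unpairable.
    """
    # Coincidence matrix: each ordered pair of labels within a unit
    # contributes 1/(m_u - 1), where m_u is that unit's label count.
    coincidences = Counter()
    for unit in annotations:
        m = len(unit)
        if m < 2:
            continue
        for a, b in permutations(unit, 2):
            coincidences[(a, b)] += 1 / (m - 1)

    n = sum(coincidences.values())      # total pairable values
    totals = Counter()                  # marginal label frequencies
    for (a, _), w in coincidences.items():
        totals[a] += w

    observed = sum(w for (a, b), w in coincidences.items() if a != b) / n
    expected = sum(
        totals[a] * totals[b] for a in totals for b in totals if a != b
    ) / (n * (n - 1))
    if expected == 0:                   # only one category observed
        return 1.0
    return 1 - observed / expected
```

With two coders who always agree, α is 1.0; systematic disagreement drives α toward (and below) 0, which is why the α > 0.8 threshold reported for single-label tasks indicates strong reliability.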
📝 Abstract
Unstructured text data annotation and analysis are fundamental to management research, often relying on human annotators recruited through crowdsourcing platforms. While Large Language Models (LLMs) promise a cost-effective and efficient alternative to human annotation, there is no systematic workflow for evaluating when LLMs are suitable or for conducting LLM-based text annotation in a reproducible manner. This paper addresses this methodological gap by introducing the "SILICON" (Systematic Inference with LLMs for Information Classification and Notation) workflow. The workflow integrates established principles of human annotation with systematic prompt optimization and model selection, addressing challenges such as developing robust annotation guidelines, establishing high-quality human baselines, optimizing prompts, and ensuring reproducibility across LLMs. We validate the SILICON workflow through seven case studies covering common management research tasks. Our findings highlight the importance of validating annotation guideline agreement, the superiority of expert-developed human baselines over crowdsourced ones, the iterative nature of prompt optimization, and the necessity of testing multiple LLMs. We also find that LLMs agree well with expert annotations in most cases but show low agreement in more complex multi-label classification tasks. Notably, we propose a regression-based methodology to empirically compare LLM outputs across prompts and models. Our workflow advances management research by establishing rigorous, transparent, and reproducible processes for LLM-based annotation. We provide practical guidance for researchers to effectively navigate the evolving landscape of generative AI tools.
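The abstract proposes a regression-based comparison of LLM outputs across prompts and models. One plausible instantiation (an assumption for illustration, not necessarily the authors' exact specification) is to regress a per-item agreement indicator (1 = LLM label matches the expert baseline) on model or prompt dummies, so the coefficients estimate agreement-rate differences. The agreement vectors below are synthetic.

```python
import numpy as np

# Synthetic per-item agreement with the expert baseline (1 = match)
# for two hypothetical model conditions on the same 8 items.
agree_a = np.array([1, 1, 0, 1, 1, 1, 0, 1])  # hypothetical model A
agree_b = np.array([1, 0, 0, 1, 0, 1, 0, 0])  # hypothetical model B

# Stack into a pooled outcome and build the design matrix:
# intercept + dummy that is 1 for model-B observations.
y = np.concatenate([agree_a, agree_b]).astype(float)
model_b = np.concatenate([np.zeros(len(agree_a)), np.ones(len(agree_b))])
X = np.column_stack([np.ones_like(y), model_b])

# OLS (a linear probability model): beta[0] is model A's mean
# agreement rate; beta[1] is model B's difference from model A.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

In this toy example beta recovers the raw agreement rates (0.75 for A, 0.75 + beta[1] for B), and the same design extends to prompt dummies or item fixed effects; a production analysis would also report standard errors.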