🤖 AI Summary
To address the scarcity of labelled data in social science research, which often impedes the use of machine learning for analytical tasks, this article reviews and evaluates three low-cost learning paradigms: weak supervision, transfer learning, and prompt engineering (including zero-shot prompting of large language models). The authors provide a practical guide to each technique and run controlled experiments across six realistic social science applications (two tasks paired with three different dataset makeups). All three approaches perform well, and zero-shot prompting of large language models in particular achieves high accuracy at very low cost, without any labelled training data, substantially lowering the barrier to practical adoption. An accompanying open-source code repository makes the experiments easy to replicate and reuse. Overall, the article aims to stimulate wider uptake of these 'cheap' learning techniques in empirical social science research.
📝 Abstract
The field of machine learning has recently made significant progress in reducing the amount of labelled training data required to build new models. These 'cheaper' learning techniques hold significant potential for the social sciences, where developing large labelled training datasets is often a major practical impediment to the use of machine learning for analytical tasks. In this article we review three 'cheap' techniques that have been developed in recent years: weak supervision, transfer learning and prompt engineering. For the latter, we also review the particular case of zero-shot prompting of large language models. For each technique we provide a guide to how it works and demonstrate its application across six different realistic social science applications (two different tasks paired with three different dataset makeups). We show good performance for all techniques, and in particular we demonstrate how prompting of large language models can achieve high accuracy at very low cost. Our results are accompanied by a code repository that makes it easy for others to replicate our work and use it in their own research. Overall, our article is intended to stimulate further uptake of these techniques in the social sciences.