Cheap Learning: Maximising Performance of Language Models for Social Data Science Using Minimal Data

📅 2024-01-22
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the scarcity of labelled data in social science research—a major obstacle to applying machine learning—this study systematically evaluates three low-resource learning paradigms: weak supervision (Snorkel), transfer learning (fine-tuning BERT and LLMs), and prompt engineering (including zero-shot prompting). The authors run unified, controlled experiments across six realistic social science applications (two tasks paired with three dataset makeups), providing a standardized point of comparison for the domain. Results show that zero-shot prompting of large language models achieves performance comparable to supervised baselines without any labelled data or training overhead, substantially lowering the barrier to practical deployment. All three paradigms perform robustly, with prompt engineering proving especially suited to rapid, lightweight deployment. To support reproducibility and adoption, the authors release a documented, experimentally validated code repository, offering social scientists a lightweight, low-cost entry point to machine-learning-based text analysis.
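The zero-shot prompting paradigm described above can be sketched in a few lines. This is an illustrative Python sketch, not the paper's actual implementation: the labels, the prompt wording, and the `build_prompt`/`parse_label` helpers are all hypothetical, and the actual LLM API call is omitted (any chat/completions endpoint could be slotted in).

```python
# Zero-shot classification via prompting, in sketch form: no labelled
# examples are supplied, only an instruction plus the candidate labels.
# The call to an actual LLM is omitted; `parse_label` maps whatever free
# text the model returns back onto the allowed label set.

LABELS = ["relevant", "not relevant"]  # hypothetical binary task

def build_prompt(text: str, labels: list[str]) -> str:
    """Construct a zero-shot classification prompt."""
    options = " or ".join(f'"{label}"' for label in labels)
    return (
        f"Classify the following post as {options}. "
        f"Answer with the label only.\n\nPost: {text}\nLabel:"
    )

def parse_label(reply: str, labels: list[str]) -> str:
    """Map a free-text model reply onto the closest allowed label.
    Longest labels are checked first so 'not relevant' is not
    shadowed by its substring 'relevant'."""
    reply = reply.strip().lower()
    for label in sorted(labels, key=len, reverse=True):
        if label in reply:
            return label
    return labels[-1]  # fall back to a default class

prompt = build_prompt("Council meeting moved to Tuesday.", LABELS)
print(parse_label("Relevant.", LABELS))      # -> relevant
print(parse_label("Not relevant", LABELS))   # -> not relevant
```

Because no training data or fine-tuning is involved, the marginal cost per task is just prompt design and API calls, which is the "high accuracy at very low cost" property the paper highlights.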

📝 Abstract
The field of machine learning has recently made significant progress in reducing the requirements for labelled training data when building new models. These 'cheaper' learning techniques hold significant potential for the social sciences, where development of large labelled training datasets is often a significant practical impediment to the use of machine learning for analytical tasks. In this article we review three 'cheap' techniques that have developed in recent years: weak supervision, transfer learning and prompt engineering. For the latter, we also review the particular case of zero-shot prompting of large language models. For each technique we provide a guide of how it works and demonstrate its application across six different realistic social science applications (two different tasks paired with three different dataset makeups). We show good performance for all techniques, and in particular we demonstrate how prompting of large language models can achieve high accuracy at very low cost. Our results are accompanied by a code repository to make it easy for others to duplicate our work and use it in their own research. Overall, our article is intended to stimulate further uptake of these techniques in the social sciences.
Problem

Research questions and friction points this paper is trying to address.

Minimizing labeled data needs for social science ML tasks
Evaluating weak supervision, transfer learning, prompt engineering techniques
Demonstrating cost-effective high accuracy via LLM zero-shot prompting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Weak supervision reduces labeled data needs
Transfer learning leverages pre-trained models
Prompt engineering enables zero-shot learning
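The weak-supervision bullet above can be made concrete with a small sketch. This is an illustrative example, not code from the paper: the labeling functions and the toy task are hypothetical, and a plain majority vote stands in for Snorkel's generative label model, which additionally learns per-function accuracies.

```python
# Snorkel-style weak supervision, reduced to its core idea: several noisy
# heuristic "labeling functions" vote on each example, and their votes are
# combined into a single training label. Snorkel fits a probabilistic model
# over the votes; a simple majority vote is used here for illustration.
from collections import Counter

ABSTAIN, NEG, POS = -1, 0, 1

# Hypothetical labeling functions for a toy sentiment task.
def lf_contains_great(text):
    return POS if "great" in text.lower() else ABSTAIN

def lf_contains_awful(text):
    return NEG if "awful" in text.lower() else ABSTAIN

def lf_exclamation(text):
    return POS if text.endswith("!") else ABSTAIN

LFS = [lf_contains_great, lf_contains_awful, lf_exclamation]

def weak_label(text):
    """Majority vote over the non-abstaining labeling functions."""
    votes = [v for v in (lf(text) for lf in LFS) if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return Counter(votes).most_common(1)[0][0]

print(weak_label("A great result!"))  # -> 1
print(weak_label("An awful day"))     # -> 0
```

The labels produced this way are noisy, but they come essentially for free, and a downstream classifier (e.g. a fine-tuned BERT) can then be trained on them instead of on hand-annotated data.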
Leonardo Castro-Gonzalez
Public Policy Programme, The Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB UK.
Yi-ling Chung
Public Policy Programme, The Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB UK.
Hannah Rose Kirk
Public Policy Programme, The Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB UK.; Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK
John Francis
Public Policy Programme, The Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB UK.
Angus R. Williams
Public Policy Programme, The Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB UK.
Pica Johansson
Public Policy Programme, The Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB UK.
Jonathan Bright
CTO at pattrn.ai
AI, AI safety, Online safety, AI for government