Measuring Scalar Constructs in Social Science with LLMs

📅 2025-09-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the inconsistent numerical outputs that arise when large language models (LLMs) are used to quantify scalar constructs with continuous semantic structure, such as linguistic complexity or affective intensity, in social science research. The authors systematically evaluate four approaches on datasets drawn from the political science literature: direct pointwise scoring, aggregation of pairwise comparisons, token-probability-weighted pointwise scoring, and supervised finetuning. Token-probability-weighted pointwise scoring substantially improves the continuity and accuracy of the measurements, and finetuning smaller LLMs on as few as ~1,000 annotated training pairs matches or exceeds the performance of large LLMs under zero-shot prompting. Together, these findings offer a more robust, efficient, and scalable recipe for automated construct measurement in social science, without extensive human annotation or prohibitively large models.
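
The weighting step the summary highlights can be made concrete. Below is a minimal sketch, assuming the scoring prompt elicits a single rating token on a 1-7 scale and that the API exposes top-token log-probabilities at that position; the scale and the example numbers are illustrative, not taken from the paper.

```python
import math

def weighted_score(top_logprobs: dict[str, float], scale=range(1, 8)) -> float:
    """Collapse the model's distribution over rating tokens into one scalar.

    Rather than keeping only the single sampled rating (which bunches at
    arbitrary numbers), weight every valid rating token by its probability
    and renormalize over the rating vocabulary.
    """
    probs = {k: math.exp(top_logprobs[str(k)]) for k in scale
             if str(k) in top_logprobs}
    total = sum(probs.values())
    if total == 0:
        raise ValueError("no rating tokens found among the top logprobs")
    return sum(k * p for k, p in probs.items()) / total

# Hypothetical logprobs at the rating position for one text: a greedy decode
# would output "3", but the weighted score lands between 3 and 4.
example = {"3": -0.60, "4": -1.10, "2": -2.30, "5": -3.00}
print(round(weighted_score(example), 2))  # -> 3.32
```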

📝 Abstract
Many constructs that characterize language, like its complexity or emotionality, have a naturally continuous semantic structure; a public speech is not just "simple" or "complex," but exists on a continuum between extremes. Although large language models (LLMs) are an attractive tool for measuring scalar constructs, their idiosyncratic treatment of numerical outputs raises questions of how to best apply them. We address these questions with a comprehensive evaluation of LLM-based approaches to scalar construct measurement in social science. Using multiple datasets sourced from the political science literature, we evaluate four approaches: unweighted direct pointwise scoring, aggregation of pairwise comparisons, token-probability-weighted pointwise scoring, and finetuning. Our study yields actionable findings for applied researchers. First, LLMs prompted to generate pointwise scores directly from texts produce discontinuous distributions with bunching at arbitrary numbers. The quality of the measurements improves with pairwise comparisons made by LLMs, but it improves even more by taking pointwise scores and weighting them by token probability. Finally, finetuning smaller models with as few as 1,000 training pairs can match or exceed the performance of prompted LLMs.
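
For the pairwise route, the abstract's "aggregation of pairwise comparisons" is commonly done with a Bradley-Terry-style model; the sketch below is one such estimator under that assumption, with a toy wins matrix standing in for real LLM judgments.

```python
import numpy as np

def bradley_terry(wins: np.ndarray, iters: int = 200) -> np.ndarray:
    """MM estimation of Bradley-Terry strengths; wins[i, j] = # times i beat j."""
    n = wins.shape[0]
    strength = np.ones(n)
    total_wins = wins.sum(axis=1)
    for _ in range(iters):
        denom = np.zeros(n)
        for i in range(n):
            for j in range(n):
                pair = wins[i, j] + wins[j, i]
                if i != j and pair:
                    denom[i] += pair / (strength[i] + strength[j])
        strength = total_wins / np.maximum(denom, 1e-12)
        strength /= strength.sum()  # fix the arbitrary scale
    return strength

# Toy data: each of the 6 pairs among 4 texts was judged 3 times by the LLM.
wins = np.array([[0, 3, 3, 2],
                 [0, 0, 2, 3],
                 [0, 1, 0, 2],
                 [1, 0, 1, 0]])
scores = bradley_terry(wins)
print(np.argsort(-scores))  # indices ranked from highest to lowest strength
```
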
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM methods for measuring continuous social science constructs
Addressing numerical output issues in LLM-based scalar measurement
Comparing scoring approaches for language complexity and emotionality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aggregating LLM pairwise comparisons improves measurement quality over direct pointwise scoring
Weighting pointwise scores by token probability improves accuracy further
Finetuning smaller models on as few as ~1,000 training pairs matches or exceeds prompted large LLMs (see the sketch below)
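
As a rough illustration of the last point, here is a minimal finetuning sketch that trains a small off-the-shelf encoder as a scalar regressor on (text, score) pairs; the model name, data, and hyperparameters are placeholder assumptions, not the paper's actual setup.

```python
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "distilroberta-base"  # stand-in for "a smaller model"
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL, num_labels=1, problem_type="regression")  # one scalar output head

# The paper reports results with as few as ~1,000 training pairs;
# two dummy rows keep this sketch runnable.
texts = ["A short, plain sentence.",
         "An ornate period, burdened with stacked subordinate clauses."]
scores = [1.5, 6.2]  # human ratings on the construct's scale

class ScalarDataset(torch.utils.data.Dataset):
    def __len__(self):
        return len(texts)
    def __getitem__(self, i):
        enc = tok(texts[i], truncation=True, padding="max_length", max_length=128)
        return {"input_ids": torch.tensor(enc["input_ids"]),
                "attention_mask": torch.tensor(enc["attention_mask"]),
                "labels": torch.tensor(scores[i], dtype=torch.float)}

Trainer(model=model,
        args=TrainingArguments(output_dir="scalar-out", num_train_epochs=1,
                               per_device_train_batch_size=2),
        train_dataset=ScalarDataset()).train()
```
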