🤖 AI Summary
This study systematically evaluates methods for inferring sociodemographic attributes (age, gender, political affiliation) of Reddit users. The authors construct a large-scale dataset of more than 850,000 self-reported labels drawn from Reddit comments and benchmark embedding-based models against probabilistic models on two tasks: attribute classification (measured by ROC AUC) and population-level prevalence estimation, i.e. quantification (measured by MAE). Results show that a Naive Bayes classifier with bag-of-words features consistently outperforms state-of-the-art embedding approaches, achieving up to a 19% improvement in ROC AUC and keeping MAE below 15% for quantification in large-scale data settings. The authors distill best practices for computational social science (CSS) research, emphasizing coverage, interpretability, reliability, and scalability. Code and model weights are publicly released, establishing a reproducible, interpretable, and robust methodological benchmark for linking online behavior with offline demographic characteristics.
📝 Abstract
Inference of sociodemographic attributes of social media users is an essential step for computational social science (CSS) research to link online and offline behavior. However, there is a lack of systematic evaluation and clear guidelines on optimal methodologies for this task on Reddit, one of today's largest social media platforms. In this study, we fill this gap by comparing state-of-the-art (SOTA) and probabilistic models. To this end, we first collect a novel data set of more than 850k self-declarations on age, gender, and partisan affiliation from Reddit comments. Then, we systematically compare alternatives to the widely used embedding-based model and to labeling techniques for the definition of the ground truth. We do so on two tasks: ($i$)~predicting binary labels (classification); and ($ii$)~predicting the prevalence of a demographic class among a set of users (quantification). Our findings reveal that Naive Bayes models not only offer transparency and interpretability by design but also consistently outperform the SOTA. Specifically, they achieve an improvement in ROC AUC of up to $19\%$ and maintain a mean absolute error (MAE) below $15\%$ in quantification for large-scale data settings. Finally, we discuss best practices for researchers in CSS, emphasizing coverage, interpretability, reliability, and scalability. The code and model weights used for the experiments are publicly available.\footnote{https://anonymous.4open.science/r/SDI-submission-5234}
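The two evaluation tasks described above can be illustrated with a minimal sketch. This is not the authors' released code: it uses scikit-learn's `CountVectorizer` and `MultinomialNB` on toy stand-in texts and labels, scores classification with ROC AUC, and estimates prevalence with a simple "classify and count" quantifier compared against the true proportion via MAE.

```python
# Illustrative sketch only: bag-of-words Naive Bayes evaluated on the two
# tasks from the abstract. All texts/labels below are hypothetical toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import roc_auc_score

train_texts = ["I just turned 18 last week", "retired after 40 years",
               "finals week at my high school", "my grandkids visited today"]
train_labels = [0, 1, 0, 1]  # toy binary attribute, e.g. 0 = young, 1 = old
test_texts = ["prom is next month", "my pension arrived early"]
test_labels = [0, 1]

# Bag-of-words features + Naive Bayes classifier.
vec = CountVectorizer()
X_train = vec.fit_transform(train_texts)
X_test = vec.transform(test_texts)
clf = MultinomialNB().fit(X_train, train_labels)

# Task (i): classification, scored with ROC AUC on per-user probabilities.
scores = clf.predict_proba(X_test)[:, 1]
auc = roc_auc_score(test_labels, scores)

# Task (ii): quantification via "classify and count" -- estimate the
# prevalence of class 1 in the test set, then measure the absolute error
# against the true proportion (MAE over one test set here).
est_prevalence = clf.predict(X_test).mean()
true_prevalence = sum(test_labels) / len(test_labels)
mae = abs(est_prevalence - true_prevalence)
```

In practice, quantification methods more robust than classify-and-count exist (e.g. adjusted counting), but this sketch shows how the same fitted model serves both the per-user classification task and the population-level prevalence task.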