Can LLMs Simulate Social Media Engagement? A Study on Action-Guided Response Generation

📅 2025-02-17
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work investigates whether large language models (LLMs) can simulate social media user engagement, specifically predicting a user's most likely action toward a trending post (retweet, quote, or rewrite) and generating a personalized response, and evaluates how closely the results align with real human behavior. To this end, the authors propose an "action-guided response generation" framework that decouples action prediction from action-conditioned response generation. Experiments show that LLMs significantly underperform BERT in zero-shot action prediction, exposing a limitation in action reasoning, yet with few-shot prompting they achieve substantially higher semantic similarity to ground-truth responses than baselines. These findings reveal a duality in LLMs' capacity for behavioral simulation: strong semantic generation but weak action modeling. This challenges end-to-end agent paradigms for social behavior modeling and motivates a modular, action-aware approach to response generation.

📝 Abstract
Social media enables dynamic user engagement with trending topics, and recent research has explored the potential of large language models (LLMs) for response generation. While some studies investigate LLMs as agents for simulating user behavior on social media, their focus remains on practical viability and scalability rather than a deeper understanding of how well LLMs align with human behavior. This paper analyzes LLMs' ability to simulate social media engagement through action-guided response generation, where a model first predicts a user's most likely engagement action (retweet, quote, or rewrite) toward a trending post before generating a personalized response conditioned on the predicted action. We benchmark GPT-4o-mini, O1-mini, and DeepSeek-R1 in social media engagement simulation regarding a major societal event discussed on X. Our findings reveal that zero-shot LLMs underperform BERT in action prediction, while few-shot prompting initially degrades the prediction accuracy of LLMs with limited examples. However, in response generation, few-shot LLMs achieve stronger semantic alignment with ground truth posts.
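The two-stage pipeline the abstract describes can be sketched as follows. The prompt wording and the `call_llm` stub are assumptions for illustration, not the authors' exact prompts or models:

```python
from typing import Callable

ACTIONS = ("retweet", "quote", "rewrite")

def predict_action(user_profile: str, post: str, call_llm: Callable[[str], str]) -> str:
    """Stage 1: ask the model which engagement action the user would most likely take."""
    prompt = (
        f"User profile: {user_profile}\n"
        f"Trending post: {post}\n"
        f"Which action would this user most likely take: {', '.join(ACTIONS)}?\n"
        "Answer with one word."
    )
    answer = call_llm(prompt).strip().lower()
    # Fall back to retweet if the model answers outside the label set.
    return answer if answer in ACTIONS else "retweet"

def generate_response(user_profile: str, post: str, action: str,
                      call_llm: Callable[[str], str]) -> str:
    """Stage 2: generate a response conditioned on the predicted action."""
    if action == "retweet":
        return post  # a plain retweet reproduces the original post verbatim
    prompt = (
        f"User profile: {user_profile}\n"
        f"Trending post: {post}\n"
        f"Write the text this user would post when they {action} it."
    )
    return call_llm(prompt)

def simulate_engagement(user_profile: str, post: str,
                        call_llm: Callable[[str], str]) -> tuple[str, str]:
    """Action-guided simulation: predict the action, then generate conditioned on it."""
    action = predict_action(user_profile, post, call_llm)
    return action, generate_response(user_profile, post, action, call_llm)
```

Decoupling the stages this way is what lets the paper score action prediction (against BERT) and response generation (against ground-truth posts) separately.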
Problem

Research questions and friction points this paper is trying to address.

Evaluate LLMs in simulating social media engagement.
Assess LLMs' action prediction and response generation.
Compare LLM performance with BERT benchmarks.
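The semantic-alignment comparison above can be approximated with cosine similarity between a generated response and the ground-truth post. The bag-of-words version below is a minimal toy stand-in for the sentence-embedding similarity such an evaluation would typically use, not the paper's actual metric:

```python
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Bag-of-words cosine similarity between two posts (toy proxy for embedding similarity)."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```

Swapping in a sentence-embedding model would keep the same interface while capturing paraphrase-level similarity that word overlap misses.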
Innovation

Methods, ideas, or system contributions that make the work stand out.

Action-guided response generation
Benchmarking LLMs for engagement
Few-shot prompting improves alignment
Zhongyi Qiu
School of Computational Science and Engineering, Georgia Institute of Technology
Hanjia Lyu
University of Rochester
AI and Society, Multimodal LLMs, Graph Learning, Computational Social Science, Health Informatics
Wei Xiong
Department of Computer Science, University of Rochester
Jiebo Luo
Department of Computer Science, University of Rochester