Engagement-Driven Content Generation with Large Language Models

📅 2024-11-20
🏛️ arXiv.org
📈 Citations: 1
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing research fails to systematically address how large language models (LLMs) can generate socially engaging content in dynamic social networks, where user connectivity and opinion evolution introduce unique challenges for producing content that is topically coherent, controllably fluent, and participation-maximizing. Method: We propose the first reinforcement learning framework that tightly couples LLM-based content generation with opinion dynamics modeling in social networks. It integrates Proximal Policy Optimization (PPO), opinion dynamics simulation, LLM fine-tuning, constrained decoding, and a plug-and-play, adaptive engagement feedback module. Contribution/Results: Experiments demonstrate significant improvements in user engagement metrics; the framework dynamically adapts its output to evolving network-level opinion distributions, consistently outperforming baselines across diverse scenarios. Our approach advances both the theoretical understanding of socio-linguistic alignment in generative models and the practical scalability of real-world social media applications.
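The summary describes an RL loop in which PPO maximizes an engagement reward subject to topic-coherence and fluency constraints. A minimal sketch of that reward shaping is below; the distance-based engagement model, the score names, and all thresholds are illustrative assumptions for the example, not the paper's actual components.

```python
def simulate_engagement(post_opinion, user_opinions, threshold=0.3):
    """Toy engagement model (assumption): a user interacts with the
    post when its stance lies within `threshold` of their own opinion."""
    return sum(1 for u in user_opinions if abs(u - post_opinion) <= threshold)

def reward_for_post(post_opinion, topic_score, fluency_score, user_opinions,
                    min_topic=0.5, min_fluency=0.5):
    """Reward for one generated post: simulated engagement, zeroed out
    when the post violates the topic or fluency constraint. This scalar
    is what a PPO-style policy update would maximize."""
    if topic_score < min_topic or fluency_score < min_fluency:
        return 0.0  # hard constraint violated: no engagement reward
    return float(simulate_engagement(post_opinion, user_opinions))

# Example: a mildly positive post (stance 0.2) in a small network.
opinions = [0.0, 0.1, 0.5, 0.9]
assert reward_for_post(0.2, topic_score=0.9, fluency_score=0.8,
                       user_opinions=opinions) == 3.0
assert reward_for_post(0.2, topic_score=0.1, fluency_score=0.8,
                       user_opinions=opinions) == 0.0  # off-topic
```

Gating the reward to zero on constraint violation is one simple way to encode hard constraints in an RL objective; the paper's constrained decoding may enforce them differently.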

๐Ÿ“ Abstract
Large Language Models (LLMs) exhibit significant persuasion capabilities in one-on-one interactions, but their influence within social networks remains underexplored. This study investigates the potential social impact of LLMs in these environments, where interconnected users and complex opinion dynamics pose unique challenges. In particular, we address the following research question: can LLMs learn to generate meaningful content that maximizes user engagement on social networks? To answer this question, we define a pipeline to guide the LLM-based content generation which employs reinforcement learning with simulated feedback. In our framework, the reward is based on an engagement model borrowed from the literature on opinion dynamics and information propagation. Moreover, we force the text generated by the LLM to be aligned with a given topic and to satisfy a minimum fluency requirement. Using our framework, we analyze the capabilities and limitations of LLMs in tackling the given task, specifically considering the relative positions of the LLM as an agent within the social network and the distribution of opinions in the network on the given topic. Our findings show the full potential of LLMs in creating social engagement. Notable properties of our approach are that the learning procedure is adaptive to the opinion distribution of the underlying network and agnostic to the specifics of the engagement model, which is embedded as a plug-and-play component. In this regard, our approach can be easily refined for more complex engagement tasks and interventions in computational social science. The code used for the experiments is publicly available at https://anonymous.4open.science/r/EDCG/.
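The abstract says the reward is based on an engagement model borrowed from the opinion-dynamics literature, and that learning adapts to the network's opinion distribution. As one hedged illustration, a Deffuant-style bounded-confidence update is a standard model from that literature; the abstract does not state which specific model the paper uses.

```python
import random

def deffuant_step(opinions, mu=0.5, eps=0.3, rng=random):
    """One Deffuant-style pairwise update: two random users move their
    opinions toward each other only when they already lie within the
    confidence bound `eps`. Mutates `opinions` in place."""
    i, j = rng.sample(range(len(opinions)), 2)
    if abs(opinions[i] - opinions[j]) <= eps:
        shift = mu * (opinions[j] - opinions[i])
        opinions[i] += shift
        opinions[j] -= shift

def engagement(post_opinion, opinions, threshold=0.3):
    """Users engage when the post's stance is within `threshold` of theirs."""
    return sum(1 for o in opinions if abs(o - post_opinion) <= threshold)

# Two nearby users attract each other toward their midpoint.
ops = [0.0, 0.1]
deffuant_step(ops, rng=random.Random(0))
assert all(abs(o - 0.05) < 1e-12 for o in ops)
```

Under this kind of model, the engagement a given post can earn depends on where opinions have drifted, which is why an adaptive policy must track the evolving distribution rather than a fixed one.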
Problem

Research questions and friction points this paper is trying to address.

Can LLMs learn to generate content that maximizes user engagement on social networks?
Using reinforcement learning to simulate network feedback
Analyzing LLM performance in diverse engagement conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning with simulated feedback
Adaptive to network opinion distribution
Plug-and-play engagement model component
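The "plug-and-play engagement model component" can be read as an interface contract: the training pipeline depends only on a callable signature, so the engagement model can be swapped without touching the rest of the loop. A minimal Python sketch of that design, with hypothetical names and toy models:

```python
from typing import Protocol, Sequence

class EngagementModel(Protocol):
    """Interface (assumption) for the plug-and-play component: any
    callable mapping (post stance, user opinions) to a scalar reward."""
    def __call__(self, post_opinion: float,
                 opinions: Sequence[float]) -> float: ...

def count_engagers(post_opinion, opinions, threshold=0.3):
    """Toy model 1: count users whose opinion is near the post's stance."""
    return float(sum(1 for o in opinions if abs(o - post_opinion) <= threshold))

def weighted_engagement(post_opinion, opinions):
    """Toy model 2: engagement decays linearly with opinion distance."""
    return sum(max(0.0, 1.0 - abs(o - post_opinion)) for o in opinions)

def reward(model: EngagementModel, post_opinion, opinions):
    """The training loop only ever calls this, so swapping `model`
    changes the objective without changing the pipeline."""
    return model(post_opinion, opinions)

assert reward(count_engagers, 0.5, [0.4, 0.9]) == 1.0
assert reward(weighted_engagement, 0.5, [0.5, 1.5]) == 1.0
```

This is the sense in which the learning procedure can be "agnostic to the specifics of the engagement model": more complex models from computational social science could replace the toy ones above behind the same interface.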