🤖 AI Summary
To address the need for enhancing content virality on social platforms, this paper proposes a personalized high-impact content generation method that explicitly incorporates social network structural information. The core method encodes graph-topological features—such as centrality and neighborhood connectivity—into prompts for large language models (e.g., LLaMA, Qwen), establishing a “content-centric, structure-aware” prompting paradigm to guide generation of posts more likely to trigger cascading diffusion. It integrates graph-structure encoding, multi-strategy prompt engineering, and propagation-based evaluation metrics (depth and breadth). Experiments across multiple LLMs demonstrate significant improvements in content dissemination potential: average retweet rate increases by 27.4%, and reach to critical nodes improves by 31.8%. The approach advances controllable, socially grounded text generation and is publicly released with open-source code.
📝 Abstract
The remarkable advancements in Large Language Models (LLMs) have revolutionized the content generation process in social media, offering significant convenience in writing tasks. However, existing applications, such as sentence completion and fluency enhancement, do not fully address the complex challenges in real-world social media contexts. A prevalent goal among social media users is to increase the visibility and influence of their posts. This paper, therefore, delves into the compelling question: Can LLMs generate personalized influential content to amplify a user's presence on social media? We begin by examining prevalent techniques in content generation to assess their impact on post influence. Acknowledging the critical impact of underlying network structures in social media, which are instrumental in initiating content cascades and highly related to the influence/popularity of a post, we then inject network information into prompt for content generation to boost the post's influence. We design multiple content-centric and structure-aware prompts. The empirical experiments across LLMs validate their ability in improving the influence and draw insights on which strategies are more effective. Our code is available at https://github.com/YuyingZhao/LLM-influence-amplifier.