Disrupting Networks: Amplifying Social Dissensus via Opinion Perturbation and Large Language Models

📅 2025-10-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates adversarial destabilization mechanisms in social networks, focusing on opinion polarization induced by targeted content injection. Method: We extend the Friedkin-Johnsen model to capture how malleable innate opinions amplify disagreement, and integrate graph-theoretic analysis with agent-based social simulation to develop the first LLM-based optimization framework for social interference. Specifically, we employ reinforcement learning to fine-tune large language models (LLMs) to generate targeted content that precisely perturbs individual beliefs. Contribution/Results: Experiments on both synthetic and real-world social network datasets demonstrate that our framework significantly increases group-level opinion divergence, approaching the theoretical maximum perturbation bound. Beyond exposing generative AI's latent risks in information warfare, this work establishes, for the first time, a closed-loop paradigm of model-driven social perturbation coupled with controllable LLM generation, offering a novel foundation for robustness evaluation and defense of socio-technical systems.
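The summary's core objects can be made concrete with a minimal sketch of the Friedkin-Johnsen (FJ) model and a dissensus measure. The network, uniform susceptibility `alpha`, and the choice of x^T L x (squared opinion differences over edges) as the dissensus measure are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def fj_equilibrium(W, s, alpha=0.5):
    """FJ equilibrium x* = (I - (1-alpha) W)^(-1) (alpha s), where W is a
    row-stochastic influence matrix and s the innate opinions."""
    n = len(s)
    return np.linalg.solve(np.eye(n) - (1 - alpha) * W, alpha * s)

def dissensus(W, x):
    """Disagreement x^T L x, with L the Laplacian of W's symmetric part."""
    A = (W + W.T) / 2
    L = np.diag(A.sum(axis=1)) - A
    return float(x @ L @ x)

# Toy example: path graph 1-2-3-4 with row-normalized adjacency.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
W = A / A.sum(axis=1, keepdims=True)
s = np.array([1.0, 0.5, -0.5, -1.0])   # innate opinions
x = fj_equilibrium(W, s)
print(dissensus(W, s), dissensus(W, x))
```

On this symmetric example, averaging pulls opinions together and equilibrium dissensus falls below the initial value, matching the claim that simple FJ variants alone cannot amplify disagreement.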

📝 Abstract
We study how targeted content injection can strategically disrupt social networks. Using the Friedkin-Johnsen (FJ) model, we use a measure of social dissensus and show that (i) simple FJ variants cannot significantly perturb the network, (ii) extending the model admits valid graph structures where disruption at equilibrium exceeds the initial state, and (iii) altering an individual's inherent opinion can maximize disruption. Building on these insights, we design a reinforcement learning framework to fine-tune a Large Language Model (LLM) for generating disruption-oriented text. Experiments on synthetic and real-world data confirm that tuned LLMs can approach theoretical disruption limits. Our findings raise important considerations for content moderation, adversarial information campaigns, and generative model regulation.
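Claim (iii) — that altering one individual's inherent opinion can maximize disruption — can be illustrated with a brute-force search. This is a sketch under assumed ingredients (a small made-up network, uniform susceptibility, opinions bounded in [-1, 1]), not the paper's optimization method.

```python
import numpy as np

def fj_equilibrium(W, s, alpha=0.5):
    """FJ equilibrium opinions for influence matrix W and innate opinions s."""
    return np.linalg.solve(np.eye(len(s)) - (1 - alpha) * W, alpha * s)

def dissensus(W, x):
    """Disagreement x^T L x over the symmetrized graph Laplacian."""
    A = (W + W.T) / 2
    L = np.diag(A.sum(axis=1)) - A
    return float(x @ L @ x)

def best_single_perturbation(W, s, values=(-1.0, 1.0)):
    """Set each agent's innate opinion to each extreme value in turn and
    return the (agent, value, dissensus) triple maximizing equilibrium
    dissensus."""
    best = (None, None, -np.inf)
    for i in range(len(s)):
        for v in values:
            s2 = s.copy()
            s2[i] = v
            d = dissensus(W, fj_equilibrium(W, s2))
            if d > best[2]:
                best = (i, v, d)
    return best

A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
W = A / A.sum(axis=1, keepdims=True)
s = np.array([0.2, 0.1, 0.0, -0.1])     # near-consensus innate opinions
i, v, d = best_single_perturbation(W, s)
print(f"perturb agent {i} to {v}: equilibrium dissensus {d:.3f}")
```

Exhaustive search scales as O(n · |values|) equilibrium solves; the paper's point is that an RL-tuned LLM can realize such a perturbation through generated content rather than direct opinion edits.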
Problem

Research questions and friction points this paper is trying to address.

Can targeted content injection strategically disrupt a social network?
Which extensions of the FJ opinion model allow disruption at equilibrium to exceed the initial state?
Can a fine-tuned LLM generate text that approaches the theoretical disruption limits?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extended FJ model showing that altering an individual's innate opinion maximizes dissensus
Reinforcement learning framework that fine-tunes an LLM for disruption-oriented text
Closed-loop coupling of model-driven perturbation with controllable LLM generation
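The RL fine-tuning listed above needs a scalar reward tying generated text to network-level disruption. Below is a hedged sketch of one plausible reward shape (the paper's exact reward is not given here): the message is mapped to a shift in a target agent's innate opinion, and the reward is the resulting gain in equilibrium dissensus. `stance_score` is a hypothetical stand-in for a trained stance classifier.

```python
import numpy as np

def fj_equilibrium(W, s, alpha=0.5):
    """FJ equilibrium opinions for influence matrix W and innate opinions s."""
    return np.linalg.solve(np.eye(len(s)) - (1 - alpha) * W, alpha * s)

def dissensus(W, x):
    """Disagreement x^T L x over the symmetrized graph Laplacian."""
    A = (W + W.T) / 2
    L = np.diag(A.sum(axis=1)) - A
    return float(x @ L @ x)

def stance_score(text):
    # Hypothetical placeholder: a real pipeline would use a trained
    # classifier mapping text to an opinion value in [-1, 1].
    return float(np.clip(text.count("!") * 0.5 - 0.5, -1.0, 1.0))

def disruption_reward(W, s, target, text, alpha=0.5):
    """Reward = equilibrium dissensus after setting `target`'s innate
    opinion to the message's stance, minus the baseline dissensus."""
    base = dissensus(W, fj_equilibrium(W, s, alpha))
    s2 = s.copy()
    s2[target] = stance_score(text)
    return dissensus(W, fj_equilibrium(W, s2, alpha)) - base

# Toy usage: path graph of 3 agents starting at consensus.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
W = A / A.sum(axis=1, keepdims=True)
s = np.zeros(3)
r_hot = disruption_reward(W, s, 0, "Wake up!!!")
r_mild = disruption_reward(W, s, 0, "ok")
print(r_hot, r_mild)
```

Because dissensus is quadratic in opinions, the more extreme stance earns the larger reward, which is the gradient signal an RL fine-tuning loop would follow.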