Behavior Change as a Signal for Identifying Social Media Manipulation

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of detecting automated or coordinated inauthentic accounts on social media that adapt their strategies to evade detection. The authors propose a paradigm centered on the distribution of behavioral change: user activity sequences are first encoded into symbolic strings using BLOC, then segmented so that changes between adjacent segments can be measured, yielding evolution-aware feature vectors that capture temporal dynamics. A supervised classifier is trained on these representations to identify manipulative behavior. This study is the first to systematically treat behavioral change distributions as a primary signal, revealing that authentic accounts exhibit consistent distributions of behavioral change, whereas social bots display either very low or very high change and coordinated accounts show highly similar change distributions within a campaign. Experiments demonstrate good accuracy in both social bot and coordinated inauthentic behavior detection, confirming the discriminative power of behavioral change signals.

📝 Abstract
Social media accounts engaging in online manipulation can change their behaviors for re-purposing or to evade detection. Existing detection systems are built on features that do not exploit such behavioral patterns. Here we investigate the degree to which change in behavior can serve as a signal for identifying automated or coordinated accounts. First, we use Behavioral Languages for Online Characterization (BLOC) to represent the behavior of a social media account as a sequence of symbols that represent the account's actions and content. Second, we segment an account's BLOC strings and measure the changes between consecutive segments. Third, we represent an account as a feature vector that captures the distribution of behavioral change values. Finally, the resulting features are used to train and test supervised classifiers. We apply the proposed method to two detection tasks aimed at automated behavior (social bots) and coordinated inauthentic behavior (information operations). Our results reveal that the distributions of behavioral changes tend to be consistent across authentic accounts, while social bots exhibit either very low or very high behavioral change. Coordinated inauthentic accounts exhibit highly similar distributions of behavioral change within the same campaign, but diverse distributions across campaigns. These patterns allow our classifiers to achieve good accuracy in both tasks, demonstrating the effectiveness of behavioral change as a signal for identifying online manipulation.
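The four-step pipeline in the abstract (symbolic encoding, segmentation, change measurement, change-distribution features) can be sketched as follows. This is a minimal illustration under assumptions not stated here: a toy symbol alphabet, fixed-size segments, cosine distance between segment symbol counts as the change measure, and a simple histogram as the feature vector. BLOC's actual alphabets and the paper's change metric may differ.

```python
# Toy sketch of the behavioral-change pipeline; segment sizes, the
# distance function, and the binning scheme are illustrative choices,
# not the paper's exact method.
from collections import Counter
from math import sqrt

def segment(bloc_string, size):
    """Split a BLOC-like symbol string into fixed-size segments."""
    return [bloc_string[i:i + size] for i in range(0, len(bloc_string), size)]

def cosine_distance(a, b):
    """1 - cosine similarity between two segments' symbol-count vectors."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[s] * cb[s] for s in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return 1.0 - dot / (na * nb) if na and nb else 1.0

def change_distribution(bloc_string, size=4, bins=5):
    """Histogram of change values between consecutive segments:
    the account's evolution-aware feature vector."""
    segs = segment(bloc_string, size)
    changes = [cosine_distance(s1, s2) for s1, s2 in zip(segs, segs[1:])]
    hist = [0] * bins
    for c in changes:
        hist[min(int(c * bins), bins - 1)] += 1
    total = len(changes) or 1
    return [h / total for h in hist]

# A steady account's changes concentrate in the low-change bin, while an
# account that abruptly switches behavior also populates the high-change bin:
steady = change_distribution("TpTpTpTpTpTpTpTp")  # e.g. tweet/reply pattern
abrupt = change_distribution("TTTTTTTTrrrrrrrr")  # sudden switch of action type
```

These per-account feature vectors would then be fed to an off-the-shelf supervised classifier, as in the paper's final step.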
Problem

Research questions and friction points this paper is trying to address.

social media manipulation
behavior change
social bots
coordinated inauthentic behavior
detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

behavioral change
social media manipulation
BLOC
coordinated inauthentic behavior
social bots
Isuru Ariyarathne
Department of Data Science, William & Mary, Williamsburg, Virginia, USA
Gangani Ariyarathne
Department of Data Science, William & Mary, Williamsburg, Virginia, USA
Alessandro Flammini
Indiana University - School of Informatics, Computing and Engineering
computational social science, network science, data science
Filippo Menczer
Luddy Distinguished Professor of Informatics and Computer Science, Indiana University
Misinformation, Web Science, Network Science, Computational Social Science, Social Media
Alexander C. Nwala
Department of Data Science, William & Mary, Williamsburg, Virginia, USA