🤖 AI Summary
This paper investigates interaction effects and efficacy decay in data-driven algorithmic systems—such as language model classifiers and recommender systems—under coordinated interventions by multiple distinct user collectives. Addressing the limitation of prior work, which predominantly assumes a single homogeneous user group, we introduce the first theoretical framework for multi-collective algorithmic collective action. We design a controlled experimental platform to empirically analyze interventions via strategic data manipulation, systematic variation of collective size and homogeneity, and modeling of adversarial behaviors. Results show that parallel interventions by two collectives can reduce one collective’s intervention efficacy by up to 75%, confirming that unintended inter-collective interactions significantly degrade aggregate outcomes. In recommendation settings, collective size proves more decisive than homogeneity. The study uncovers emergent, non-linear dynamics under multi-collective coexistence and underscores the necessity of enhanced algorithmic transparency and user-level data agency—proposing a novel paradigm for algorithmic governance.
📝 Abstract
As data-dependent algorithmic systems have become impactful in more domains of life, the need for individuals to promote their own interests and hold algorithms accountable has grown. To have meaningful influence, individuals must band together to engage in collective action. Groups that engage in such algorithmic collective action are likely to vary in size, membership characteristics, and, crucially, objectives. In this work, we introduce a first-of-its-kind framework for studying collective action with two or more collectives that strategically behave to manipulate data-driven systems. With more than one collective acting on a system, unexpected interactions may occur. We use this framework to conduct experiments with language model-based classifiers and recommender systems where two collectives each attempt to achieve their own individual objectives. We examine how differing objectives, strategies, sizes, and homogeneity can impact a collective's efficacy. We find that the unintentional interactions between collectives can be quite significant; a collective acting in isolation may be able to achieve its objective (e.g., improve classification outcomes for its members or promote a particular item), but when a second collective acts simultaneously, the efficacy of the first group drops by as much as 75%. We find that, in the recommender system context, neither fully heterogeneous nor fully homogeneous collectives stand out as most efficacious, and that heterogeneity's impact is secondary compared to collective size. Our results signal the need for more transparency in both the underlying algorithmic models and the different behaviors individuals or collectives may take on these systems. This approach also allows collectives to hold algorithmic system developers accountable and provides a framework for people to actively use their own data to promote their own interests.
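The interference effect described above can be illustrated with a minimal toy sketch (not from the paper): a popularity-based recommender where each collective inflates interaction counts for its own target item. The item names, counts, and collective sizes are all hypothetical, chosen only to show how one collective's promotion can succeed in isolation yet fail once a second collective acts on the same system.

```python
from collections import Counter

def top_k(interactions, k):
    # Toy popularity recommender: rank items by interaction count.
    return [item for item, _ in Counter(interactions).most_common(k)]

# Hypothetical organic interaction log: items "a".."e", descending popularity.
organic = ["a"] * 50 + ["b"] * 40 + ["c"] * 30 + ["d"] * 20 + ["e"] * 10

def efficacy(target, extra, k=1):
    # 1.0 if the collective's target item reaches the top-k slate, else 0.0.
    return 1.0 if target in top_k(organic + extra, k) else 0.0

# Collective X (35 members) promotes "d"; collective Y (50 members) promotes "e".
boost_x = ["d"] * 35   # lifts "d" to 55 interactions
boost_y = ["e"] * 50   # lifts "e" to 60 interactions

solo_x = efficacy("d", boost_x)             # X acting alone -> "d" tops the chart
joint_x = efficacy("d", boost_x + boost_y)  # Y acts too -> "e" displaces "d"
print(solo_x, joint_x)
```

In isolation, X's boost puts "d" ahead of every organic item, so `solo_x` is 1.0; when Y simultaneously promotes "e" with a larger collective, "e" takes the single recommendation slot and X's efficacy collapses to 0.0. This mirrors, in miniature, the paper's finding that a collective can meet its objective alone yet lose most of its efficacy under parallel interventions.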