🤖 AI Summary
This study addresses the frequent neglect of potential psychological harms in social media algorithmic recommendation systems, a neglect that often produces a disconnect between users' emotional experiences and platform feedback. Through participatory design workshops with 21 individuals diagnosed with mental health conditions, complemented by qualitative interviews and thematic analysis, the research introduces the concept of "entanglement" to describe the affective misalignment between user agency and algorithmic outcomes. The authors translate user-generated folk theories into actionable design principles, advocating contextualized interactions and the restoration of explicit user control to mitigate entanglement effects. This work offers both a theoretical framework and concrete design strategies for building recommendation systems that actively support psychological well-being.
📝 Abstract
Social media platforms have rapidly adopted algorithmic curation with little consideration for the potential harm to users' mental well-being. We present findings from design workshops with 21 participants diagnosed with mental illness, focusing on their interactions with social media platforms. We find that users develop cause-and-effect explanations, or folk theories, to understand their experiences with algorithmic curation. These folk theories highlight a breakdown in algorithmic design that we explain using the framework of entanglement, a phenomenon in which users' actions and platform outcomes become disconnected at an emotional level. Participants' designs to address entanglement and mitigate harms centered on contextualizing their engagement and restoring explicit user control on social media. The conceptualization of entanglement and the resulting design recommendations have implications for social computing and recommender systems research, particularly in evaluating and designing social media platforms that support users' mental well-being.