Algorithmic Mirror: Designing an Interactive Tool to Promote Self-Reflection for YouTube Recommendations

📅 2025-04-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the problem that recommender algorithms implicitly shape users’ digital identities and erode their autonomy. To mitigate this, we propose an interactive reflective tool designed to enhance algorithmic literacy. Methodologically, we introduce the novel “hypothetical inference” paradigm: leveraging large language models, the tool reconstructs the semantic inferences a platform’s algorithm might draw from users’ fragmented behavioral data, generating interpretable, personalized explanations; we further integrate temporal evolution visualization (i.e., time-series graphs) into algorithmic literacy tool design. In a qualitative human–computer interaction study with 14 participants, the tool improved users’ critical awareness of algorithmic systems and their capacity for self-explanation. Our contribution lies in advancing explainable AI toward end-user operability, demonstrating a novel, black-box-agnostic approach that empowers users to reflect upon and autonomously regulate algorithmic influences without requiring access to proprietary algorithmic internals.

📝 Abstract
Big Data analytics and Artificial Intelligence systems derive non-intuitive and often unverifiable inferences about individuals' behaviors, preferences, and private lives. Drawing on diverse, feature-rich datasets of unpredictable value, these systems erode the intuitive connection between our actions and how we are perceived, diminishing control over our digital identities. While Explainable Artificial Intelligence scholars have attempted to explain the inner workings of algorithms, their visualizations frequently overwhelm end-users with complexity. This research introduces 'hypothetical inference', a novel approach that uses language models to simulate how algorithms might interpret users' digital footprints and infer personal characteristics without requiring access to proprietary platform algorithms. Through empirical studies with fourteen adult participants, we identified three key design opportunities to foster critical algorithmic literacy: (1) reassembling scattered digital footprints into a unified map, (2) simulating algorithmic inference through LLM-generated interpretations, and (3) incorporating temporal dimensions to visualize evolving patterns. This research lays the groundwork for tools that can help users recognize the influence of data on platforms and develop greater autonomy in increasingly algorithm-mediated digital environments.
Problem

Research questions and friction points this paper is trying to address.

Understanding how algorithms infer personal traits from digital footprints
Reducing complexity in explaining algorithmic decisions to users
Enhancing user control over digital identities in AI-driven platforms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses language models for hypothetical algorithmic inference
Reassembles digital footprints into unified visual maps
Simulates temporal patterns in algorithmic interpretations
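The paper does not publish its prompting code; as an illustrative sketch only, the three innovations above might combine as follows. The footprint schema, field names, and prompt wording here are assumptions for illustration, not the authors’ implementation: scattered watch events are reassembled into a single chronological map, which is then wrapped in a prompt asking a language model to role-play the platform’s recommender and state its hypothetical inferences.

```python
from datetime import datetime

def build_inference_prompt(footprints):
    """Reassemble scattered digital footprints (hypothetical schema:
    dicts with 'timestamp' and 'title') into one chronological map,
    then frame it as a hypothetical-inference prompt for an LLM."""
    # Unified map: order fragmented events by time to expose temporal patterns.
    ordered = sorted(footprints, key=lambda f: f["timestamp"])
    history = "\n".join(
        f"{f['timestamp']:%Y-%m-%d}: watched '{f['title']}'" for f in ordered
    )
    # Ask the model to simulate the platform's inferences, not to access them.
    return (
        "You are a video platform's recommender system.\n"
        "Based only on the watch history below, state which personal\n"
        "characteristics and interests you would infer, and explain why.\n\n"
        f"Watch history:\n{history}"
    )

footprints = [
    {"timestamp": datetime(2025, 3, 2), "title": "Beginner yoga routine"},
    {"timestamp": datetime(2025, 1, 15), "title": "Intro to machine learning"},
]
prompt = build_inference_prompt(footprints)
```

The resulting prompt would be sent to any large language model; because the approach only simulates what an algorithm *might* infer, it needs no access to the platform’s proprietary internals.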
Yui Kondo
Oxford Internet Institute, University of Oxford, UK
Qing Xiao
Human-Computer Interaction Institute, Carnegie Mellon University, USA
Jun Zhao
Department of Computer Science, University of Oxford, UK
Luc Rocher
Associate Professor, University of Oxford
Privacy · Algorithm Auditing · Algorithmic Fairness · Machine Learning