🤖 AI Summary
This work addresses the problem that recommender algorithms implicitly shape users’ digital identities and erode their autonomy. To mitigate this, we propose an interactive reflective tool designed to enhance algorithmic literacy. Methodologically, we introduce the novel “hypothetical inference” paradigm: using large language models, the tool reconstructs the semantic inferences a platform’s algorithms might draw from users’ fragmented behavioral data, generating interpretable, personalized explanations; we further pioneer the integration of temporal evolution visualization (i.e., time-series graphs) into the design of algorithmic literacy tools. In a qualitative human–computer interaction study with 14 participants, the tool markedly improved users’ critical awareness of algorithmic systems and their capacity for self-explanation. Our contribution lies in advancing explainable AI toward end-user operability: we demonstrate a novel, black-box-agnostic approach that empowers users to reflect on and autonomously regulate algorithmic influences without requiring access to proprietary algorithmic internals.
📝 Abstract
Big Data analytics and Artificial Intelligence systems derive non-intuitive and often unverifiable inferences about individuals' behaviors, preferences, and private lives. Drawing on diverse, feature-rich datasets of unpredictable value, these systems erode the intuitive connection between our actions and how we are perceived, diminishing our control over our digital identities. While Explainable Artificial Intelligence scholars have attempted to explain the inner workings of algorithms, their visualizations frequently overwhelm end-users with complexity. This research introduces 'hypothetical inference', a novel approach that uses language models to simulate how algorithms might interpret users' digital footprints and infer personal characteristics, without requiring access to proprietary platform algorithms. Through empirical studies with fourteen adult participants, we identified three key design opportunities for fostering critical algorithmic literacy: (1) reassembling scattered digital footprints into a unified map, (2) simulating algorithmic inference through LLM-generated interpretations, and (3) incorporating temporal dimensions to visualize evolving patterns. This research lays the groundwork for tools that help users recognize how their data shapes what platforms show them and develop greater autonomy in increasingly algorithm-mediated digital environments.