Why am I seeing this: Democratizing End User Auditing for Online Content Recommendations

📅 2024-10-07
🏛️ arXiv.org
📈 Citations: 2 · Influential: 0
🤖 AI Summary
Contemporary recommender systems personalize content using user attributes or inferred data but suffer from poor explainability and auditability, hindering users’ ability to make informed privacy decisions and undermining algorithmic accountability. This paper introduces the first lightweight, privacy-preserving, end-user–oriented algorithmic auditing paradigm: an interactive sandbox that enables users to actively formulate hypotheses—via synthetically generated user profiles and behavioral data—and observe system responses (e.g., ad delivery) in real time, transforming black-box attribution into a verifiable hypothesis-testing process. The approach integrates synthetic data modeling, a user-facing sandbox interface, and an A/B-style response observation framework. A user study demonstrates significant improvements in users’ comprehension of recommendation logic, attribution accuracy, and privacy decision-making capability; in advertising scenarios, hypothesis validation success reached 92%.

📝 Abstract
Personalized recommendation systems tailor content based on user attributes, which are either provided or inferred from private data. Research suggests that users often hypothesize about the reasons behind content they encounter (e.g., "I see this jewelry ad because I am a woman"), but they lack the means to confirm these hypotheses due to the opaqueness of these systems. This hinders informed decision-making about privacy and system use and contributes to the lack of algorithmic accountability. To address these challenges, we introduce a new interactive sandbox approach. This approach creates sets of synthetic user personas and corresponding personal data that embody realistic variations in personal attributes, allowing users to test their hypotheses by observing how a website's algorithms respond to these personas. We tested the sandbox in the context of targeted advertisement. Our user study demonstrates its usability, usefulness, and effectiveness in empowering end-user auditing in a case study of targeted ads.
Problem

Research questions and friction points this paper is trying to address.

Users cannot verify why they see specific content recommendations
Opaque systems hinder privacy and informed decision-making
Lack of tools for end-user auditing of algorithmic behavior
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interactive sandbox for user hypothesis testing
Synthetic user personas simulate real variations
Enables auditing of recommendation algorithms
👥 Authors
Chaoran Chen · University of Notre Dame, USA
Leyang Li · Unknown affiliation · AIGC, RAG
Luke Cao · University of Notre Dame, USA
Yanfang Ye · University of Notre Dame, USA
Tianshi Li · Assistant Professor, Northeastern University · Human-Computer Interaction, Privacy, Human-Centered AI Privacy
Yaxing Yao · Assistant Professor at Johns Hopkins · Privacy, IoT, HCI
Toby Li · University of Notre Dame, USA