Filters of Identity: AR Beauty and the Algorithmic Politics of the Digital Body

📅 2025-06-24
🤖 AI Summary
This study examines how AR beauty filters implicitly reinforce racialized, gendered, and ableist aesthetic norms through naming conventions, algorithmic bias, and platform governance, embedding digital body politics into everyday technological practice. Employing a critical technical studies approach, it integrates algorithmic auditing, digital body theory, and platform governance analysis to empirically investigate the aesthetic disciplining logic of mainstream AR filters. The research introduces a "transparency-oriented intervention" framework, advocating algorithmic explainability, decolonial naming practices, and reconfigured platform accountability as levers for critically redesigning AR aesthetics. It not only deconstructs filters as politically embedded, rather than neutral, technologies but also advances a theoretically grounded, practice-oriented model of algorithmic governance centered on fairness and bodily diversity.

📝 Abstract
This position paper situates AR beauty filters within the broader debate on Body Politics in HCI. We argue that these filters are not neutral tools but technologies of governance that reinforce racialized, gendered, and ableist beauty standards. Through naming conventions, algorithmic bias, and platform governance, they impose aesthetic norms while concealing their influence. To address these challenges, we advocate for transparency-driven interventions and a critical rethinking of algorithmic aesthetics and digital embodiment.
Problem

Research questions and friction points this paper is trying to address.

AR beauty filters reinforce biased beauty standards
Algorithmic bias and platform governance impose norms
Need transparency and rethinking of digital aesthetics
Innovation

Methods, ideas, or system contributions that make the work stand out.

AR beauty filters as governance technologies
Transparency-driven interventions for bias
Critical rethink of algorithmic aesthetics
Miriam Doh
PhD student, Université Libre de Bruxelles (ULB), Université de Mons (UMONS)
Computer vision · Face analysis · Trustworthy AI
Corinna Canali
Design, Diversity, and New Commons Research Group, Universität der Künste Berlin, Weizenbaum Institute, Germany
Nuria Oliver
ELLIS Alicante, Spain