🤖 AI Summary
This study examines how AR beauty filters implicitly reinforce racialized, gendered, and ableist aesthetic norms through naming conventions, algorithmic bias, and platform governance, embedding digital body politics in everyday technological practice. Taking a critical technical studies approach that integrates algorithmic auditing, digital body theory, and platform governance analysis, it empirically investigates the aesthetic disciplining logic of mainstream AR filters. The work introduces a "transparency-oriented intervention" framework, advocating algorithmic explainability, decolonial naming practices, and reconfigured platform accountability as levers for critically redesigning AR aesthetics. It thus deconstructs filters as politically embedded rather than neutral technologies, and advances a theoretically grounded, practice-oriented model of algorithmic governance centered on fairness and bodily diversity.
📝 Abstract
This position paper situates AR beauty filters within the broader debate on Body Politics in HCI. We argue that these filters are not neutral tools but technologies of governance that reinforce racialized, gendered, and ableist beauty standards. Through naming conventions, algorithmic bias, and platform governance, they impose aesthetic norms while concealing their own influence. In response, we advocate transparency-driven interventions and a critical rethinking of algorithmic aesthetics and digital embodiment.