🤖 AI Summary
This study identifies critical data-security and privacy challenges faced by ethnic minorities when accessing key online public services (healthcare, social housing, and energy) in the UK. Drawing on 44 semi-structured interviews and guided by Critical Race Theory (CRT), it uses thematic analysis to uncover how institutional and digital racism heighten the risks of disclosing personal data. Three core patterns emerge: (1) privacy as a primary concern, (2) selective or inconsistent data disclosure, and (3) behavioural adaptation driven by institutional distrust. Building on these findings, the study introduces an original "marginalisation-aware design" framework that reconceptualises privacy as agency, i.e., as a mechanism for empowerment rather than mere protection. This framework offers empirically grounded, actionable design principles for anti-discriminatory, inclusive, and privacy-by-design digital public services.
📝 Abstract
Minoritised ethnic people are marginalised in society, and therefore at higher risk of online harms, including those arising from the loss of security and privacy of personal data. Despite this, there has been very little research focused on minoritised ethnic people's security and privacy concerns, attitudes, and behaviours. In this work, we provide the results of one of the first studies in this regard. We explore minoritised ethnic people's experiences of using essential online services in three sectors (health, social housing, and energy), their security- and privacy-related concerns, and their responses to these services. We conducted a thematic analysis of 44 semi-structured interviews with people of various reported minoritised ethnicities in the UK. Privacy concerns and lack of control over personal data emerged as a major theme, with many interviewees considering privacy their most significant concern when using online services. Several creative tactics to exercise some agency were reported, including selective and inconsistent disclosure of personal data. A core concern about how data may be used was driven by a fear of repercussions, including penalisation and discrimination, influenced by prior experiences of institutional and online racism. The increased concern and potential for harm left minoritised ethnic people grappling with a higher-stakes dilemma of whether or not to disclose personal information online. Furthermore, trust in institutions, or the lack thereof, was found to be embedded throughout as a basis for adapting behaviour. We draw on our results to provide lessons learned for the design of more inclusive, marginalisation-aware, and privacy-preserving online services.