Evaluating the Contextual Integrity of False Positives in Algorithmic Travel Surveillance

📅 2025-05-30
🤖 AI Summary
This study examines the substantive impact of false positives from algorithmic air-travel surveillance on passengers' privacy and data protection rights. Method: drawing on a nationally representative Finnish survey (N=1550), it combines situational experiments, statistical modelling, and interdisciplinary ethical–legal analysis to develop, for the first time, a quantifiable framework for false-positive acceptability grounded in the theory of *contextual integrity*. Contribution/Results: even under extremely high false-positive rates, the public in a high-trust society exhibits broad tolerance, challenging regulatory paradigms centred on individual rights. The study also identifies a systematic cognitive blind spot regarding privacy harms to third parties. These results provide empirical evidence and normative critique for EU governance of algorithmic surveillance, advocating a regulatory shift from technical efficacy toward *contextual legitimacy*.

📝 Abstract
International air travel is highly surveilled. While surveillance is deemed necessary for law enforcement to prevent and detect terrorism and other serious crimes, even the most accurate algorithmic mass surveillance systems produce high numbers of false positives. Despite the potential impact of false positives on the fundamental rights of millions of passengers, algorithmic travel surveillance is lawful in the EU. However, as the system's processing practices and accuracy are kept secret by law, it is unknown to what degree passengers accept the system's interference with their rights to privacy and data protection. We conducted a nationally representative survey of the adult population of Finland (N=1550) to assess their attitudes towards algorithmic mass surveillance in air travel and its potential expansion to other travel contexts. Furthermore, we developed a novel approach for estimating the threshold beyond which the number of false positives breaches individuals' perception of contextual integrity. Surprisingly, when faced with a trade-off between privacy and security, even very high false positive counts were perceived as legitimate. This result could be attributed to Finland's high-trust cultural context, but also raises questions about people's capacity to account for privacy harms that happen to other people. We conclude by discussing how legal and ethical approaches to legitimising algorithmic surveillance based on individual rights may overlook the statistical or systemic properties of mass surveillance.
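The abstract's point that "even the most accurate algorithmic mass surveillance systems produce high numbers of false positives" is the classic base-rate effect. A minimal sketch (not the paper's method; all figures below are illustrative assumptions, not from the study) shows how a rare target population combines with an imperfect classifier:

```python
# Hedged sketch of the base-rate effect in mass screening.
# The passenger count, prevalence, sensitivity, and specificity
# below are illustrative assumptions, not values from the study.

def expected_false_positives(n_passengers, prevalence, sensitivity, specificity):
    """Expected counts of flagged-but-innocent vs. correctly flagged passengers."""
    true_targets = n_passengers * prevalence
    innocents = n_passengers - true_targets
    false_positives = innocents * (1.0 - specificity)  # innocents wrongly flagged
    true_positives = true_targets * sensitivity        # genuine targets caught
    return false_positives, true_positives

# Assumed: 10 million passengers, 1 in 100,000 is a genuine target,
# and a highly accurate system (99% sensitivity, 99.9% specificity).
fp, tp = expected_false_positives(10_000_000, 1e-5, 0.99, 0.999)
print(f"false positives: {fp:,.0f}, true positives: {tp:,.0f}")
# roughly 10,000 innocent passengers flagged for every ~99 genuine targets
```

Even at 99.9% specificity, the innocent majority dominates the flagged set, which is why the paper asks at what false-positive count public perception of contextual integrity breaks down.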
Problem

Research questions and friction points this paper is trying to address.

Assessing public acceptance of algorithmic travel surveillance's privacy impact
Evaluating false positive thresholds in mass surveillance systems
Examining rights trade-offs in high-trust security contexts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Nationally representative survey on surveillance attitudes
Novel approach estimating false positive thresholds
Examining privacy-security trade-offs in high-trust contexts
Alina Wernick
Research Group Lead, The Law AI and Society Group, The CZS Institute for AI and Law
Law & Technology · IP Law · Sociolegal Research · Artificial Intelligence · Human Rights
A. Medlar
Department of Computer Science, University of Helsinki, Helsinki, Finland
Sofia Soderholm
Legal Tech Lab, Faculty of Law, University of Helsinki, Helsinki, Finland
D. Głowacka
Department of Computer Science, University of Helsinki, Helsinki, Finland