🤖 AI Summary
This study examines the substantive impact of algorithmic surveillance false positives in air travel on passengers’ privacy and data protection rights. Method: Drawing on a nationally representative Finnish survey (N=1550), it integrates situational experiments, statistical modeling, and interdisciplinary ethical–legal analysis to develop, for the first time, a quantifiable framework for false-positive acceptability grounded in the theory of *contextual integrity*. Contribution/Results: Findings reveal that, even at extremely high false-positive counts, the public in a high-trust society exhibits broad tolerance—challenging individual-rights-centric regulatory paradigms. Moreover, the study identifies a systematic cognitive blind spot regarding third-party privacy harms. These results provide critical empirical evidence and normative critique for EU governance of algorithmic surveillance, advocating a regulatory shift from technical efficacy toward *contextual legitimacy*.
📝 Abstract
International air travel is highly surveilled. While surveillance is deemed necessary for law enforcement to prevent and detect terrorism and other serious crimes, even the most accurate algorithmic mass surveillance systems produce high numbers of false positives. Despite the potential impact of false positives on the fundamental rights of millions of passengers, algorithmic travel surveillance is lawful in the EU. However, as the system's processing practices and accuracy are kept secret by law, it is unknown to what degree passengers accept the system's interference with their rights to privacy and data protection. We conducted a nationally representative survey of the adult population of Finland (N=1550) to assess their attitudes towards algorithmic mass surveillance in air travel and its potential expansion to other travel contexts. Furthermore, we developed a novel approach for estimating the threshold beyond which the number of false positives breaches individuals' perception of contextual integrity. Surprisingly, when faced with a trade-off between privacy and security, even very high false positive counts were perceived as legitimate. This result could be attributed to Finland's high-trust cultural context, but it also raises questions about people's capacity to account for privacy harms that happen to other people. We conclude by discussing how legal and ethical approaches to legitimising algorithmic surveillance based on individual rights may overlook the statistical or systemic properties of mass surveillance.
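The abstract's claim that even highly accurate screening systems produce large numbers of false positives follows from the base-rate effect: when true targets are extremely rare, the false alarms among the vast non-target population dwarf the true detections. The sketch below illustrates this with purely hypothetical numbers (passenger volume, prevalence, sensitivity, and specificity are illustrative assumptions, not figures from the study):

```python
def screening_outcomes(n_passengers, prevalence, sensitivity, specificity):
    """Expected true-positive and false-positive counts from mass screening.

    All parameters are illustrative assumptions for this sketch, not
    values reported in the paper (actual system accuracy is secret by law).
    """
    targets = n_passengers * prevalence
    non_targets = n_passengers - targets
    true_positives = targets * sensitivity            # correctly flagged targets
    false_positives = non_targets * (1 - specificity)  # innocents flagged
    return true_positives, false_positives

# Hypothetical example: 100 million passengers, 1-in-a-million true targets,
# a system with 99% sensitivity and 99.9% specificity.
tp, fp = screening_outcomes(100_000_000, 1e-6, 0.99, 0.999)
print(round(tp), round(fp))  # roughly 99 true hits vs ~100,000 false alarms
```

Under these assumed parameters, false alarms outnumber true detections by about a thousand to one, which is the statistical property of mass surveillance the abstract's conclusion points to.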