Busting the Paper Ballot: Voting Meets Adversarial Machine Learning

📅 2025-06-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work exposes real-world adversarial security risks of machine learning classifiers deployed in U.S. optical-scan voting systems for bubble-mark recognition. We identify gradient masking caused by numerical instability in standard loss functions and propose a modified difference-of-logits-ratio loss to mitigate it. To our knowledge, this is the first study to realize physically printable and scannable adversarial ballot attacks. Experiments span diverse models, including SVMs, CNNs (VGG, ResNet), and vision transformers (Twins, CaiT), and validate vulnerabilities under end-to-end physical scanning pipelines. Critically, even a 5% attack success rate can suffice to flip an election outcome. Our work extends robustness analysis beyond the digital domain to the full physical voting chain, from ballot printing and human marking to optical scanning and tabulation, establishing a new paradigm and empirical benchmark for AI security assessment in electoral infrastructure.

📝 Abstract
We show the security risk associated with using machine learning classifiers in United States election tabulators. The central classification task in election tabulation is deciding whether a mark does or does not appear on a bubble associated with an alternative in a contest on the ballot. Barretto et al. (E-Vote-ID 2021) reported that convolutional neural networks are a viable option in this field, as they outperform simple feature-based classifiers. Our contributions to election security can be divided into four parts. First, to demonstrate and analyze the hypothetical vulnerability of machine learning models on election tabulators, we introduce four new ballot datasets. Second, we train and test a variety of different models on our new datasets. These models include support vector machines, convolutional neural networks (a basic CNN, VGG, and ResNet), and vision transformers (Twins and CaiT). Third, using our new datasets and trained models, we demonstrate that traditional white-box attacks are ineffective in the voting domain due to gradient masking. Our analyses further reveal that gradient masking is a product of numerical instability. We use a modified difference-of-logits-ratio loss to overcome this issue (Croce and Hein, ICML 2020). Fourth, in the physical world, we conduct attacks with the adversarial examples generated using our new methods. In traditional adversarial machine learning, a high (50% or greater) attack success rate is ideal. However, for certain elections, even a 5% attack success rate can flip the outcome of a race. We show such an impact is possible in the physical domain. We thoroughly discuss attack realism, and the challenges and practicality associated with printing and scanning ballot adversarial examples.
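The modified loss used in the abstract builds on the standard difference-of-logits-ratio (DLR) loss of Croce and Hein (ICML 2020). The paper's modification is not reproduced here; the following is a minimal sketch of the standard DLR loss for a single input, which is scale-invariant in the logits and so avoids the vanishing gradients that cross-entropy can exhibit when logits become extreme:

```python
import numpy as np

def dlr_loss(logits, y):
    """Standard DLR loss (Croce & Hein, 2020) for one example.

    logits: raw model outputs over the classes
    y: index of the true class
    Returns -(z_y - max_{i != y} z_i) / (z_(1) - z_(3)), where z_(k)
    is the k-th largest logit. Positive values indicate the input is
    already misclassified; the ratio is invariant to logit rescaling.
    """
    z = np.asarray(logits, dtype=float)
    z_sorted = np.sort(z)[::-1]                    # logits, decreasing
    # highest logit among the classes other than y
    max_other = z_sorted[1] if np.argmax(z) == y else z_sorted[0]
    # normalize by the gap between the 1st- and 3rd-largest logits
    return -(z[y] - max_other) / (z_sorted[0] - z_sorted[2] + 1e-12)
```

Because the numerator and denominator scale together, multiplying all logits by a large constant leaves the loss (and its gradient direction) unchanged, which is what defeats the numerical-instability form of gradient masking.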
Problem

Research questions and friction points this paper is trying to address.

Security risks in ML-based election tabulators
Vulnerability to adversarial attacks in voting systems
Impact of low success rate attacks on election outcomes
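The last point, that even a low attack success rate can change a result, follows from simple arithmetic in close races. The numbers below are invented purely for illustration (the paper does not report this specific tally):

```python
# Hypothetical two-candidate race showing why a 5% attack success
# rate can flip the outcome. All figures are illustrative.
total_a, total_b = 5_100, 4_900      # honest tally: A wins by 200
attack_rate = 0.05                   # 5% of A's marks misclassified

flipped = int(total_a * attack_rate) # 255 bubble marks misread
attacked_a = total_a - flipped       # votes A loses to misclassification
attacked_b = total_b + flipped       # ...counted for B instead

print(attacked_a, attacked_b)        # 4845 vs 5155: B now wins
```

Any race whose margin is smaller than the attack rate times the leader's vote share is vulnerable in this way, which is why the usual 50%+ success-rate benchmark from adversarial ML understates the threat here.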
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduced four new ballot datasets
Used modified difference of logits ratio loss
Conducted physical attacks with adversarial examples
Kaleel Mahmood
Assistant Professor, University of Rhode Island
Adversarial Machine Learning, Machine Learning, Computer Vision, Security
Caleb Manicke
Research Assistant, University of Connecticut
Adversarial Machine Learning, Biometrics, Optimization
Ethan Rathbun
Northeastern University, Khoury College of Computer Sciences, Boston, Massachusetts, United States
Aayushi Verma
University of Connecticut, Voting Technology Center, Storrs, Connecticut, United States
Sohaib Ahmad
University of Connecticut, Voting Technology Center, Storrs, Connecticut, United States
Nicholas Stamatakis
Stony Brook University, Department of Computer Science, Stony Brook, New York, United States
Laurent Michel
CSE, University of Connecticut
Constraint Programming, Discrete Optimization, Voting Technology, Forecasting
Benjamin Fuller
University of Connecticut
Applied Cryptography, Election Security, Authentication, Biometrics, Key Derivation