Unified Framework for Qualifying Security Boundary of PUFs Against Machine Learning Attacks

πŸ“… 2026-01-08
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the lack of a unified theoretical framework and formal security guarantees in evaluating the resilience of physical unclonable functions (PUFs) against machine learning (ML) modeling attacks. We propose the first formal security analysis framework based on conditional probability estimation, leveraging information-theoretic and probabilistic modeling to define an adversary’s advantage as the ability to predict an unknown response given a set of challenge-response pairs. This approach establishes attack-model-agnostic lower bounds on PUF security, enabling theoretically grounded comparisons of ML attack resistance across different PUF architectures. The framework provides quantifiable and actionable security guarantees. Applying it, we systematically analyze the security limits of Arbiter, XOR, and Feed-Forward PUFs and present the first formal verification of the CT PUF, revealing its fundamental theoretical security bound.

πŸ“ Abstract
Physical Unclonable Functions (PUFs) serve as lightweight, hardware-intrinsic entropy sources widely deployed in IoT security applications. However, delay-based PUFs are vulnerable to Machine Learning Attacks (MLAs), undermining their assumed unclonability. There are no valid metrics for evaluating PUF MLA resistance other than empirical modelling experiments, which lack theoretical guarantees and are highly sensitive to advances in machine learning techniques. To address this fundamental gap between PUF designs and security qualifications, this work proposes a novel, formal, and unified framework for evaluating PUF security against modelling attacks by providing security lower bounds that are independent of specific attack models or learning algorithms. We mathematically characterise the adversary's advantage in predicting responses to unseen challenges based solely on observed challenge-response pairs (CRPs), formulating the problem as conditional probability estimation over the space of candidate PUFs. We present our analysis of previously "broken" PUFs, e.g., Arbiter PUFs, XOR PUFs, and Feed-Forward PUFs, and for the first time compare their MLA resistance in a formal way. In addition, we evaluate the currently "secure" CT PUF and show its security boundary. We demonstrate that the proposed approach systematically quantifies PUF resilience, captures subtle security differences, and provides actionable, theoretically grounded security guarantees for the practical deployment of PUFs.
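The abstract's central idea, bounding an adversary's advantage via conditional probability estimation over candidate PUFs, can be illustrated with a toy sketch. This is not the authors' code: it uses a simplified linear-threshold Arbiter PUF model (the standard parity feature map, here without the extra bias dimension) on a tiny 4-bit challenge space so the candidate space can be enumerated exhaustively. The adversary observes a few CRPs, keeps only candidate delay vectors consistent with them, and predicts an unseen response by the fraction of consistent candidates answering 1; the distance of that fraction from 1/2 is the advantage.

```python
# Hedged sketch (assumed simplified model, not the paper's implementation):
# adversary advantage as a conditional probability over candidate PUFs.
import itertools
import random

N = 4  # challenge bits (toy size so exhaustive enumeration is feasible)

def parity_features(challenge):
    # Arbiter-PUF parity feature map: phi_i = prod_{j >= i} (1 - 2 c_j)
    phi = []
    for i in range(len(challenge)):
        p = 1
        for bit in challenge[i:]:
            p *= 1 - 2 * bit
        phi.append(p)
    return phi

def response(w, challenge):
    # Linear-threshold PUF model: r = 1 iff w . phi(c) >= 0
    s = sum(wi * fi for wi, fi in zip(w, parity_features(challenge)))
    return 1 if s >= 0 else 0

# Candidate PUF space: all delay-sign vectors with entries in {-1, +1}
candidates = list(itertools.product((-1, 1), repeat=N))

random.seed(0)
true_w = random.choice(candidates)

# Adversary observes a handful of challenge-response pairs...
observed = [tuple(random.randrange(2) for _ in range(N)) for _ in range(6)]
crps = [(c, response(true_w, c)) for c in observed]

# ...and predicts an unseen challenge by conditioning on consistency:
# P(r* = 1 | CRPs) = fraction of still-consistent candidates answering 1.
target = tuple(random.randrange(2) for _ in range(N))
consistent = [w for w in candidates
              if all(response(w, c) == r for c, r in crps)]
p1 = sum(response(w, target) for w in consistent) / len(consistent)
advantage = abs(p1 - 0.5)  # 0 = coin flip, 0.5 = certain prediction
print(f"consistent candidates: {len(consistent)}, "
      f"P(r*=1|CRPs)={p1:.2f}, advantage={advantage:.2f}")
```

Because the advantage is computed over *all* candidates consistent with the observed CRPs, it upper-bounds what any learning algorithm can extract from those CRPs, which is the attack-model-agnostic flavour of bound the paper formalises.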
Problem

Research questions and friction points this paper is trying to address.

Physical Unclonable Functions
Machine Learning Attacks
Security Evaluation
Modelling Attacks
Security Boundary
Innovation

Methods, ideas, or system contributions that make the work stand out.

Physical Unclonable Functions
Machine Learning Attacks
Security Evaluation Framework
Conditional Probability Estimation
Security Lower Bound
πŸ”Ž Similar Papers
No similar papers found.
Hongming Fei
National University of Singapore, Singapore
Zilong Hu
National University of Singapore, Singapore
P. Gope
The University of Sheffield, Sheffield, UK
Biplab Sikdar
Provost's Chair Professor, National University of Singapore
Internet of Things, Cyber-Physical Systems, Computer Networks