AI Summary
This work addresses the lack of a unified theoretical framework and formal security guarantees in evaluating the resilience of physical unclonable functions (PUFs) against machine learning (ML) modeling attacks. We propose the first formal security analysis framework based on conditional probability estimation, leveraging information-theoretic and probabilistic modeling to define an adversary's advantage as the ability to predict an unknown response given a set of challenge-response pairs. This approach establishes attack-model-agnostic lower bounds on PUF security, enabling theoretically grounded comparisons of ML attack resistance across different PUF architectures. The framework provides quantifiable and actionable security guarantees. Applying it, we systematically analyze the security limits of Arbiter, XOR, and Feed-Forward PUFs and present the first formal verification of the CT PUF, revealing its fundamental theoretical security bound.
Abstract
Physical Unclonable Functions (PUFs) serve as lightweight, hardware-intrinsic entropy sources widely deployed in IoT security applications. However, delay-based PUFs are vulnerable to Machine Learning Attacks (MLAs), undermining their assumed unclonability. No valid metrics exist for evaluating PUF MLA resistance beyond empirical modelling experiments, which lack theoretical guarantees and are highly sensitive to advances in machine learning techniques. To address this fundamental gap between PUF designs and security qualifications, this work proposes a novel, formal, and unified framework for evaluating PUF security against modelling attacks by providing security lower bounds, independent of specific attack models or learning algorithms. We mathematically characterise the adversary's advantage in predicting responses to unseen challenges based solely on observed challenge-response pairs (CRPs), formulating the problem as a conditional probability estimation over the space of candidate PUFs. We present our analysis on previously "broken" PUFs, e.g., Arbiter PUFs, XOR PUFs, and Feed-Forward PUFs, and for the first time compare their MLA resistance in a formal way. In addition, we evaluate the currently "secure" CT PUF and show its security boundary. We demonstrate that the proposed approach systematically quantifies PUF resilience, captures subtle security differences, and provides actionable, theoretically grounded security guarantees for the practical deployment of PUFs.
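To make the conditional-probability formulation concrete, here is a minimal Monte Carlo sketch (not the paper's method; all sizes and the linear delay model are illustrative assumptions). It models a toy Arbiter PUF as a linear threshold function over the standard parity-feature transform, samples the candidate-PUF space, keeps candidates consistent with the observed CRPs, and estimates P(r = 1 | CRPs) for one unseen challenge; the adversary's advantage is then the deviation of that estimate from 1/2.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STAGES = 8          # toy Arbiter PUF size (illustrative assumption)
N_CANDIDATES = 50000  # Monte Carlo sample of the candidate-PUF space
N_OBSERVED = 6        # CRPs revealed to the adversary

def feature(ch):
    """Parity-feature transform for a delay-based Arbiter PUF."""
    phi = np.cumprod((1 - 2 * ch)[::-1])[::-1]
    return np.append(phi, 1.0)  # bias term

def response(w, ch):
    """Response = sign of the accumulated delay difference."""
    return int(w @ feature(ch) > 0)

# Ground-truth PUF and the CRPs the adversary observes.
w_true = rng.standard_normal(N_STAGES + 1)
chals = rng.integers(0, 2, size=(N_OBSERVED + 1, N_STAGES))
resps = np.array([response(w_true, c) for c in chals])
seen, unseen = chals[:-1], chals[-1]

# Sample candidate PUFs, keep those consistent with every observed CRP,
# then estimate the conditional probability of r = 1 on the unseen challenge.
cands = rng.standard_normal((N_CANDIDATES, N_STAGES + 1))
feats = np.array([feature(c) for c in seen])
consistent = np.all((cands @ feats.T > 0) == resps[:-1], axis=1)
p1 = float(np.mean(cands[consistent] @ feature(unseen) > 0))
advantage = abs(p1 - 0.5)
print(f"consistent candidates: {int(consistent.sum())}")
print(f"P(r=1 | CRPs) ~ {p1:.3f}, advantage ~ {advantage:.3f}")
```

In this toy setting the advantage grows as more CRPs shrink the consistent candidate set; a real analysis would replace the Monte Carlo estimate with the closed-form bounds the paper derives.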