🤖 AI Summary
Existing hardware-level Spectre defenses lack automated security-verification tooling during the early microarchitectural design phase. Method: This paper proposes AMuLeT, the first automated testing framework that targets speculative-execution defenses at this early design stage. It introduces a relational side-channel leakage detection framework integrated with a simulator adapter, combining model-based relational testing, attacker-customizable observation models, state-space pruning, and vulnerability-amplifying sampling to substantially improve the efficiency of leakage search and amplification. Results: The framework systematically evaluates four mainstream defenses within three hours, reproduces three known vulnerabilities, and discovers six previously unknown side-channel leaks. It also provides the first empirical evidence that the open-source SpecLFB implementation is insecure. This work establishes an efficient, scalable, early-stage verification paradigm for secure processor design.
📝 Abstract
In recent years, several hardware-based countermeasures proposed to mitigate Spectre attacks have been shown to be insecure. To enable the development of effective secure speculation countermeasures, we need easy-to-use tools that can automatically test their security guarantees early on in the design phase, facilitating rapid prototyping. This paper develops AMuLeT, the first tool capable of testing secure speculation countermeasures for speculative leakage early in their design phase, in simulators. Our key idea is to leverage model-based relational testing tools that can detect speculative leaks in commercial CPUs, and apply them to micro-architectural simulators to test secure speculation defenses. We identify and overcome several challenges, including designing an expressive yet realistic attacker observer model in a simulator, overcoming slow simulation speed, and searching the vast micro-architectural state space for potential vulnerabilities. AMuLeT improves test throughput by more than 10x over a naive design and uses amplification techniques to surface vulnerabilities within a limited test budget. Using AMuLeT, we launch the first systematic, large-scale testing campaign of four secure speculation countermeasures from 2018 to 2024 (InvisiSpec, CleanupSpec, STT, and SpecLFB) and uncover 3 known and 6 unknown bugs and vulnerabilities within 3 hours of testing. We also show, for the first time, that the open-source implementation of SpecLFB is insecure.
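To illustrate the core idea behind model-based relational testing, the sketch below shows the relational check in miniature: a test case is run on input pairs that agree on public inputs but differ in secrets, and any difference in the attacker-visible trace flags a leak. This is a minimal conceptual sketch, not AMuLeT's actual interface; `observe` is a hypothetical stand-in for executing a test case on an instrumented simulator and collecting the attacker's observations (e.g., accessed cache sets).

```python
# Conceptual sketch of relational leakage testing (hypothetical names,
# not AMuLeT's real API). A defense is flagged as leaky if the attacker's
# observation trace differs between two runs that share public inputs
# but use different secrets.
import random

def observe(program, public_input, secret):
    """Stand-in for running `program` on a simulator and recording the
    attacker-visible trace. The "leaky" program models a defense that
    lets a secret-dependent cache set reach the observation."""
    trace = [hash((program, public_input))]   # public-input-determined part
    if program == "leaky":
        trace.append(secret % 64)             # secret-dependent cache set
    return trace

def relational_test(program, trials=100):
    """Return True iff some input pair with equal public inputs but
    different secrets produces different observations (a leak)."""
    for _ in range(trials):
        pub = random.randrange(1 << 16)
        s1 = random.randrange(1 << 16)
        s2 = random.randrange(1 << 16)
        if observe(program, pub, s1) != observe(program, pub, s2):
            return True   # secret influenced the attacker's observation
    return False

print(relational_test("safe"))    # observations depend only on public input
print(relational_test("leaky"))   # secret reaches the trace, leak detected
```

In AMuLeT's setting, the observation model and the notion of "allowed" leakage come from a leakage contract, and the challenge addressed by the paper is making this check fast and effective inside a slow micro-architectural simulator.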