SILENT: A New Lens on Statistics in Software Timing Side Channels

📅 2025-04-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In software timing side-channel analysis, developers often avoid empirical measurements because conventional statistical tests (e.g., Student's t-test) rest on strong distributional and independence assumptions and offer no formal statistical guarantees. Method: This paper introduces the first nonparametric statistical testing framework that models data dependencies, lets analysts pre-specify the sample size, and controls the false-positive rate via a formally defined negligible-leakage threshold Δ. It requires no prior distributional assumptions and employs adaptive sampling and leakage modeling. Contribution/Results: Evaluated on synthetic benchmarks and real-world cryptographic libraries (e.g., OpenSSL), the method achieves significantly higher detection power than baseline approaches while producing zero false positives. To our knowledge, this is the first work to integrate formal statistical guarantees into empirical timing side-channel analysis, substantially improving the reliability and practical deployability of measurement-based results.

📝 Abstract
Cryptographic research takes software timing side channels seriously. Approaches to mitigate them include constant-time coding and techniques to enforce such practices. However, recent attacks like Meltdown [42], Spectre [37], and Hertzbleed [70] have challenged our understanding of what it means for code to execute in constant time on modern CPUs. To ensure that assumptions on the underlying hardware are correct and to create a complete feedback loop, developers should also perform *timing measurements* as a final validation step to ensure the absence of exploitable side channels. Unfortunately, as highlighted by a recent study by Jancar et al. [30], developers often avoid measurements due to the perceived unreliability of the statistical analysis and its guarantees. In this work, we combat the view that statistical techniques only provide weak guarantees by introducing a new algorithm for the analysis of timing measurements with strong, formal statistical guarantees, giving developers a reliable analysis tool. Specifically, our algorithm (1) is non-parametric, making minimal assumptions about the underlying distribution and thus overcoming limitations of classical tests like the t-test, (2) handles unknown data dependencies in measurements, (3) can estimate in advance how many samples are needed to detect a leak of a given size, and (4) allows the definition of a negligible leak threshold Δ, ensuring that acceptable non-exploitable leaks do not trigger false positives, without compromising statistical soundness. We demonstrate the necessity, effectiveness, and benefits of our approach on both synthetic benchmarks and real-world applications.
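Two of the abstract's ingredients, a non-parametric test statistic and a negligible-leak threshold Δ, can be illustrated with a simple permutation test on two timing samples. This is only a hedged sketch, not the paper's algorithm: a plain permutation test assumes exchangeable (independent) measurements, which the paper's method explicitly does not require, and the Δ handling here (discounting the observed gap by Δ) is a simplification. All names below are illustrative.

```python
import numpy as np

def permutation_leak_test(t_a, t_b, delta=0.0, n_perm=2000, seed=0):
    """Illustrative non-parametric leak check (NOT the paper's algorithm).

    Compares the medians of two timing samples (e.g., two secret-dependent
    input classes) and reports a permutation p-value for gaps larger than
    the negligible-leak threshold `delta`.
    """
    rng = np.random.default_rng(seed)
    obs = abs(np.median(t_a) - np.median(t_b))
    if obs <= delta:
        # Gap is within the acceptable leak threshold: no alarm.
        return obs, 1.0
    pooled = np.concatenate([t_a, t_b])
    n = len(t_a)
    exceed = 0
    for _ in range(n_perm):
        # Under the null, group labels are exchangeable: reshuffle and
        # recompute the median gap to build the reference distribution.
        rng.shuffle(pooled)
        stat = abs(np.median(pooled[:n]) - np.median(pooled[n:]))
        if stat >= obs - delta:
            exceed += 1
    # Add-one correction keeps the p-value strictly positive.
    return obs, (exceed + 1) / (n_perm + 1)
```

A small Δ lets benign jitter through without raising alarms, while a clear timing gap between the two input classes still yields a small p-value.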
Problem

Research questions and friction points this paper is trying to address.

Developers lack reliable statistical tools for timing side-channel analysis
Existing methods fail to account for timing behavior on modern CPUs
Current statistical approaches provide only weak guarantees for leak detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Non-parametric algorithm for timing analysis
Handles unknown data dependencies in measurements
Estimates required samples for leak detection
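The last bullet, estimating in advance how many samples are needed to detect a leak of a given size, can be approximated with a textbook two-sample power calculation under a normal approximation. The paper's own estimator is non-parametric and dependency-aware, so treat this only as a back-of-envelope sketch; `sigma` (per-sample timing noise) and `leak` (smallest gap worth detecting) are illustrative parameter names.

```python
from math import ceil
from statistics import NormalDist

def samples_needed(sigma, leak, alpha=0.05, power=0.9):
    """Rough per-group sample size for a two-sample mean comparison.

    Standard normal-approximation power formula; the paper's actual
    estimator makes no such distributional assumption.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha)  # one-sided significance
    z_beta = NormalDist().inv_cdf(power)       # desired detection power
    return ceil(2 * ((z_alpha + z_beta) * sigma / leak) ** 2)
```

As expected, halving the detectable leak size roughly quadruples the number of measurements required, which is why a pre-specifiable sample size matters in practice.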
Martin Dunsche
Ruhr-Universität Bochum
Differential Privacy, Statistics

Patrick Bastian
Ruhr-Universität Bochum
Mathematical Statistics

Marcel Maehren
Ruhr University Bochum, Germany

Nurullah Erinola
Ruhr University Bochum, Germany

Robert Merget
Technology Innovation Institute (TII)
TLS

Nicolai Bissantz
Ruhr University Bochum, Germany

Holger Dette
Ruhr University Bochum, Germany

Jörg Schwenk
Ruhr University Bochum, Germany