LASHED: LLMs And Static Hardware Analysis for Early Detection of RTL Bugs

📅 2025-04-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Static analysis for hardware security vulnerability detection at the RTL design stage suffers from weak semantic understanding, high false-positive rates, and uninterpretable security-impact assessments. Method: This paper proposes the first large language model (LLM)-augmented hardware static analysis framework, integrating LLMs with conventional RTL static analyzers and SoC-level verification workflows. It introduces a novel "rethinking" mechanism and CWE-guided in-context-learning prompts to enable precise parsing of vulnerability semantics, asset-aware analysis, and interpretable root-cause attribution. Contribution/Results: Evaluated on four open-source SoCs against five CWE-identified vulnerability classes, the framework achieves 87.5% recommendation accuracy, substantially reduces false positives, and improves both the traceability and the explainability of security impacts, bridging the gap between conventional static hardware analysis and AI-driven reasoning.

📝 Abstract
While static analysis is useful in detecting early-stage hardware security bugs, its efficacy is limited because it requires information to form checks and is often unable to explain the security impact of a detected vulnerability. Large Language Models can be useful in filling these gaps by identifying relevant assets, removing false violations flagged by static analysis tools, and explaining the reported violations. LASHED combines the two approaches (LLMs and Static Analysis) to overcome each other's limitations for hardware security bug detection. We investigate our approach on four open-source SoCs for five Common Weakness Enumerations (CWEs) and present strategies for improvement with better prompt engineering. We find that 87.5% of instances flagged by our recommended scheme are plausible CWEs. In-context learning and asking the model to 'think again' improves LASHED's precision.
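The abstract's division of labor (static analysis flags candidate violations; an LLM identifies relevant assets, removes false positives, and explains what remains) can be sketched as a small triage loop. This is a minimal illustration, not the paper's actual interfaces: the `Violation` fields, the asset-membership heuristic standing in for the LLM call, and all names are assumptions.

```python
# Sketch of a LASHED-style triage loop: a conventional static analyzer
# produces candidate violations, and an LLM pass (stubbed here) keeps only
# asset-relevant ones, attaching a CWE-grounded explanation to each.
from dataclasses import dataclass

@dataclass
class Violation:
    signal: str   # RTL signal the check fired on
    cwe: str      # candidate CWE class, e.g. "CWE-1244"
    detail: str   # raw message from the static analyzer

def llm_review(v: Violation, assets: set[str]) -> tuple[bool, str]:
    """Stand-in for the LLM call: keep a violation only if it touches a
    known security asset; otherwise treat it as a false positive."""
    if v.signal in assets:
        return True, f"{v.cwe}: '{v.signal}' is a security asset; {v.detail}"
    return False, f"'{v.signal}' is not asset-relevant; likely false positive"

def triage(violations: list[Violation], assets: set[str]) -> list[str]:
    """Filter static-analysis output through the (stubbed) LLM reviewer."""
    reports = []
    for v in violations:
        keep, explanation = llm_review(v, assets)
        if keep:
            reports.append(explanation)
    return reports

flagged = [
    Violation("debug_unlock", "CWE-1244", "debug signal gates reset of a lock"),
    Violation("scratch_reg", "CWE-1244", "register writable in all modes"),
]
print(triage(flagged, assets={"debug_unlock", "aes_key"}))
```

In this toy run only the asset-relevant violation survives, mirroring how the paper uses the LLM to remove false violations rather than to detect bugs from scratch.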
Problem

Research questions and friction points this paper is trying to address.

Combining LLMs and static analysis for RTL bug detection
Reducing false violations in hardware security analysis
Improving precision with in-context learning and prompting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines LLMs and static analysis for bug detection
Uses LLMs to explain and filter static analysis results
Improves precision with in-context learning techniques
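The two prompting ideas above (CWE-guided in-context examples and asking the model to "think again") can be sketched as a two-turn prompt builder. The template wording, example text, and function names are assumptions for illustration; the paper's actual prompts are not reproduced here.

```python
# Hedged sketch of CWE-guided in-context prompting with a "rethink" turn.
# CWE_EXAMPLES and the template text are illustrative, not from the paper.
CWE_EXAMPLES = {
    "CWE-1244": (
        "Example: a debug port remains readable after the lifecycle state "
        "transitions to production -> vulnerability confirmed."
    ),
}

def build_prompt(cwe: str, rtl_snippet: str) -> list[str]:
    """Return a two-turn prompt: a CWE-guided initial query, then a
    'think again' turn asking the model to re-check its own verdict."""
    first = (
        f"You are auditing RTL for {cwe}.\n"
        f"{CWE_EXAMPLES.get(cwe, '')}\n"
        f"Code under review:\n{rtl_snippet}\n"
        "Is this a plausible instance of the CWE? Explain the root cause."
    )
    rethink = (
        "Think again: re-examine your verdict above, checking whether the "
        "flagged signal is actually a security asset before concluding."
    )
    return [first, rethink]

turns = build_prompt("CWE-1244", "assign dbg_en = lock_reg | jtag_req;")
print(len(turns))
```

The second turn is the precision lever: per the abstract, in-context learning and the "think again" follow-up are what reduce false positives rather than any change to the underlying static checks.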