🤖 AI Summary
This survey addresses the inefficiency and insufficient coverage of security verification for complex hardware systems by presenting a structured integration of artificial intelligence and large language models (LLMs) across the entire hardware security verification pipeline, spanning asset identification, threat modeling, test generation, simulation analysis, formal verification, and countermeasure reasoning. Using the open-source NVDLA accelerator as a case study, the authors construct a multidimensional verification framework that combines simulation, formal methods, and benchmark evaluation. The study indicates that AI/LLM-based automation can substantially accelerate the verification process, while stressing that trustworthy hardware security assurance requires grounding such automation in multiple sources of evidence.
📝 Abstract
As hardware systems grow in complexity, security verification must keep pace. Recently, artificial intelligence (AI) and large language models (LLMs) have started to play an important role in automating several stages of the verification workflow by helping engineers analyze designs, reason about potential threats, and generate verification artifacts. This survey synthesizes recent advances in AI-assisted hardware security verification and organizes the literature along the key stages of that workflow: asset identification, threat modeling, security test-plan generation, simulation-driven analysis, formal verification, and countermeasure reasoning. To illustrate how these techniques can be applied in practice, we present a case study on the open-source NVIDIA Deep Learning Accelerator (NVDLA), a representative modern hardware design. Throughout this study, we emphasize that while AI/LLM-based automation can significantly accelerate verification tasks, its outputs must remain grounded in simulation evidence, formal reasoning, and benchmark-driven evaluation to ensure trustworthy hardware security assurance.