🤖 AI Summary
To address the complexity and inefficiency of constructing reference models for integrated circuit functional verification, this paper proposes the first large language model (LLM)-driven agile reference model generation and verification platform. Methodologically, it introduces a hierarchical agile modeling and modular generation framework that integrates design specification parsing, hierarchical modeling, LLM-based hardware code generation, and co-synthesis with formal verification, thereby overcoming critical limitations in LLMs' structural reasoning and reliability for hardware reference model synthesis. Experimental evaluation across 300 industrial design cases demonstrates a 55.02% improvement in reference model generation efficiency, a 9.18× increase in model generation capacity, and a 5.90× acceleration of design–verification iteration cycles. The platform significantly enhances model stability and functional correctness while enabling standardized design practices and closed-loop verification iteration.
📝 Abstract
As the complexity of integrated circuit designs continues to escalate, functional verification becomes increasingly challenging. Reference models, critical for accelerating the verification process, are themselves becoming more intricate and time-consuming to develop. Despite the promise shown by large language models (LLMs) in code generation, effectively generating complex reference models remains a significant hurdle. To address these challenges, we introduce ChatModel, the first LLM-aided agile reference model generation and verification platform. ChatModel streamlines the transition from design specifications to fully functional reference models by integrating design standardization with hierarchical agile modeling. Employing a building-block generation strategy, it not only enhances the design capabilities of LLMs for reference models but also significantly boosts verification efficiency. We evaluated ChatModel on 300 designs of varying complexity, demonstrating substantial improvements in both the efficiency and the quality of reference model generation. ChatModel achieved a peak performance improvement of 55.02% over alternative methods, with notable gains in generation stability, and delivered a 9.18x increase in its capacity to produce reference model designs. Furthermore, it accelerated the iterative process of reference model design and validation by an average of 5.90x compared to traditional approaches. These results highlight the potential of ChatModel to significantly advance the automation of reference model generation and validation.