🤖 AI Summary
Existing code generation research focuses primarily on general-purpose languages (e.g., Python, Java), neglecting Solidity, the core programming language for Ethereum smart contracts, and lacks systematic, security- and efficiency-aware evaluation benchmarks tailored to it.
Method: We introduce SolEval, the first repository-level Solidity code generation benchmark, comprising 1,125 real-world samples drawn from 9 GitHub repositories across six application domains. SolEval uniquely integrates dual evaluation dimensions, gas consumption (runtime cost) and security (vulnerability rate), quantified via static analysis and on-chain simulation.
Contribution/Results: Evaluating ten mainstream large language models (LLMs), we find that the best-performing model achieves only 26.29% Pass@10, exposing critical limitations in generating secure, gas-efficient Solidity contracts. SolEval establishes the first rigorous, blockchain-specific evaluation framework for LLM-based Solidity code generation, addressing a key gap in both programming-language and blockchain AI research.
📝 Abstract
Large language models (LLMs) have transformed code generation. However, most existing approaches focus on mainstream languages such as Python and Java, neglecting Solidity, the predominant programming language for Ethereum smart contracts. Due to the lack of adequate benchmarks for Solidity, LLMs' ability to generate secure, cost-effective smart contracts remains unexplored. To fill this gap, we construct SolEval, the first repository-level benchmark designed for Solidity smart contract generation, to evaluate the performance of LLMs on Solidity. SolEval consists of 1,125 samples from 9 different repositories, covering 6 popular domains, providing LLMs with a comprehensive evaluation benchmark. Unlike the existing Solidity benchmark, SolEval not only includes complex function calls but also reflects the real-world complexity of the Ethereum ecosystem by incorporating gas fees and vulnerability rates. We evaluate 10 LLMs on SolEval, and our results show that the best-performing LLM achieves only 26.29% Pass@10, highlighting substantial room for improvement in Solidity code generation by LLMs. We release our data and code at https://anonymous.4open.science/r/SolEval-1C06/.
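For context on the headline number: Pass@k metrics like the 26.29% Pass@10 reported here are conventionally computed with the unbiased estimator introduced for HumanEval (Chen et al., 2021), which estimates the probability that at least one of k samples drawn from n generated candidates passes the tests. The abstract does not state SolEval's exact protocol, so this is a sketch of the standard formula, not the paper's specific implementation:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator.

    n: total generated samples per problem
    c: number of samples that pass the tests
    k: the k in pass@k (k <= n)

    Returns 1 - C(n-c, k) / C(n, k), computed stably as a product.
    """
    if n - c < k:
        # Fewer failing samples than k: every k-subset contains a pass.
        return 1.0
    return 1.0 - math.prod(1.0 - k / i for i in range(n - c + 1, n + 1))

# Per-benchmark Pass@10 is then the mean of pass_at_k(n, c, 10)
# over all problems, with n >= 10 samples per problem.
```

For example, a problem with 1 passing sample out of 2 gives pass@1 = 0.5, since a single draw hits the passing sample half the time.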