🤖 AI Summary
The reliability of large language models (LLMs) in generating Solidity smart contracts has not been systematically evaluated; prevailing benchmarks focus narrowly on isolated functions and synthetic inputs, failing to reflect real-world development scenarios. Method: We introduce SolContractEval, the first contract-level benchmark for Solidity code generation, comprising 124 on-chain tasks across nine core domains. It integrates dynamic validation via historical transaction replay, structured contract scaffolding, full dependency-aware context modeling, and multi-developer cross-annotation. Contribution/Results: Experiments reveal that current models perform reasonably on standard tasks but exhibit significant deficiencies in handling complex logic, cross-contract dependencies, and Solidity-specific semantics (e.g., reentrancy, gas optimization). Among the evaluated models, Claude-3.7-Sonnet achieves the highest overall performance. SolContractEval establishes a rigorous, realistic foundation for advancing trustworthy smart contract synthesis.
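To ground the "Solidity-specific semantics" mentioned above, the sketch below shows the reentrancy pitfall in miniature: a withdrawal routine that follows the checks-effects-interactions pattern so a reentrant callback cannot drain funds. The contract and its names are hypothetical illustrations, not tasks drawn from the benchmark.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/// Hypothetical vault used only to illustrate the reentrancy
/// pattern discussed above; not a SolContractEval task.
contract Vault {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    /// Safe withdrawal via checks-effects-interactions: the balance
    /// is zeroed *before* the external call, so a reentrant call
    /// into withdraw() sees no remaining funds.
    function withdraw() external {
        uint256 amount = balances[msg.sender];
        require(amount > 0, "nothing to withdraw");
        balances[msg.sender] = 0;                          // effect first
        (bool ok, ) = msg.sender.call{value: amount}("");  // interaction last
        require(ok, "transfer failed");
    }
}
```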
📝 Abstract
The rise of blockchain has brought smart contracts into mainstream use, creating a demand for smart contract generation tools. While large language models (LLMs) excel at generating code in general-purpose languages, their effectiveness on Solidity, the primary language for smart contracts, remains underexplored. Solidity constitutes only a small portion of typical LLM training data and differs from general-purpose languages in its version-sensitive syntax and limited flexibility. These factors raise concerns about the reliability of existing LLMs for Solidity code generation. Critically, existing evaluations, which focus on isolated functions and synthetic inputs, fall short of assessing models' capabilities in real-world contract development.
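As a concrete instance of the version sensitivity noted above, the fragment below compiles only under the compiler range its pragma declares and relies on behavior that changed across Solidity releases. It is a generic sketch for illustration, not material from the paper.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0; // accepts 0.8.x compilers only

contract VersionSensitive {
    // `now` was deprecated in favor of `block.timestamp` (Solidity 0.7.0),
    // so code targeting older compilers may not build under this pragma.
    uint256 public deployedAt = block.timestamp;

    // Since 0.8.0, arithmetic reverts on overflow by default; earlier
    // versions wrapped silently unless a SafeMath library was used.
    function add(uint256 a, uint256 b) external pure returns (uint256) {
        return a + b;
    }

    // Since 0.5.0, the `constructor` keyword is required; a function
    // named after the contract no longer acts as a constructor.
    constructor() {}
}
```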
To bridge this gap, we introduce SolContractEval, the first contract-level benchmark for Solidity code generation. It comprises 124 tasks drawn from real on-chain contracts across nine major domains. Each task input, consisting of complete context dependencies, a structured contract framework, and a concise task prompt, is independently annotated and cross-validated by experienced developers. To enable precise and automated evaluation of functional correctness, we also develop a dynamic evaluation framework based on historical transaction replay. Building on SolContractEval, we systematically evaluate six mainstream LLMs and report three findings. First, Claude-3.7-Sonnet achieves the highest overall performance, though all evaluated models underperform relative to their capabilities on class-level generation tasks in general-purpose programming languages. Second, current models perform better on tasks that follow standard patterns but struggle with complex logic and inter-contract dependencies. Finally, they exhibit limited understanding of Solidity-specific features and contextual dependencies.
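To make the task format concrete, an input combining "complete context dependencies" with a "structured contract framework" might look like the sketch below: a supplied interface plus a contract skeleton whose doc-commented function bodies the model must complete. The interface, contract, and names here are assumptions for illustration, not an actual SolContractEval task.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/// Context dependency supplied with the task.
interface IERC20 {
    function transferFrom(address from, address to, uint256 amount)
        external
        returns (bool);
}

/// Structured framework: state, signatures, and doc comments are
/// given; the model generates the marked function bodies.
contract Escrow {
    IERC20 public immutable token;
    mapping(address => uint256) public deposits;

    constructor(IERC20 _token) {
        token = _token;
    }

    /// @notice Pull `amount` tokens from the caller into escrow.
    function deposit(uint256 amount) external {
        // <body to be generated by the model>
    }
}
```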