🤖 AI Summary
Smart contract vulnerabilities cause substantial financial losses annually, yet existing automated analysis tools struggle to generate deployable defensive patches. This paper introduces FLAMES, the first end-to-end defense synthesis framework built on a domain-adapted large language model (LLM), requiring no vulnerability labels, symbolic execution, or human intervention. Through fill-in-the-middle supervised fine-tuning on runtime invariants automatically extracted from large-scale real-world contracts, the framework synthesizes Solidity `require` guard statements. The synthesized guards achieve a 96.7% compilation success rate; on a test set of 5,000 invariants, 44.5% are exact or semantically equivalent matches to ground truth, and the guards block 22 of 108 real-world attacks (20.4%), including the APEMAGA incident. The core contribution is the first application of LLMs to fully automated, unsupervised synthesis of production-grade safety assertions.
📝 Abstract
Smart contract vulnerabilities cost billions of dollars annually, yet existing automated analysis tools fail to generate deployable defenses. We present FLAMES, a novel automated approach that synthesizes executable runtime guards as Solidity `require` statements to harden smart contracts against exploits. Unlike prior work that relies on vulnerability labels, symbolic analysis, or natural language specifications, FLAMES employs domain-adapted large language models trained through fill-in-the-middle supervised fine-tuning on real-world invariants extracted from 514,506 verified contracts. Our extensive evaluation across four dimensions demonstrates FLAMES's effectiveness: (1) Compilation: FLAMES achieves 96.7% compilability for synthesized invariants; (2) Semantic Quality: on a curated test set of 5,000 challenging invariants, FLAMES produces exact or semantically equivalent matches to ground truth in 44.5% of cases; (3) Exploit Mitigation: FLAMES prevents 22 out of 108 real exploits (20.4%) while preserving contract functionality; and (4) Case Study: FLAMES successfully blocks the real-world APEMAGA incident by synthesizing a pre-condition that mitigates the attack. FLAMES establishes that domain-adapted LLMs can automatically generate production-ready security defenses for smart contracts without requiring vulnerability detection, formal specifications, or human intervention. We release our code, model weights, datasets, and evaluation infrastructure to enable reproducible research in this critical domain.
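To make the kind of output described above concrete, here is an illustrative sketch of a synthesized runtime guard: a pre-condition `require` statement inserted at the top of a state-changing function to enforce an extracted invariant. The `Vault` contract and the specific invariant are hypothetical examples, not taken from the paper's dataset:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical contract used only to illustrate a synthesized guard.
contract Vault {
    mapping(address => uint256) public balances;

    receive() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw(uint256 amount) external {
        // Synthesized guard (illustrative): a pre-condition enforcing the
        // runtime invariant that a caller cannot withdraw more than their
        // recorded balance. A violating transaction reverts before any
        // state change, blocking the exploit while preserving normal use.
        require(balances[msg.sender] >= amount, "invariant: balance >= amount");

        balances[msg.sender] -= amount;
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
    }
}
```

Because the guard is a plain Solidity statement, it compiles with the contract and executes on-chain with no external monitor, which is what distinguishes this style of defense from detection-only tooling.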