🤖 AI Summary
Decentralized finance (DeFi) platforms suffer billions of dollars in annual losses from exploitable business-logic and accounting vulnerabilities; existing defenses (static analysis, mempool filtering, and off-chain monitoring) fail against exploits submitted through private relays or malicious contracts that execute within the same block. Method: we propose the first fully on-chain, decentralized machine-learning defense framework: computationally intensive model training runs off-chain on Layer-2, verified lightweight updates propagate to Layer-1, and inference executes inside smart contracts under strict gas and latency constraints. We introduce Proof-of-Improvement (PoIm), a novel protocol that accepts a micro-update only if it strictly improves key security metrics, backed by economic penalties, and that supports multi-model inference with bit-level on-chain/off-chain consistency. Contributions: via quantization and loop unrolling, we achieve, for the first time, Ethereum-gas-compliant on-chain inference for logistic regression, SVMs, MLPs, CNNs, gated RNNs, and formally verified decision trees, with correctness proved in Z3. Evaluated on 298 real-world attacks across eight EVM chains ($3.74B in total losses), our framework demonstrates robustness against rapidly evolving threats.
📝 Abstract
Billions of dollars are lost every year on DeFi platforms to transactions that exploit business-logic or accounting vulnerabilities. Existing defenses rely on static code analysis, public-mempool screening, attacker-contract detection, or trusted off-chain monitors, none of which prevents exploits submitted through private relays or by malicious contracts that execute within the same block. We present the first decentralized, fully on-chain learning framework that: (i) performs gas-prohibitive computation on Layer-2 to reduce cost, (ii) propagates verified model updates to Layer-1, and (iii) enables gas-bounded, low-latency inference inside smart contracts. A novel Proof-of-Improvement (PoIm) protocol governs the training process and verifies each decentralized micro-update as a self-verifying training transaction. Updates are accepted by PoIm only if they demonstrably improve at least one core metric (e.g., accuracy, F1-score, precision, or recall) on a public benchmark without degrading any of the others, while adversarial proposals are financially penalized through a test set that adapts to evolving threats. We develop quantization and loop-unrolling techniques that enable inference for logistic regression, SVMs, MLPs, CNNs, and gated RNNs (with support for formally verified decision-tree inference) within the Ethereum block gas limit, while remaining bit-exact to their off-chain counterparts, as formally proven in Z3. We curate 298 unique real-world exploits (2020–2025) with 402 exploit transactions across eight EVM chains, collectively responsible for $3.74B in losses.
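The PoIm acceptance rule stated in the abstract (improve at least one core metric, degrade none) can be sketched as a simple predicate. This is a hypothetical illustration, not the paper's implementation; the metric names and the `poim_accepts` helper are assumptions for clarity.

```python
# Hypothetical sketch of the PoIm acceptance rule: a proposed micro-update
# is accepted only if it improves at least one core metric on the public
# benchmark without degrading any of the others.
CORE_METRICS = ("accuracy", "precision", "recall", "f1")

def poim_accepts(current: dict, proposed: dict) -> bool:
    """Return True iff the proposed update Pareto-improves the core metrics."""
    improved = any(proposed[m] > current[m] for m in CORE_METRICS)
    degraded = any(proposed[m] < current[m] for m in CORE_METRICS)
    return improved and not degraded

cur = {"accuracy": 0.91, "precision": 0.88, "recall": 0.85, "f1": 0.86}
new = {"accuracy": 0.91, "precision": 0.88, "recall": 0.87, "f1": 0.86}
assert poim_accepts(cur, new)  # recall rises, nothing falls: accepted
```

An update that trades one metric for another (e.g., higher recall but lower precision) would be rejected under this rule, which is what makes adversarial "improvement" proposals economically punishable rather than merely filtered.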
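The bit-exactness claim rests on a general property of integer quantization: if both the off-chain model and the on-chain contract evaluate the same integer arithmetic, their outputs agree bit for bit, since the EVM and the off-chain runtime compute identical integer operations. A minimal fixed-point sketch, assuming a 2^16 scaling factor and a toy logistic-regression score (both are illustrative choices, not the paper's parameters):

```python
# Illustration of bit-exact quantized inference: all arithmetic is on
# integers, so an EVM contract performing the same multiplies, floor
# divisions, and adds reproduces this result exactly.
SCALE = 2 ** 16  # assumed fixed-point scaling factor

def quantize(x: float) -> int:
    """Map a float to its fixed-point integer representation."""
    return int(round(x * SCALE))

def fixed_point_score(weights_q, features_q, bias_q) -> int:
    """Integer dot product plus bias; threshold at 0 to classify."""
    acc = bias_q
    for w, f in zip(weights_q, features_q):
        acc += (w * f) // SCALE  # rescale after each multiply
    return acc

w_q = [quantize(0.5), quantize(-1.25)]
x_q = [quantize(2.0), quantize(0.5)]
b_q = quantize(0.1)
score = fixed_point_score(w_q, x_q, b_q)
assert score > 0  # float equivalent: 0.5*2.0 - 1.25*0.5 + 0.1 = 0.475
```

Floating-point inference, by contrast, can diverge across platforms, which is why integer quantization is what makes an on-chain/off-chain consistency proof (here, the paper's Z3 verification) tractable.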