AI Summary
Smart contracts are vulnerable to manipulation attacks due to leakage of sensitive information, a confidentiality flaw, not an implementation bug. To address this, we propose PartitionGPT, the first program partitioning method for smart contracts that synergistically integrates large language model (LLM) in-context learning with static analysis. Guided by only a few labeled sensitive variables, it automatically partitions contracts into privileged and unprivileged code modules. Our contributions include: (i) the first application of LLM in-context learning to secure contract partitioning; (ii) a fine-grained, formally verifiable, sensitivity-driven partitioning paradigm; and (iii) joint optimization for compilation correctness, security, and code conciseness. Evaluated on 18 annotated contracts containing 99 sensitive functions, PartitionGPT achieves 78% compilable and formally verifiable partitions while reducing code size by ~30%. It successfully mitigates 8 out of 9 real-world manipulation attacks, preventing over $25 million in losses.
Abstract
Smart contracts are highly susceptible to manipulation attacks due to the leakage of sensitive information. Addressing manipulation vulnerabilities is particularly challenging because they stem from inherent data confidentiality issues rather than straightforward implementation bugs. To prevent such leakage of sensitive information, we present PartitionGPT, the first LLM-driven approach that combines static analysis with the in-context learning capabilities of large language models (LLMs) to partition smart contracts into privileged and normal codebases, guided by a few annotated sensitive data variables. We evaluated PartitionGPT on 18 annotated smart contracts containing 99 sensitive functions. The results demonstrate that PartitionGPT successfully generates compilable and verified partitions for 78% of the sensitive functions while reducing code size by approximately 30% compared to a function-level partitioning approach. Furthermore, we evaluated PartitionGPT on nine real-world manipulation attacks that led to a total loss of 25 million dollars; PartitionGPT effectively prevents eight of them, highlighting its potential for broad applicability and the necessity of secure program partitioning during smart contract development to diminish manipulation vulnerabilities.
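To make the sensitivity-driven partitioning idea concrete, the core decision (which functions belong in the privileged module) can be sketched as taint propagation over a contract's data dependencies. This is a hypothetical illustration of the general concept, not PartitionGPT's actual algorithm; all variable and function names below are made up.

```python
from collections import deque

def propagate_taint(deps, sensitive):
    """Forward-propagate sensitivity over data dependencies.
    `deps` maps a variable to the variables derived from it."""
    tainted, queue = set(sensitive), deque(sensitive)
    while queue:
        v = queue.popleft()
        for w in deps.get(v, []):
            if w not in tainted:
                tainted.add(w)
                queue.append(w)
    return tainted

def partition(functions, deps, sensitive):
    """Split functions into privileged / unprivileged sets based on
    whether they read any variable tainted by sensitive data."""
    tainted = propagate_taint(deps, sensitive)
    privileged = {f for f, reads in functions.items() if reads & tainted}
    return privileged, set(functions) - privileged

# Toy AMM-style contract: the spot price is derived from the reserves,
# and the reserves are the annotated sensitive variables.
deps = {"reserve0": ["price"], "reserve1": ["price"]}
functions = {
    "getPrice": {"price"},            # exposes derived sensitive state
    "swap":     {"price", "amount"},  # trades at the sensitive price
    "name":     {"tokenName"},        # metadata only
}
priv, unpriv = partition(functions, deps, {"reserve0", "reserve1"})
# priv -> {"getPrice", "swap"}; unpriv -> {"name"}
```

In this toy setting, `getPrice` and `swap` would land in the privileged module because they touch data derived from the annotated reserves, while `name` stays in the normal codebase; the paper's actual method additionally uses LLM in-context learning to generate and verify the resulting code partitions.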