LIFT: Automating Symbolic Execution Optimization with Large Language Models for AI Networks

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Dynamic symbolic execution (DSE) in distributed AI systems suffers from poor scalability, low efficiency, and limited ability to detect deep errors induced by complex network communication. Method: This paper proposes LIFT, the first framework to integrate large language models (LLMs) into semantics-preserving, context-sensitive functional-equivalence transformations of intermediate representations (IRs). LIFT employs a two-stage, semantically verifiable IR-optimization pipeline to automatically improve DSE's analysis efficiency and generality. Results: Evaluated on real-world binaries, LIFT reduces execution time by 53.5% on bigtest and 10.24% on random, while significantly decreasing IR statement counts, PUT instruction counts, and temporary-variable usage. This work establishes the first LLM-driven, formally verifiable IR-optimization technique, introducing a novel paradigm for program analysis in distributed systems.

📝 Abstract
Dynamic Symbolic Execution (DSE) is a key technique in program analysis, widely used in software testing, vulnerability discovery, and formal verification. In distributed AI systems, DSE plays a crucial role in identifying hard-to-detect bugs, especially those arising from complex network communication patterns. However, traditional approaches to symbolic execution are often hindered by scalability issues and inefficiencies, particularly in large-scale systems. This paper introduces LIFT (Large-language-model Integrated Functional-equivalent-IR Transformation), a novel framework that leverages Large Language Models (LLMs) to automate the optimization of Intermediate Representations (IRs) in symbolic execution. LIFT addresses the challenges of symbolic execution by providing a scalable, context-sensitive solution for IR transformation. The framework consists of two phases: IR Analysis and Optimization, in which LLMs optimize time-intensive IR blocks, and Symbolic Execution and Validation, which includes benchmarking and semantic verification to ensure correctness and generalizability. Experiments on real-world binaries demonstrated significant performance improvements, including a 53.5% reduction in execution time for bigtest and a 10.24% reduction for random, along with reductions in IR statements, PUT instructions, and temporary variables. These results show that LLMs can simplify IRs while maintaining functional correctness, enhancing symbolic execution in distributed AI systems.
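The two-phase structure described in the abstract can be sketched as a rewrite-then-validate loop. This is a minimal, hypothetical illustration of the idea, not the paper's actual implementation: the names (`IRBlock`, `optimize_ir`), the string-based IR stand-in, and the callback interfaces are all assumptions made for the sake of a self-contained example.

```python
# Hedged sketch of LIFT's two-phase pipeline. All names are illustrative;
# a real pipeline would operate on VEX-style IR, call an LLM to rewrite
# blocks, and formally verify equivalence rather than use callbacks.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class IRBlock:
    stmts: List[str]  # simplified stand-in for IR statements


def optimize_ir(blocks: List[IRBlock],
                rewrite: Callable[[IRBlock], IRBlock],
                equivalent: Callable[[IRBlock, IRBlock], bool],
                cost: Callable[[IRBlock], int]) -> List[IRBlock]:
    """Phase 1: propose an optimized rewrite of each time-intensive block.
    Phase 2: keep the rewrite only if it is semantically equivalent and
    measurably cheaper; otherwise fall back to the original block."""
    out = []
    for block in blocks:
        candidate = rewrite(block)  # Phase 1: IR analysis and optimization
        if equivalent(block, candidate) and cost(candidate) < cost(block):
            out.append(candidate)   # Phase 2: accepted after verification
        else:
            out.append(block)       # rejected: keep the original semantics
    return out
```

The key design point mirrored here is that the LLM's output is never trusted directly: every candidate must pass the verification gate before it replaces the original IR.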
Problem

Research questions and friction points this paper is trying to address.

Optimizing symbolic execution scalability in AI networks
Automating IR transformation using Large Language Models
Reducing execution time and IR complexity in DSE
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs automate IR optimization in symbolic execution
Scalable context-sensitive solution for IR transformation
Reduces execution time and IR statements significantly
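The equivalence check that makes these optimizations safe is, per the abstract, a formal semantic verification. A real pipeline would discharge it with an SMT solver; as a toy stand-in, the sketch below exhaustively compares two hypothetical block semantics over a small input domain. The function name and the example expressions are illustrative assumptions, not the paper's method.

```python
# Toy stand-in for semantic equivalence checking: exhaustive comparison
# over a finite domain (a real verifier would use SMT-based proof).
def equivalent_on(f, g, domain):
    """Return True iff f and g agree on every input in the domain."""
    return all(f(x) == g(x) for x in domain)


orig = lambda x: x * 2   # original block semantics
good = lambda x: x + x   # valid rewrite: same function
bad = lambda x: x << 2   # invalid rewrite: computes x * 4

assert equivalent_on(orig, good, range(-64, 64))
assert not equivalent_on(orig, bad, range(-64, 64))
```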
Ruoxi Wang
Northeastern University, Boston, MA, USA
Kun Li
Shandong University, Jinan, Shandong, China
Minghui Xu
Shandong University, Jinan, Shandong, China
Yue Zhang
Shandong University, Jinan, Shandong, China
Kaidi Xu
Associate Professor, City University of Hong Kong
AI Security · Uncertainty Quantification · Formal Verification
Chunchi Liu
Huawei Technologies Company, Ltd., Shenzhen, Guangdong, China
Yinhao Xiao
Guangdong University of Finance and Economics
System Security
Xiuzhen Cheng
School of Computer Science and Technology, Shandong University
Blockchain · IoT Security · Edge Computing · Distributed Computing