🤖 AI Summary
JavaScript obfuscators are widely used for code protection, yet their semantic correctness has long been neglected: erroneous transformations may compromise functionality, security, and reliability. This paper is the first to apply large language models (LLMs) to evaluating the semantic consistency of obfuscators. We propose a dual-path program-sketch generation framework: (1) LLM-driven generation of diverse, semantically rich templates; and (2) automated extraction of executable sketches from real-world code. Integrated with mutation testing and formal equivalence verification, our approach systematically detects behavioral discrepancies between original and obfuscated programs. Under identical testing budgets, it uncovers previously unreported semantic errors in 11 mainstream obfuscators, errors missed by all existing fuzzing-based tools. Experimental results demonstrate that LLM-augmented sketch generation substantially improves the depth and effectiveness of obfuscator correctness assessment.
📝 Abstract
JavaScript obfuscators are widely deployed to protect intellectual property and resist reverse engineering, yet their correctness has been largely overlooked compared to performance and resilience. Existing evaluations typically measure resistance to deobfuscation, leaving unanswered the critical question of whether obfuscators preserve program semantics. Incorrect transformations can silently alter functionality, compromise reliability, and erode security, undermining the very purpose of obfuscation. To address this gap, we present OBsmith, a novel framework that systematically tests JavaScript obfuscators using large language models (LLMs). OBsmith leverages LLMs to generate program sketches, abstract templates capturing diverse language constructs, idioms, and corner cases, which are instantiated into executable programs and subjected to obfuscation under different configurations. Besides LLM-powered sketching, OBsmith also draws on a second source: automatic extraction of sketches from real programs. This extraction path enables more focused testing of project-specific features and lets developers inject domain knowledge into the resulting test cases. OBsmith uncovers 11 previously unknown correctness bugs. Under an equal program budget, five general-purpose state-of-the-art JavaScript fuzzers (FuzzJIT, Jsfunfuzz, Superion, DIE, Fuzzilli) fail to detect these issues, highlighting OBsmith's complementary focus on obfuscation-induced misbehavior. An ablation study shows that every component except our generic metamorphic relations (MRs) contributes to at least one bug class; the negative MR result suggests the need for obfuscator-specific metamorphic relations. Our results also seed discussion on how to balance obfuscation presets against performance cost. We envision OBsmith as an important step toward automated testing and quality assurance of obfuscators and other semantics-preserving toolchains.
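To make the sketch-and-differential-test idea concrete, here is a minimal, hypothetical illustration in JavaScript. The sketch template, the hole name, and the `obfuscate` pass are all placeholders of our own (OBsmith's actual sketch format and the real obfuscator under test are not shown); in practice the placeholder transform would be replaced by a tool such as javascript-obfuscator, and the oracle would compare original and obfuscated behavior across many instantiations and inputs.

```javascript
// A "sketch" is a template with a hole that is later filled with a
// concrete expression to produce an executable program. (Illustrative
// format, not OBsmith's actual sketch representation.)
const sketch = (hole) => `
  function f(x) {
    let acc = 0;
    for (let i = 0; i < x; i++) { acc += ${hole}; }
    return acc;
  }
  return f(INPUT);
`;

// Instantiate the sketch with a concrete expression for the hole.
const program = sketch("i * 2");

// Stand-in for a real obfuscation pass; a trivial whitespace rewrite
// keeps this example self-contained and dependency-free.
const obfuscate = (src) => src.replace(/\s+/g, " ");

// Differential oracle: run both versions on the same input and check
// that the observable behavior (here, the return value) agrees.
function behaviorsAgree(src, input) {
  const run = (code) => Function(code.replace("INPUT", String(input)))();
  return run(src) === run(obfuscate(src));
}

for (const input of [0, 1, 5, 100]) {
  if (!behaviorsAgree(program, input)) {
    console.log(`semantic divergence at input ${input}`);
  }
}
```

A real harness would additionally vary obfuscator configurations, capture side effects and exceptions rather than only return values, and minimize any diverging instance before reporting it as a bug.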