Generation of Programmatic Rules for Document Forgery Detection Using Large Language Models

📅 2025-12-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Document forgery poses an escalating threat to security-critical systems, yet manually authoring robust, generalizable validation rules remains labor-intensive and error-prone. To address this, we propose the first lightweight LLM-based framework for automated security rule generation tailored to safety-critical applications. Leveraging Llama 3.1-8B and OpenCoder-8B, our approach performs supervised fine-tuning on structured real-world document data under hardware constraints, producing executable, verifiable Python validation functions. The generated rules are both accurate and interpretable, and integrate seamlessly into existing verification pipelines. Experiments demonstrate high rule coverage and superior accuracy, alongside strong zero-shot anomaly detection capability against unseen forgery patterns. Our framework significantly enhances the scalability of detection systems and improves engineer productivity, empirically validating the practical utility of open-source small language models for automated security rule synthesis.

📝 Abstract
Document forgery poses a growing threat to legal, economic, and governmental processes, requiring increasingly sophisticated verification mechanisms. One approach involves the use of plausibility checks, rule-based procedures that assess the correctness and internal consistency of data, to detect anomalies or signs of manipulation. Although these verification procedures are essential for ensuring data integrity, existing plausibility checks are manually implemented by software engineers, which is time-consuming. Recent advances in code generation with large language models (LLMs) offer new potential for automating and scaling the generation of these checks. However, adapting LLMs to the specific requirements of an unknown domain remains a significant challenge. This work investigates the extent to which LLMs, adapted on domain-specific code and data through different fine-tuning strategies, can generate rule-based plausibility checks for forgery detection on constrained hardware resources. We fine-tune open-source LLMs, Llama 3.1 8B and OpenCoder 8B, on structured datasets derived from real-world application scenarios and evaluate the generated plausibility checks on previously unseen forgery patterns. The results demonstrate that the models are capable of generating executable and effective verification procedures. This also highlights the potential of LLMs as scalable tools to support human decision-making in security-sensitive contexts where comprehensibility is required.
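To make the notion of a plausibility check concrete, the following is a minimal sketch of the kind of executable, rule-based validation function the paper's fine-tuned models are described as producing. The field names (`issue_date`, `expiry_date`, `document_id`) and the checksum scheme are hypothetical illustrations, not rules from the paper.

```python
from datetime import date

def check_issue_before_expiry(doc: dict) -> bool:
    """Plausibility check: a document must be issued before it expires."""
    issued = date.fromisoformat(doc["issue_date"])
    expires = date.fromisoformat(doc["expiry_date"])
    return issued < expires

def check_id_checksum(doc: dict) -> bool:
    """Plausibility check: the last digit of the document ID encodes a
    simple mod-10 checksum over the preceding digits (illustrative scheme)."""
    digits = [int(c) for c in doc["document_id"] if c.isdigit()]
    return sum(digits[:-1]) % 10 == digits[-1]
```

Checks of this form are interpretable by engineers and return a simple pass/fail signal, which is what makes them suitable for supporting human decision-making in security-sensitive contexts.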
Problem

Research questions and friction points this paper is trying to address.

Automates generation of rule-based checks for document forgery detection
Adapts large language models to domain-specific forgery detection tasks
Evaluates fine-tuned models' ability to create executable verification procedures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned LLMs generate rule-based plausibility checks
Adapted models create executable forgery detection procedures
Domain-specific fine-tuning enables scalable verification automation
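Since the generated checks are plain Python functions, they can slot into an existing verification pipeline as a list of callables applied to each document. The harness below is a hypothetical sketch of such an integration (the paper does not specify its pipeline interface); a check that fails, or that crashes on missing or malformed fields, flags the document as potentially forged.

```python
from datetime import date

def check_issue_before_expiry(doc: dict) -> bool:
    """Example generated check: issue date must precede expiry date."""
    return date.fromisoformat(doc["issue_date"]) < date.fromisoformat(doc["expiry_date"])

def run_plausibility_checks(doc: dict, checks) -> list[str]:
    """Apply each generated check to a document record and collect the
    names of the checks it fails. An empty list means all checks passed."""
    flags = []
    for check in checks:
        try:
            if not check(doc):
                flags.append(check.__name__)
        except (KeyError, ValueError):
            # A missing or unparsable field is itself an anomaly signal.
            flags.append(check.__name__)
    return flags
```

Aggregating failures by check name keeps the output interpretable: an engineer can see exactly which rule a suspicious document violated.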