Zero-Trust Artificial Intelligence Model Security Based on Moving Target Defense and Content Disarm and Reconstruction

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses critical security risks, specifically deserialization-based code execution and model architecture tampering, posed by Pickle/PyTorch serialized files in AI model distribution. To mitigate these threats, the authors propose a zero-trust protection framework featuring a novel two-layer defense: (1) Content Disarm and Reconstruction (CDR), which automatically strips malicious payloads during model loading; and (2) Moving Target Defense (MTD), which dynamically perturbs the model structure while jointly verifying the integrity of both weights and architecture. Technical contributions include serialized-format parsing and reconstruction, multi-granularity integrity verification, and lightweight runtime protection. Evaluated on real-world model repositories, including Hugging Face, the framework achieves 100% removal of malicious serialized payloads, effectively thwarting known model-poisoning and deserialization attacks, and significantly enhances the trustworthiness of model-file loading in physical distribution scenarios where secure, offline deployment is essential.
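The paper's CDR layer works by parsing the serialized format itself and stripping payloads rather than ever executing them. The authors' implementation is not reproduced here; the following is a minimal illustrative sketch of the underlying idea, scanning a Pickle opcode stream for dangerous imports without unpickling (the `scan_pickle` name and the small denylist are assumptions of this sketch, not the paper's API):

```python
import pickle
import pickletools

# Illustrative denylist of globals whose import during unpickling signals
# a code-execution payload; a real CDR scanner would use a far larger list.
DANGEROUS_GLOBALS = {
    ("builtins", "eval"),
    ("builtins", "exec"),
    ("os", "system"),
    ("subprocess", "Popen"),
}

def scan_pickle(data: bytes):
    """Walk the opcode stream without ever unpickling; collect dangerous imports."""
    findings = []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            # Protocol <= 3: module and name arrive space-joined in one arg.
            module, _, name = arg.partition(" ")
            if (module, name) in DANGEROUS_GLOBALS:
                findings.append((module, name))
        elif opcode.name == "STACK_GLOBAL":
            # Protocol 4+: module/name are pushed by earlier string opcodes;
            # a full scanner tracks the stack. Flag conservatively here.
            findings.append(("<stack_global>", "<needs stack tracking>"))
    return findings

class Payload:
    # A classic deserialization attack: __reduce__ makes the *loader* call eval.
    def __reduce__(self):
        return (eval, ("1 + 1",))

benign = pickle.dumps({"weights": [0.1, 0.2]}, protocol=3)
malicious = pickle.dumps(Payload(), protocol=3)
print(scan_pickle(benign))     # → []
print(scan_pickle(malicious))  # → [('builtins', 'eval')]
```

Note that calling `pickle.load` on the malicious file would execute the `eval` in `Payload.__reduce__`; iterating opcodes with `pickletools.genops` never runs that code, which is what makes disarm-before-load possible.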

📝 Abstract
This paper examines the challenges in distributing AI models through model zoos and file transfer mechanisms. Despite advancements in security measures, vulnerabilities persist, necessitating a multi-layered approach to mitigate risks effectively. The physical security of model files is critical, requiring stringent access controls and attack prevention solutions. This paper proposes a novel solution architecture composed of two prevention approaches. The first is Content Disarm and Reconstruction (CDR), which focuses on disarming serialization attacks that enable attackers to run malicious code as soon as the model is loaded. The second is protecting the model architecture and weights from attacks by using Moving Target Defense (MTD), altering the model structure, and providing verification steps to detect such attacks. The paper focuses on the highly exploitable Pickle and PyTorch file formats. It demonstrates a 100% disarm rate when validated against known AI model repositories and actual malware attacks from the HuggingFace model zoo.
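The MTD component described above alters the model structure and pairs that alteration with verification, so attacks that assume a known layout can be detected. The abstract does not specify how the structure is perturbed, so the sketch below is only one plausible reading: a keyed, reversible permutation of parameter slots. All names here (`disguise`, `restore`) are hypothetical, not the paper's:

```python
import hashlib
import hmac
import random

def _perm(key: bytes, names):
    """Deterministic keyed permutation of parameter names (illustrative MTD)."""
    rnd = random.Random(hmac.new(key, b"layout", hashlib.sha256).digest())
    order = sorted(names)
    rnd.shuffle(order)
    return order

def disguise(state, key: bytes):
    """Store parameters under opaque positional slots in a key-dependent order."""
    order = _perm(key, state.keys())
    return {f"slot_{i}": state[name] for i, name in enumerate(order)}

def restore(disguised, key: bytes, names):
    """Invert the disguise given the key and the original parameter names."""
    order = _perm(key, names)
    return {name: disguised[f"slot_{i}"] for i, name in enumerate(order)}

# Plain Python lists stand in for weight tensors in this sketch.
state = {"conv1.weight": [0.1, -0.2], "fc.bias": [0.5]}
key = b"deployment-secret"
disguised = disguise(state, key)
assert restore(disguised, key, list(state)) == state  # round-trips exactly
```

Because the slot ordering depends on a secret key, an attacker who modifies the file without the key cannot target a specific layer by name, and any blind edit is caught by the verification steps the paper pairs with MTD.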
Problem

Research questions and friction points this paper is trying to address.

Serialized Pickle/PyTorch model files can execute attacker-supplied code the moment a model is loaded.
Model zoos and file-transfer distribution leave model architecture and weights exposed to tampering.
Existing security measures are insufficient on their own, so physical model-file distribution needs layered access controls and attack prevention.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Content Disarm and Reconstruction for serialization attacks
Moving Target Defense for model architecture protection
Verification steps that detect attacks and raise alerts
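The verification steps listed above can be sketched as multi-granularity integrity checking: per-layer digests plus a model-level digest, so tampering is both detected and localized. This is an assumption-laden illustration (plain Python lists stand in for tensors; `model_manifest` and `verify` are invented names, not the paper's interface):

```python
import hashlib
import json

def layer_digest(name: str, weights) -> str:
    """Hash one layer's name and serialized weights (fine granularity)."""
    h = hashlib.sha256()
    h.update(name.encode())
    h.update(json.dumps(weights).encode())
    return h.hexdigest()

def model_manifest(state) -> dict:
    """Per-layer digests plus one global digest over the whole model."""
    layers = {name: layer_digest(name, w) for name, w in sorted(state.items())}
    model = hashlib.sha256(json.dumps(layers, sort_keys=True).encode()).hexdigest()
    return {"layers": layers, "model": model}

def verify(state, manifest) -> list:
    """Return the names of layers whose digests no longer match the manifest."""
    current = model_manifest(state)
    return [n for n, d in current["layers"].items()
            if manifest["layers"].get(n) != d]

state = {"conv1.weight": [0.1, -0.2], "fc.bias": [0.0, 1.0]}
manifest = model_manifest(state)
assert verify(state, manifest) == []           # untampered: no mismatches
state["fc.bias"][0] = 9.9                      # simulate weight tampering
assert verify(state, manifest) == ["fc.bias"]  # tampered layer is localized
```

The two granularities serve different purposes: the global digest gives a fast accept/reject decision at load time, while the per-layer digests pinpoint which parameters were altered, which supports the alerting the paper describes.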