SECNEURON: Reliable and Flexible Abuse Control in Local LLMs via Hybrid Neuron Encryption

📅 2025-06-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Locally deployed large language models (LLMs) operate outside developer control, increasing misuse risks, while cloud-based security mechanisms are not directly transferable to on-device settings. Method: We propose the first model-intrinsic access control framework, integrating task-semantic-driven neuron disentanglement, hierarchical policy-tree modeling, and million-scale hybrid homomorphic/symmetric encryption over neurons. A novel ciphertext-domain distribution detection mechanism ensures partial decryption validity, with formal proofs of IND-CPA security and collusion-resistance. Results: Experiments show unauthorized task accuracy drops below 25%, while authorized task accuracy degrades by only 2%; malicious code generation falls from 59% to 15%, PII extraction drops below 5%, and membership inference degrades to random guessing—demonstrating fine-grained capability governance without compromising utility.

📝 Abstract
Large language models (LLMs) with diverse capabilities are increasingly being deployed in local environments, presenting significant security and controllability challenges. These locally deployed LLMs operate outside the direct control of developers, rendering them more susceptible to abuse. Existing mitigation techniques, mainly designed for cloud-based LLM services, are frequently circumvented or ineffective in deployer-controlled environments. We propose SECNEURON, the first framework that seamlessly embeds classic access control within the intrinsic capabilities of LLMs, achieving reliable, cost-effective, flexible, and certified abuse control for locally deployed LLMs. SECNEURON employs neuron-level encryption and selective decryption to dynamically control the task-specific capabilities of LLMs, limiting unauthorized task abuse without compromising others. We first design a task-specific neuron extraction mechanism to decouple logically related neurons and construct a layered policy tree for handling coupled neurons. We then introduce a flexible and efficient hybrid encryption framework for millions of neurons in LLMs. Finally, we develop a distribution-based decrypted neuron detection mechanism on ciphertext to ensure the effectiveness of partially decrypted LLMs. We prove that SECNEURON satisfies IND-CPA Security and Collusion Resistance Security under the Task Controllability Principle. Experiments on various task settings show that SECNEURON limits unauthorized task accuracy to below 25% while keeping authorized accuracy loss within 2%. In one example with an unauthorized Code task, the accuracy of abuse-related malicious code generation was reduced from 59% to 15%. SECNEURON also mitigates unauthorized data leakage, reducing PII extraction rates to below 5% and membership inference to random guessing.
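The core idea of neuron-level encryption with selective decryption can be illustrated with a minimal sketch. This is not SECNEURON's actual scheme (the paper uses a hybrid homomorphic/symmetric framework over millions of neurons with a policy tree); the toy keystream cipher, the task names, and the weight groups below are all hypothetical, and only show how per-task keys leave unauthorized neuron groups as unusable ciphertext at load time.

```python
import hashlib
import os
import struct

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Derive a pseudo-random keystream via SHA-256 (toy stand-in for a real cipher)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + struct.pack(">Q", counter)).digest()
        counter += 1
    return out[:n]

def xor_bytes(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

# Hypothetical mapping: each task owns a disjoint group of neuron weights.
task_neurons = {
    "code":    b"weights-for-code-capability-neurons",
    "general": b"weights-for-general-capability-neurons",
}

# Neuron-level encryption: every group is encrypted under its own task key.
task_keys = {task: os.urandom(32) for task in task_neurons}
nonce = os.urandom(16)
ciphertexts = {
    task: xor_bytes(w, keystream(task_keys[task], nonce, len(w)))
    for task, w in task_neurons.items()
}

# Selective decryption: a deployer authorized only for "general" recovers those
# neurons; "code" neurons stay ciphertext, disabling that capability.
authorized = {"general"}
loaded = {
    task: xor_bytes(ct, keystream(task_keys[task], nonce, len(ct)))
    if task in authorized else ct
    for task, ct in ciphertexts.items()
}
assert loaded["general"] == task_neurons["general"]
assert loaded["code"] != task_neurons["code"]
```

In the real framework the key material a deployer receives is governed by the layered policy tree, so coupled neurons shared across tasks are handled consistently rather than per-group as in this sketch.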
Problem

Research questions and friction points this paper is trying to address.

Control abuse in locally deployed LLMs
Prevent unauthorized task execution
Mitigate data leakage risks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid neuron encryption for dynamic control
Task-specific neuron extraction mechanism
Distribution-based decrypted neuron detection
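The distribution-based detection idea can also be sketched: plaintext LLM weights typically concentrate near zero with small variance, while still-encrypted values interpreted as numbers look close to uniform, so simple distribution statistics can tell whether a neuron group was validly decrypted. The thresholds and the check below are illustrative assumptions, not the paper's actual ciphertext-domain test.

```python
import random
import statistics

def looks_decrypted(weights, mean_tol=0.1, stdev_tol=0.35):
    """Heuristic: decrypted LLM weights cluster tightly around zero;
    ciphertext reinterpreted as numbers has a much wider, flat spread."""
    mean = statistics.fmean(weights)
    stdev = statistics.pstdev(weights)
    return abs(mean) < mean_tol and stdev < stdev_tol

rng = random.Random(0)

# Plaintext-like weights: small Gaussian values around zero.
plain = [rng.gauss(0.0, 0.02) for _ in range(4096)]

# Ciphertext-like values: roughly uniform over [-1, 1), stdev ~0.58.
cipher = [rng.uniform(-1.0, 1.0) for _ in range(4096)]

assert looks_decrypted(plain)
assert not looks_decrypted(cipher)
```

Such a check matters because a deployer holding only partial keys could otherwise load a model with silently garbled neuron groups; flagging undecrypted groups up front keeps the partially decrypted model's authorized capabilities effective.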
Zhiqiang Wang
University of Science and Technology of China
Haohua Du
Beihang University
Junyang Wang
University of Science and Technology of China
Haifeng Sun
Associate Professor of Computer Science, Beijing University of Posts and Telecommunications
Natural language processing, intent-based networking, NetAI
Kaiwen Guo
Synthesia
Computer vision, machine learning, computer graphics
Haikuo Yu
University of Science and Technology of China
Chao Liu
Ocean University of China
Xiang-Yang Li
University of Science and Technology of China