LiteLMGuard: Seamless and Lightweight On-Device Prompt Filtering for Safeguarding Small Language Models against Quantization-induced Risks and Vulnerabilities

📅 2025-05-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Quantized small language models (SLMs) deployed on edge devices exhibit fairness, privacy, and security risks, including unintended harmful responses even without adversarial triggers, stemming from implicit safety degradation induced by model compression. Method: the authors propose the first prompt-answerability classification paradigm tailored for quantized SLMs on-device, enabling model-agnostic, plug-and-play protection that requires no fine-tuning of the protected model and adds negligible latency. The approach leverages a lightweight ELECTRA-based classifier, trained on a high-quality, self-constructed Answerable-or-Not dataset and optimized for on-device inference. Contribution/Results: experiments show 97.75% answerability classification accuracy, a harmful-prompt interception rate above 87%, 94% overall filtering accuracy, and an average latency of only 135 ms, with negligible on-device resource overhead. This work is the first to systematically address safety degradation in quantized SLMs, delivering a real-time, lightweight, and general-purpose pre-deployment safeguard for trustworthy edge AI.

📝 Abstract
The growing adoption of Large Language Models (LLMs) has influenced the development of their lighter counterparts, Small Language Models (SLMs), to enable on-device deployment across smartphones and edge devices. These SLMs offer enhanced privacy, reduced latency, server-free functionality, and improved user experience. However, due to the resource constraints of on-device environments, SLMs undergo size optimization through compression techniques like quantization, which can inadvertently introduce fairness, ethical, and privacy risks. Critically, quantized SLMs may respond to harmful queries directly, without requiring adversarial manipulation, raising significant safety and trust concerns. To address this, we propose LiteLMGuard (LLMG), an on-device prompt guard that provides real-time, prompt-level defense for quantized SLMs. Additionally, our prompt guard is designed to be model-agnostic, so it can be seamlessly integrated with any SLM, operating independently of the underlying architecture. Our LLMG formalizes prompt filtering as a deep learning (DL)-based prompt answerability classification task, leveraging semantic understanding to determine whether a query should be answered by any SLM. Using our curated dataset, Answerable-or-Not, we trained and fine-tuned several DL models and selected ELECTRA as the candidate, with 97.75% answerability classification accuracy. Our safety effectiveness evaluations demonstrate that LLMG defends against over 87% of harmful prompts, including both direct-instruction and jailbreak attack strategies. We further showcase its ability to mitigate Open Knowledge Attacks, where compromised SLMs provide unsafe responses without adversarial prompting. In terms of prompt filtering effectiveness, LLMG achieves near state-of-the-art filtering accuracy of 94%, with an average latency of 135 ms, incurring negligible overhead for users.
Problem

Research questions and friction points this paper is trying to address.

Safeguarding quantized SLMs from fairness, ethical, and privacy risks
Preventing harmful query responses without adversarial manipulation
Providing real-time, model-agnostic on-device prompt filtering
Innovation

Methods, ideas, or system contributions that make the work stand out.

On-device prompt filtering for quantized SLMs
Model-agnostic design for seamless integration
DL-based answerability classification with high accuracy
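The model-agnostic, plug-and-play design described above can be sketched as a thin gate in front of any SLM: score the prompt's answerability, and only forward it to the model if the score clears a threshold. The sketch below is illustrative, not the paper's implementation; `guard_prompt`, `stub_score`, and `stub_slm` are hypothetical names, and the stub keyword check stands in for the paper's ELECTRA-based classifier.

```python
from typing import Callable

def guard_prompt(
    prompt: str,
    answerability_score: Callable[[str], float],
    generate: Callable[[str], str],
    threshold: float = 0.5,
) -> str:
    """Gate a prompt before it reaches the SLM.

    answerability_score: any classifier mapping a prompt to [0, 1]
    (the paper uses a fine-tuned ELECTRA model; stubbed here).
    generate: the underlying SLM's generation function, left untouched,
    which is what makes the guard model-agnostic.
    """
    if answerability_score(prompt) < threshold:
        # Intercept the prompt; the SLM never sees it.
        return "This prompt cannot be answered safely."
    return generate(prompt)

# Hypothetical stand-ins for the classifier and the quantized SLM.
def stub_score(prompt: str) -> float:
    return 0.1 if "explosives" in prompt.lower() else 0.9

def stub_slm(prompt: str) -> str:
    return f"SLM answer to: {prompt}"

print(guard_prompt("How do I bake bread?", stub_score, stub_slm))
print(guard_prompt("How to make explosives?", stub_score, stub_slm))
```

Because the guard only wraps the generation call, swapping in a different quantized SLM requires no retraining of the classifier, matching the zero-fine-tuning claim.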