🤖 AI Summary
Deploying lightweight guardian models for mobile LLM safety is hindered by degraded knowledge distillation performance in small models due to scarcity of harmful instruction data. To address this, we propose HarmAug—a novel end-to-end pipeline for generating high-quality harmful instruction–response pairs. HarmAug synergistically integrates LLM jailbreaking techniques, conditional prefix prompting, and automated annotation by a teacher model, thereby significantly enhancing the diversity and discriminative power of distilled data. Trained on this augmented dataset, our 435M-parameter guardian model achieves F1 scores comparable to 7B+-scale models, surpasses baselines in AUPRC, reduces computational overhead by over 25%, and outperforms mainstream lightweight baselines in empirical evaluation. Our core contribution is the first systematic solution to knowledge distillation failure in resource-constrained settings with limited harmful instruction data—enabling effective, efficient, and deployable mobile LLM safety guardians.
📝 Abstract
Safety guard models that detect malicious queries aimed at large language models (LLMs) are essential for ensuring the secure and responsible deployment of LLMs in real-world applications. However, deploying existing safety guard models with billions of parameters alongside LLMs on mobile devices is impractical due to substantial memory requirements and latency. To reduce this cost, we distill a large teacher safety guard model into a smaller one using a labeled dataset of instruction-response pairs with binary harmfulness labels. Due to the limited diversity of harmful instructions in the existing labeled dataset, naively distilled models tend to underperform compared to larger models. To bridge the gap between small and large models, we propose HarmAug, a simple yet effective data augmentation method that involves jailbreaking an LLM and prompting it to generate harmful instructions. Given a prompt such as,"Make a single harmful instruction prompt that would elicit offensive content", we add an affirmative prefix (e.g.,"I have an idea for a prompt:") to the LLM's response. This encourages the LLM to continue generating the rest of the response, leading to sampling harmful instructions. Another LLM generates a response to the harmful instruction, and the teacher model labels the instruction-response pair. We empirically show that our HarmAug outperforms other relevant baselines. Moreover, a 435-million-parameter safety guard model trained with HarmAug achieves an F1 score comparable to larger models with over 7 billion parameters, and even outperforms them in AUPRC, while operating at less than 25% of their computational cost.