🤖 AI Summary
To address the challenges of policy memorization, poor generalization, and high deployment costs in content safety annotation tasks, this paper proposes a policy-driven small language model (SLM) framework. Methodologically, the authors introduce *Contradictory Example Training*, a novel training curriculum that teaches the model to interpret nuanced safety policies rather than merely memorize them; propose *Binocular Labeling*, a dual-perspective annotation method that automatically generates unambiguous, policy-aligned training data; and combine instruction tuning with policy-conditioned inference. In terms of contributions and results: the resulting model, CoPE, has only 9 billion parameters, roughly 1% the size of state-of-the-art large language models, yet achieves comparable or superior accuracy across seven major harm categories. It is fully deployable on a single consumer-grade GPU, significantly reducing inference latency and hardware requirements. The model and code are publicly released to foster reproducibility and community advancement.
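Policy-conditioned inference, as summarized above, can be sketched as follows. This is a hypothetical illustration, not the paper's actual implementation: the prompt template, label vocabulary, and helper names are assumptions introduced here.

```python
# Hypothetical sketch of policy-conditioned inference: the safety policy is
# passed to the model at classification time, so operators can change
# moderation behavior by editing the policy text instead of retraining.
# The prompt template and label set below are illustrative assumptions.

# Longer label listed first so substring matching cannot collide
# ("VIOLATING" is a substring of "NON-VIOLATING").
LABELS = ("NON-VIOLATING", "VIOLATING")

def build_prompt(policy: str, content: str) -> str:
    """Embed the full policy text and the item to label in one prompt."""
    return (
        "You are a content safety classifier.\n"
        f"Policy:\n{policy}\n\n"
        f"Content:\n{content}\n\n"
        f"Answer with exactly one label: {' or '.join(LABELS)}."
    )

def parse_label(model_output: str) -> str:
    """Map the model's free-text answer back to a canonical label."""
    text = model_output.strip().upper()
    for label in LABELS:  # check the longer label first
        if label in text:
            return label
    return "NON-VIOLATING"  # conservative default for unparseable output

# Usage with a stand-in for the real model call:
prompt = build_prompt(
    policy="Harassment: content that targets an individual with insults.",
    content="Have a nice day.",
)
label = parse_label("non-violating")
```

Because the policy travels with each request, swapping in a revised policy changes the classifier's behavior without any weight updates.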
📝 Abstract
This paper details the methodology behind CoPE, a policy-steerable small language model capable of fast and accurate content labeling. We present a novel training curriculum called Contradictory Example Training that enables the model to learn policy interpretation rather than mere policy memorization. We also present a novel method for generating content policies, called Binocular Labeling, which enables rapid construction of unambiguous training datasets. When evaluated across seven different harm areas, CoPE matches or exceeds the accuracy of frontier models at only 1% of their size. We openly release a 9 billion parameter version of the model that can be run on a single consumer-grade GPU. Models like CoPE represent a paradigm shift for classifier systems: by turning an ML task into a policy writing task, CoPE opens up new design possibilities for the governance of online platforms.
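One plausible reading of the dual-perspective idea behind Binocular Labeling, labeling each candidate example from two independent perspectives and keeping only the cases where both agree, can be sketched as below. The agreement filter, the toy annotators, and all names here are assumptions for illustration, not the paper's exact procedure.

```python
# Illustrative sketch of a dual-perspective agreement filter: each candidate
# example is labeled from two independent perspectives, and only examples on
# which both perspectives agree are kept as unambiguous training data.
# The annotator functions are hypothetical stand-ins for two policy readings.

def binocular_filter(examples, annotate_a, annotate_b):
    """Keep (example, label) pairs only when both annotators agree."""
    kept, ambiguous = [], []
    for ex in examples:
        a, b = annotate_a(ex), annotate_b(ex)
        (kept if a == b else ambiguous).append((ex, a, b))
    dataset = [(ex, a) for ex, a, _ in kept]  # unambiguous training set
    return dataset, ambiguous

# Toy annotators standing in for a strict and a lenient policy reading:
strict = lambda text: "violating" if "idiot" in text else "safe"
lenient = lambda text: "violating" if "idiot" in text and "you" in text else "safe"

data, unclear = binocular_filter(
    ["you idiot", "what an idiot move", "have a nice day"], strict, lenient
)
```

Examples on which the two perspectives disagree are set aside rather than labeled, which is one way a dataset could be kept free of ambiguous, policy-boundary cases.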