CoPE: A Small Language Model for Steerable and Scalable Content Labeling

📅 2025-12-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of policy memorization, poor generalization, and high deployment costs in content safety labeling, this paper proposes a policy-steerable small language model (SLM) framework. Methodologically, it introduces *Contradictory Example Training*, a training curriculum that teaches the model to interpret nuanced safety policies rather than merely memorize them; proposes *Binocular Labeling*, a dual-perspective annotation method for rapidly constructing unambiguous, policy-aligned training datasets; and integrates instruction tuning with policy-conditioned inference. In terms of contributions and results: the resulting model has only 9 billion parameters, roughly 1% the size of state-of-the-art large language models, yet achieves comparable or superior accuracy across seven major harm categories. It is fully deployable on a single consumer-grade GPU, significantly reducing inference latency and hardware requirements. The model is publicly released to foster reproducibility and community advancement.

📝 Abstract
This paper details the methodology behind CoPE, a policy-steerable small language model capable of fast and accurate content labeling. We present a novel training curriculum called Contradictory Example Training that enables the model to learn policy interpretation rather than mere policy memorization. We also present a novel method for generating content policies, called Binocular Labeling, which enables rapid construction of unambiguous training datasets. When evaluated across seven different harm areas, CoPE exhibits equal or superior accuracy to frontier models at only 1% of their size. We openly release a 9 billion parameter version of the model that can be run on a single consumer-grade GPU. Models like CoPE represent a paradigm shift for classifier systems. By turning an ML task into a policy writing task, CoPE opens up new design possibilities for the governance of online platforms.
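The "policy writing task" framing above can be illustrated with a minimal sketch: the classifier is steered by a policy document supplied at inference time, so changing labeling behavior means editing policy text rather than retraining. The prompt format, policy wording, and label set below are illustrative assumptions, not CoPE's actual interface.

```python
# Illustrative policy text; a real deployment would supply the platform's
# own policy document. This is NOT CoPE's policy format.
POLICY = """Hate speech policy (illustrative):
- VIOLATING: content that attacks a person or group on the basis of a
  protected characteristic.
- NON-VIOLATING: criticism of ideas, institutions, or behavior."""


def build_prompt(policy: str, content: str) -> str:
    """Assemble a policy-conditioned classification prompt for an SLM."""
    return (
        f"{policy}\n\n"
        f"Content to label:\n{content}\n\n"
        "Answer with exactly one label: VIOLATING or NON-VIOLATING."
    )


def parse_label(model_output: str) -> str:
    """Extract a binary label from the model's raw text output."""
    text = model_output.strip().upper()
    # str.startswith avoids matching the VIOLATING substring inside
    # NON-VIOLATING.
    return "VIOLATING" if text.startswith("VIOLATING") else "NON-VIOLATING"


# In practice a 9B-parameter model would generate the output from the
# prompt; here the model call is stubbed out.
prompt = build_prompt(POLICY, "I disagree with this law.")
print(parse_label("NON-VIOLATING"))  # → NON-VIOLATING
```

The design point this sketches is the one the abstract names: governance changes become edits to `POLICY`, which platform staff can write and review, rather than a new training run.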
Problem

Research questions and friction points this paper is trying to address.

Develops a small language model for efficient content labeling
Introduces training to interpret policies, not just memorize them
Enables rapid creation of clear datasets for policy training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contradictory Example Training for policy interpretation
Binocular Labeling for unambiguous dataset generation
Small 9B parameter model for efficient GPU deployment
Samidh Chakrabarti
Zentropi, USA
David Willner
Zentropi, USA
Kevin Klyman
Stanford, Harvard
Tiffany Saade
Stanford University, USA
Emily Capstick
Stanford University, USA
Sabina Nong
Stanford University, USA