AI Policy Projector: Grounding LLM Policy Design in Iterative Mapmaking

📅 2024-09-26
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit open-ended, ill-defined behavioral spaces, posing significant challenges for safety governance. Method: This paper introduces *Policy Projector*, an interactive system that treats the LLM input–output space like a territory to be mapped, enabling interpretable 2D visualization and exploration. It supports safety practitioners in identifying high-risk regions (e.g., violent content), defining human-readable policies, and rewriting or steering model outputs—without relying on fixed harm taxonomies. Contribution/Results: Unlike conventional approaches constrained by predefined harm categories, this mapmaking-inspired design process enables progressive, incremental coverage of open-ended real-world scenarios. In an evaluation with 12 AI safety experts, the system helped policy designers detect and mitigate problematic model behaviors extending beyond an existing, comprehensive harm taxonomy, demonstrating practical utility for adaptive LLM safety governance.
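
As a rough, hypothetical illustration of the map metaphor (not the paper's implementation), the sketch below embeds a few input–output pairs, projects them to 2D with t-SNE, and flags a hand-defined "violence" region. The TF-IDF embedding and keyword predicate are cheap stand-ins for Policy Projector's LLM-based components.

```python
# Minimal sketch of a "policy map": embed model input-output pairs,
# project them to 2D, and mark a custom region.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import TSNE

pairs = [
    "User: how do swords work? Model: a sword is a bladed weapon...",
    "User: write a fight scene. Model: the duel ended with graphic injuries...",
    "User: what is photosynthesis? Model: plants convert light into energy...",
]

# Embed each pair (TF-IDF as a stand-in for an LLM embedding).
vectors = TfidfVectorizer().fit_transform(pairs).toarray()

# Project to 2D for the map view; perplexity must be < number of samples.
coords = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(vectors)

# Define a custom region with a simple keyword predicate (the paper uses
# interactive, LLM-assisted region definitions instead).
in_violence_region = ["fight" in p or "duel" in p for p in pairs]

for (x, y), flagged in zip(coords, in_violence_region):
    print(f"({x:.1f}, {y:.1f}) violence_region={flagged}")
```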

📝 Abstract
Whether a large language model policy is an explicit constitution or an implicit reward model, it is challenging to assess coverage over the unbounded set of real-world situations that a policy must contend with. We introduce an AI policy design process inspired by mapmaking, which has developed tactics for visualizing and iterating on maps even when full coverage is not possible. With Policy Projector, policy designers can survey the landscape of model input-output pairs, define custom regions (e.g., "violence"), and navigate these regions with rules that can be applied to LLM outputs (e.g., if output contains "violence" and "graphic details," then rewrite without "graphic details"). Policy Projector supports interactive policy authoring using LLM classification and steering and a map visualization reflecting the policy designer's work. In an evaluation with 12 AI safety experts, our system helps policy designers to address problematic model behaviors extending beyond an existing, comprehensive harm taxonomy.
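
The abstract's example rule (if an output contains "violence" and "graphic details," rewrite without "graphic details") can be read as a classify-then-rewrite pattern. Below is a minimal sketch of that pattern; the `llm` helper and the prompts are assumptions for illustration, not the paper's API.

```python
# Hedged sketch of the if-then policy rule described in the abstract.
# `llm` is a hypothetical completion function: plug in a real model client.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")

def matches(output: str, concept: str) -> bool:
    # LLM-as-classifier: ask for a yes/no judgment about one concept.
    answer = llm(f"Does this text contain {concept}? Answer yes or no.\n\n{output}")
    return answer.strip().lower().startswith("yes")

def apply_rule(output: str) -> str:
    # Rule: IF output contains "violence" AND "graphic details"
    #       THEN rewrite it without the graphic details.
    if matches(output, "violence") and matches(output, "graphic details"):
        return llm(
            "Rewrite this text to remove graphic details, keeping "
            f"everything else intact:\n\n{output}"
        )
    return output
```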
Problem

Research questions and friction points this paper is trying to address.

Ensuring coverage over vast LLM behavior space
Designing effective navigation for AI policy boundaries
Addressing problematic LLM behaviors such as safety threats
Innovation

Methods, ideas, or system contributions that make the work stand out.

Policy maps that abstract the LLM behavior space to guide governance
Interactive tool for defining custom policy regions and rules
LLM classification and steering for policy authoring (see the sketch below)
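
For concreteness, one way to represent such authored policies is as rules pairing trigger concepts with an action, as in the hypothetical data model below. The class and field names are assumptions for illustration, not the Policy Projector schema (the `str | None` annotation requires Python 3.10+).

```python
# Illustrative data model for authored policies: each rule pairs trigger
# concepts with an action and an optional concept to strip when rewriting.
from dataclasses import dataclass, field

@dataclass
class PolicyRule:
    if_concepts: list[str]             # concepts the output must match
    then_action: str                   # e.g. "rewrite" or "block"
    remove_concept: str | None = None  # concept to strip when rewriting

@dataclass
class PolicyMap:
    rules: list[PolicyRule] = field(default_factory=list)

    def author_rule(self, if_concepts, then_action, remove_concept=None):
        # An interactive tool would also attach region definitions here.
        self.rules.append(PolicyRule(if_concepts, then_action, remove_concept))

policy = PolicyMap()
policy.author_rule(["violence", "graphic details"], "rewrite",
                   remove_concept="graphic details")
```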