Pluralistic Behavior Suite: Stress-Testing Multi-Turn Adherence to Custom Behavioral Policies

📅 2025-11-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
While large language models (LLMs) must align with organization-specific policies, regulations, and values in real-world deployments, existing alignment evaluations predominantly assess compliance with generic safety principles and lack systematic evaluation of diverse, customized behavioral policies. Method: We introduce PLURALISTIC BEHAVIOR SUITE (PBSUITE)—a benchmark for evaluating multi-industry, multi-turn customized behavioral alignment—built upon 300 realistic industry-specific policies and featuring a dynamic adversarial evaluation framework that simulates high-pressure, multi-turn dialogues. Contribution/Results: Experiments reveal that mainstream LLMs exhibit single-turn policy violation rates below 4%, yet violation rates surge to as high as 84% under multi-turn adversarial interaction, exposing severe degradation of policy adherence in complex dialogue. This work provides quantitative evidence of the limitations of current alignment techniques in realistic organizational settings, contributing both a dataset and an analytical framework for scenario-aware pluralistic alignment research.

📝 Abstract
Large language models (LLMs) are typically aligned to a universal set of safety and usage principles intended for broad public acceptability. Yet, real-world applications of LLMs often take place within organizational ecosystems shaped by distinctive corporate policies, regulatory requirements, use cases, brand guidelines, and ethical commitments. This reality highlights the need for rigorous and comprehensive evaluation of LLMs with pluralistic alignment goals, an alignment paradigm that emphasizes adaptability to diverse user values and needs. In this work, we present PLURALISTIC BEHAVIOR SUITE (PBSUITE), a dynamic evaluation suite designed to systematically assess LLMs' capacity to adhere to pluralistic alignment specifications in multi-turn, interactive conversations. PBSUITE consists of (1) a diverse dataset of 300 realistic LLM behavioral policies, grounded in 30 industries; and (2) a dynamic evaluation framework for stress-testing model compliance with custom behavioral specifications under adversarial conditions. Using PBSUITE, we find that leading open- and closed-source LLMs maintain robust adherence to behavioral policies in single-turn settings (less than 4% failure rates), but their compliance weakens substantially in multi-turn adversarial interactions (up to 84% failure rates). These findings highlight that existing model alignment and safety moderation methods fall short in coherently enforcing pluralistic behavioral policies in real-world LLM interactions. Our work contributes both the dataset and analytical framework to support future research toward robust and context-aware pluralistic alignment techniques.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' adherence to diverse organizational behavioral policies
Testing model compliance with custom specifications under adversarial conditions
Assessing alignment weaknesses in multi-turn interactive conversations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic evaluation suite for multi-turn policy adherence
Dataset of 300 behavioral policies across 30 industries
Stress-testing model compliance under adversarial conversation conditions