Beyond Reactive Safety: Risk-Aware LLM Alignment via Long-Horizon Simulation

📅 2025-06-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current language models lack systematic assessment of the long-term, indirect societal harms that can arise from high-stakes social decision-making. Method: The paper proposes a macro-scale, long-horizon societal impact simulation framework that moves beyond reactive safety paradigms. It introduces the first hundred-scenario benchmark designed specifically for evaluating indirect harms, and integrates long-horizon causal modeling, risk propagation dynamics, and multi-stage impact reasoning, augmented with reinforcement learning for proactive alignment. Contribution/Results: The framework makes non-explicit, long-term societal consequences computationally tractable to model and intervene on. It achieves over 20% improvement on the new benchmark and an average win rate exceeding 70% against strong baselines on established safety benchmarks (AdvBench, SafeRLHF, WildGuardMix), substantially improving the long-term safety and trustworthiness of language models in critical domains such as policy and healthcare.
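
At its simplest, the propagation idea reduces to a discounted rollout over simulated stages. Below is a minimal Python sketch under that assumption; `SocietalState`, `simulate_long_horizon`, and the `stage_risk` callback are hypothetical names for illustration, not the paper's API, and the actual framework's causal modeling is far richer than a scalar risk score.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class SocietalState:
    # Scalar aggregate risk in [0, 1]; the paper's state is presumably richer.
    risk: float = 0.0
    trajectory: List[float] = field(default_factory=list)

def simulate_long_horizon(advice: str,
                          stage_risk: Callable[[str, int], float],
                          horizon: int = 5,
                          decay: float = 0.9) -> SocietalState:
    """Roll a piece of model advice forward through `horizon` stages,
    accumulating discounted marginal risk at each stage."""
    state = SocietalState()
    for t in range(horizon):
        # stage_risk stands in for the paper's multi-stage impact
        # reasoning, e.g. an LLM judge scoring marginal harm at stage t.
        marginal = stage_risk(advice, t)
        state.risk = min(1.0, state.risk + (decay ** t) * marginal)
        state.trajectory.append(state.risk)
    return state

# Toy run with a constant-risk stub standing in for a real judge model.
state = simulate_long_horizon("cut low-ridership bus routes to save budget",
                              stage_risk=lambda advice, t: 0.1)
print(state.trajectory)  # discounted risk accumulating toward ~0.41 over 5 stages
```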

📝 Abstract
Given the growing influence of language model-based agents on high-stakes societal decisions, from public policy to healthcare, ensuring their beneficial impact requires understanding the far-reaching implications of their suggestions. We propose a proof-of-concept framework that projects how model-generated advice could propagate through societal systems on a macroscopic scale over time, enabling more robust alignment. To assess the long-term safety awareness of language models, we also introduce a dataset of 100 indirect harm scenarios, testing models' ability to foresee adverse, non-obvious outcomes from seemingly harmless user prompts. Our approach achieves not only over 20% improvement on the new dataset but also an average win rate exceeding 70% against strong baselines on existing safety benchmarks (AdvBench, SafeRLHF, WildGuardMix), suggesting a promising direction for safer agents.
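
The win-rate figure implies a pairwise protocol: for each benchmark prompt, a judge compares the model's response against a baseline's. A minimal sketch of that computation follows, assuming an abstract `judge` callable (in practice typically an LLM-as-judge); none of these names come from the paper.

```python
from typing import Callable, List

def win_rate(prompts: List[str],
             ours: Callable[[str], str],
             baseline: Callable[[str], str],
             judge: Callable[[str, str, str], int]) -> float:
    """Fraction of prompts on which the judge prefers our model's response.

    judge(prompt, a, b) returns 1 if response `a` is the safer/better
    answer, else 0; here it is left abstract.
    """
    wins = sum(judge(p, ours(p), baseline(p)) for p in prompts)
    return wins / len(prompts)

# Toy run with a trivial length-based "judge" so the example executes.
prompts = ["Should the clinic triage purely by wait time?"]
rate = win_rate(prompts,
                ours=lambda p: "Consider downstream equity effects first.",
                baseline=lambda p: "Yes.",
                judge=lambda p, a, b: int(len(a) > len(b)))
print(f"win rate: {rate:.0%}")  # 100% on this one-prompt toy set
```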
Problem

Research questions and friction points this paper is trying to address.

Assessing long-term safety risks of LLM advice in societal systems
Improving alignment via simulation of indirect harm scenarios
Enhancing model foresight for non-obvious adverse outcomes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Long-horizon simulation for risk-aware LLM alignment
Dataset of 100 indirect harm scenarios (a possible record layout is sketched after this list)
Over 20% improvement on the new dataset, plus a >70% average win rate on existing safety benchmarks
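
As referenced above, each scenario pairs a seemingly harmless prompt with the non-obvious harms it can seed. One plausible record layout is sketched below; the fields are illustrative guesses, not the released dataset's schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class IndirectHarmScenario:
    """Hypothetical record layout for one of the 100 scenarios."""
    prompt: str              # seemingly harmless user request
    domain: str              # e.g., "policy", "healthcare"
    latent_harms: List[str]  # non-obvious downstream consequences
    horizon: int             # stages before the harm materializes

example = IndirectHarmScenario(
    prompt="Draft talking points for cutting bus routes with low ridership.",
    domain="policy",
    latent_harms=["reduced access to care for transit-dependent patients"],
    horizon=3,
)
```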