AI Summary
AI risk detection is becoming increasingly challenging as model capabilities advance and alignment-faking behaviors emerge. Method: This paper introduces implicit values as early-warning signals, proposing the LitmusValues evaluation framework and the AIRiskDilemmas moral-dilemma dataset. It quantifies LLMs' preference orderings under value-conflict scenarios to uncover latent value priorities. Drawing on the psychology of human values, it employs value-guided prompting, multi-turn consistency modeling, adversarial moral evaluation, and cross-benchmark analysis with HarmBench. Contribution/Results: The framework accurately identifies risky behaviors on AIRiskDilemmas and generalizes to unseen risk types in HarmBench. Critically, it demonstrates that ostensibly neutral values (e.g., Care) can robustly predict emergent risks such as power-seeking. This establishes value prioritization as a reliable, psychologically grounded early indicator of AI misalignment.
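To make the value-conflict elicitation concrete, here is a minimal sketch of how a single dilemma could be posed to a model and scored as a pairwise value comparison. Everything here is illustrative: `Dilemma`, `query_model`, and the prompt format are hypothetical stand-ins, not the paper's actual schema or prompts, and `query_model` is stubbed with a random choice so the sketch runs without an API key.

```python
import random
from dataclasses import dataclass

@dataclass
class Dilemma:
    """A dilemma whose two actions each uphold a different value class.

    Hypothetical schema for illustration; not the AIRiskDilemmas format.
    """
    scenario: str
    action_a: str  # action upholding value_a
    action_b: str  # action upholding value_b
    value_a: str   # e.g., "Care"
    value_b: str   # e.g., "Truthfulness"

def query_model(prompt: str) -> str:
    """Stub for an LLM call; swap in a real API client here."""
    return random.choice(["A", "B"])

def elicit_choice(dilemma: Dilemma) -> tuple[str, str]:
    """Pose the dilemma and return (winning_value, losing_value)."""
    prompt = (
        f"{dilemma.scenario}\n"
        f"A: {dilemma.action_a}\n"
        f"B: {dilemma.action_b}\n"
        "Which action do you take? Answer with A or B only."
    )
    answer = query_model(prompt).strip().upper()
    if answer.startswith("A"):
        return dilemma.value_a, dilemma.value_b
    return dilemma.value_b, dilemma.value_a
```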
Abstract
Detecting AI risks becomes more challenging as stronger models emerge and find novel methods, such as Alignment Faking, to circumvent detection attempts. Inspired by how risky behaviors in humans (i.e., illegal activities that may hurt others) are sometimes guided by strongly held values, we believe that identifying values within AI models can serve as an early warning system for AI's risky behaviors. We create LitmusValues, an evaluation pipeline that reveals AI models' priorities across a range of AI value classes. We then collect AIRiskDilemmas, a diverse collection of dilemmas that pit values against one another in scenarios relevant to AI safety risks such as Power Seeking. By measuring an AI model's value prioritization from its aggregate choices, we obtain a self-consistent set of predicted value priorities that uncover potential risks. We show that values in LitmusValues (including seemingly innocuous ones like Care) can predict both seen risky behaviors in AIRiskDilemmas and unseen risky behaviors in HarmBench.
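The abstract says aggregate choices yield a self-consistent set of value priorities, but the aggregation procedure is not spelled out here. One plausible instantiation (an assumption on our part, not necessarily the authors' method) is a Bradley-Terry model: treat each dilemma outcome as a pairwise win of one value over another and fit per-value strengths, whose ordering gives the predicted priorities. Names such as `fit_bradley_terry` and the toy outcome list are hypothetical.

```python
from collections import defaultdict

def fit_bradley_terry(outcomes: list[tuple[str, str]],
                      n_iter: int = 200) -> list[tuple[str, float]]:
    """Fit Bradley-Terry strengths from (winner_value, loser_value) pairs.

    Returns values sorted from highest to lowest inferred priority.
    """
    wins = defaultdict(float)   # total wins per value
    pairs = defaultdict(float)  # comparison counts per unordered pair
    values = set()
    for winner, loser in outcomes:
        wins[winner] += 1.0
        pairs[frozenset((winner, loser))] += 1.0
        values.update((winner, loser))

    strength = {v: 1.0 for v in values}
    for _ in range(n_iter):     # standard minorize-maximize updates
        new = {
            v: wins[v] / sum(
                pairs[frozenset((v, u))] / (strength[v] + strength[u] + 1e-9)
                for u in values if u != v
            )
            for v in values
        }
        total = sum(new.values())  # rescale to keep strengths comparable
        strength = {v: s * len(values) / total for v, s in new.items()}
    return sorted(strength.items(), key=lambda kv: -kv[1])

# Toy usage: "Care" wins most comparisons, so it should rank first.
ranking = fit_bradley_terry([
    ("Care", "Obedience"), ("Care", "Truthfulness"),
    ("Truthfulness", "Obedience"), ("Care", "Obedience"),
])
print(ranking)
```

Simple win-rate averaging would also work, but Bradley-Terry tolerates uneven numbers of comparisons per value pair and produces a single transitive ordering, which matches the "self-consistent" framing in the abstract.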