🤖 AI Summary
Large language models often struggle to maintain stable alignment in complex scenarios because embedded values can conflict with explicit instructions. This work proposes a priority graph, a formalism that captures the context-dependent preference structure of LLMs as read off their output distributions, thereby revealing inconsistencies in alignment behavior and introducing a new class of safety vulnerability termed "priority hacking." To mitigate this risk, the authors develop a runtime verification mechanism that combines external knowledge retrieval with context-grounded validation, improving robustness against manipulated contexts. Experimental results demonstrate the inherent difficulty of achieving uniformly stable alignment and validate the effectiveness of the proposed approach in defending against context-manipulation attacks.
📝 Abstract
As Large Language Models (LLMs) become more powerful and autonomous, they increasingly face conflicts and dilemmas across many scenarios. We first summarize and taxonomize these diverse conflicts. We then model an LLM's preferences among competing choices as a priority graph, in which instructions and values are nodes and edges represent context-specific priorities determined by the model's output distribution. This graph reveals that unified, stable LLM alignment is very challenging, because the graph is neither static nor necessarily consistent across contexts. It also exposes a potential vulnerability: priority hacking, in which adversaries craft deceptive contexts to manipulate the graph and bypass safety alignment. To counter this, we propose a runtime verification mechanism that enables LLMs to query external sources to ground their context and resist manipulation. While this approach enhances robustness, we acknowledge that many ethical and value dilemmas are philosophically irreducible, posing a long-term open challenge for the future of AI alignment.
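The abstract describes the priority graph only abstractly: nodes are instructions and values, and an edge's direction in a given context is determined by which option the model's output distribution favors. A minimal sketch of that data structure is below; the class name, the score table, and the two example contexts are all hypothetical stand-ins (the paper's actual construction is not given here), with scores playing the role of probabilities read off the output distribution.

```python
from collections import defaultdict

class PriorityGraph:
    """Directed graph over instruction/value nodes; an edge (a -> b) in a
    given context means the model prioritizes a over b in that context."""

    def __init__(self):
        # context -> set of (higher_priority, lower_priority) edges
        self.edges = defaultdict(set)

    def add_preference(self, context, preferred, other):
        self.edges[context].add((preferred, other))

    def prefers(self, context, a, b):
        return (a, b) in self.edges[context]

    def is_consistent_across(self, contexts, a, b):
        """True if the relative priority of a and b is the same in every context."""
        verdicts = {self.prefers(ctx, a, b) for ctx in contexts}
        return len(verdicts) == 1


# Toy data: hypothetical preference scores standing in for the probabilities
# the model's output distribution assigns to each choice in each context.
scores = {
    "benign context":    {"safety_policy": 0.9, "user_instruction": 0.4},
    "deceptive context": {"safety_policy": 0.3, "user_instruction": 0.8},
}

g = PriorityGraph()
for ctx, s in scores.items():
    hi, lo = sorted(s, key=s.get, reverse=True)
    g.add_preference(ctx, hi, lo)

# The edge flips direction between contexts: the instability (and the opening
# for priority hacking via a crafted context) that the abstract describes.
print(g.is_consistent_across(scores, "safety_policy", "user_instruction"))  # False
```

In this toy setup the deceptive context inverts the safety edge, which is exactly the failure mode a runtime verification step would try to catch by grounding the context against external sources before acting on it.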