🤖 AI Summary
To address the challenge of aligning large language model (LLM) agents with stakeholder preferences in long-horizon tasks, this paper proposes ARCANE, a runtime-configurable and interpretable alignment framework. It dynamically models preferences as natural-language scoring rubrics and formalizes alignment as a multi-agent collaboration problem. Methodologically, it introduces a utility-theoretic rubric-learning mechanism that enables real-time preference updates without fine-tuning, applying a regularized Group-Sequence Policy Optimization (GSPO) procedure that balances interpretability, faithfulness, and computational efficiency. The framework is trained and validated on 219 human-annotated rubrics derived from the GDPVal benchmark. Experiments show that the learned rubrics are compact and human-readable, and that they enable flexible trade-offs across dimensions such as correctness and conciseness, significantly enhancing the transparency, auditability, and adaptability of the alignment process.
📄 Abstract
As agents based on large language models are increasingly deployed on long-horizon tasks, maintaining their alignment with stakeholder preferences becomes critical. Effective alignment in such settings requires reward models that are interpretable, so that stakeholders can understand and audit model objectives. Moreover, reward models must be capable of steering agents at interaction time, allowing preference shifts to be incorporated without retraining. We introduce ARCANE, a framework that frames alignment as a multi-agent collaboration problem and dynamically represents stakeholder preferences as natural-language rubrics: weighted sets of verifiable criteria that can be generated on-the-fly from task context. Inspired by utility theory, we formulate rubric learning as a reconstruction problem and apply a regularized Group-Sequence Policy Optimization (GSPO) procedure that balances interpretability, faithfulness, and computational efficiency. Using a corpus of 219 labeled rubrics derived from the GDPVal benchmark, we evaluate ARCANE on challenging tasks requiring multi-step reasoning and tool use. The learned rubrics produce compact, legible evaluations and enable configurable trade-offs (e.g., correctness vs. conciseness) without retraining. Our results show that rubric-based reward models offer a promising path toward interpretable, test-time adaptive alignment for complex, long-horizon AI systems.
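To illustrate the core idea of a rubric as a weighted set of verifiable criteria whose weights can be adjusted at interaction time, here is a minimal sketch. The `Criterion` structure, the field names, and the example rubric are all hypothetical illustrations, not the paper's actual data model; in ARCANE the criteria are generated and verified by LLM agents rather than hand-coded booleans.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    # Hypothetical structure: a natural-language, verifiable criterion
    # paired with a stakeholder-configurable importance weight.
    description: str
    weight: float

def rubric_score(criteria: list[Criterion], satisfied: list[bool]) -> float:
    """Weighted fraction of satisfied criteria: a scalar reward in [0, 1]."""
    total = sum(c.weight for c in criteria)
    met = sum(c.weight for c, ok in zip(criteria, satisfied) if ok)
    return met / total

# Illustrative rubric trading off correctness against conciseness.
rubric = [
    Criterion("Final answer is factually correct", weight=0.7),
    Criterion("Response is under 100 words", weight=0.3),
]

# An agent output judged correct but verbose:
score = rubric_score(rubric, [True, False])  # 0.7

# A preference shift toward conciseness needs only a reweighting,
# not any retraining of the underlying model:
rubric[0].weight, rubric[1].weight = 0.4, 0.6
score_after_shift = rubric_score(rubric, [True, False])  # 0.4
```

Because the reward is a transparent weighted sum over legible criteria, stakeholders can audit exactly why an output scored as it did, which is the interpretability property the abstract emphasizes.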